**arXiv:** 2310.03340 | **Published:** 2023-10-05T06:47:43Z | **Link:** http://arxiv.org/abs/2310.03340v1
**Authors:** Ashish Gupta, Sugata Mandal

# Constant rank subspaces of alternating bilinear forms from Galois theory
###### Abstract.
Let \(L/K\) be a cyclic extension of degree \(n=2m\). It is known that the space \(\operatorname{Alt}_{K}(L)\) of alternating \(K\)-bilinear forms (skew-forms) on \(L\) decomposes into a direct sum of \(K\)-subspaces \(A^{\sigma^{i}}\) indexed by the elements of \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). It is also known that the components \(A^{\sigma^{i}}\) can have nice constant-rank properties. We enhance and enrich these constant-rank results and show that the component \(A^{\sigma}\) often decomposes directly into a sum of constant rank subspaces, that is, subspaces all of whose non-zero skew-forms have a fixed rank \(r\). In particular, this is always true when \(-1\notin L^{2}\). As a result we deduce a decomposition of \(\operatorname{Alt}_{K}(L)\) into subspaces of constant rank in several interesting situations. We also establish that a subspace of dimension \(\frac{n}{2}\) all of whose nonzero skew-forms are non-degenerate can always be found in \(A^{\sigma^{i}}\) where \(\sigma^{i}\) has order divisible by \(2\).
**Keywords.** alternating form, skew-symmetric form, constant rank space, cyclic extension
**2020 Math. Subj. Class.**: 12F05, 12F10, 15A63
## 1. Introduction
Let \(K\) be a field of characteristic other than two and \(\operatorname{Alt}_{K}(V)\) denote the space of all alternating bilinear forms (skew-forms) on a \(K\)-space \(V\) of dimension \(n\). Suppose \(K\) admits a Galois extension \(L\) of degree \(n\). Taking the \(n\)-dimensional \(K\)-space \(L\) as a model for \(V\) it was shown in [6] that ideas from Galois Theory can be fruitfully applied for studying skew-forms on \(V\). Notably, this approach sheds light on the subspaces of \(\operatorname{Alt}_{K}(V)\) whose nonzero skew-forms all have the same rank equal to \(k\), say. Such "\(k\)-subspaces" besides being interesting in their own right play an important role in coding theory (see [9],[8]). Of particular importance are the \(n\)-subspaces of \(\operatorname{Alt}_{K}(V)\), that is, subspaces all of whose nonzero skew forms are non-degenerate.
Replacing \(V\) by the \(K\)-space \(L\), we begin with some definitions and facts given in [6, Lemma 2]. For each \(\sigma\in G:=\operatorname{Gal}(L/K)\) and \(b\in L\) we may define the skew-form
\[f_{b,\sigma}(x,y)=\operatorname{Tr}_{K}^{L}(b(x\sigma(y)-\sigma(x)y)),\qquad \forall x,y\in L. \tag{1.1}\]
where \(\mathrm{Tr}^{L}_{K}:L\to K\) is the Galois-theoretic trace map defined by
\[\mathrm{Tr}^{L}_{K}(a)=\sum_{\sigma\in\mathrm{Gal}(L/K)}\sigma(a),\quad\forall a \in L.\]
With each \(\sigma\in G\) we can thus associate a subspace \(A^{\sigma}\) of \(\mathrm{Alt}_{K}(L)\) defined as \(A^{\sigma}:=\{f_{b,\sigma}:b\in L\}\). Each \(A^{\sigma}\) has dimension \(n\) unless \(\sigma\) has order \(2\) (see [6, Theorem 1]). It was shown in [6] that \(\mathrm{Alt}_{K}(L)\) decomposes as a direct sum of the spaces \(A^{\sigma}\) with \(\sigma\) ranging over the elements of the Galois group \(G\) (see Theorems 1 and 2 below).
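As a small illustration of these definitions (our example, not taken from [6]), consider the quadratic case \(K=\mathbb{Q}\), \(L=\mathbb{Q}(i)\), with \(\sigma\) complex conjugation. Writing \(x=x_{1}+x_{2}i\), \(y=y_{1}+y_{2}i\) and \(b=b_{1}+b_{2}i\),
\[x\sigma(y)-\sigma(x)y=2i(x_{2}y_{1}-x_{1}y_{2}),\qquad f_{b,\sigma}(x,y)=\operatorname{Tr}^{L}_{K}\bigl(2ib(x_{2}y_{1}-x_{1}y_{2})\bigr)=-4b_{2}(x_{2}y_{1}-x_{1}y_{2}).\]
Thus \(f_{b,\sigma}\) depends only on \(b_{2}\), the space \(A^{\sigma}\) has dimension \(1=n/2\), and every nonzero form in it is non-degenerate, consistent with the behaviour of involutions described in Theorem 2 below.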
Let \(\mathrm{ord}(\sigma)\) denote the order of \(\sigma\in G\). Interestingly, for \(n\) odd, each \(A^{\sigma}\) is an \(n-n/\mathrm{ord}(\sigma)\)-subspace (Theorem 1). However, when \(n\) is even the situation is less clear: in this case the subspace \(A^{\sigma}\) is known to have a constant-rank property only when \(\sigma\) is either an involution or has odd order (see Section 2). When \(\sigma\) has even order it is only known that a skew-form \(f_{b,\sigma}\in A^{\sigma}\) may have rank either \(n\) or \(n-2n/\mathrm{ord}(\sigma)\), and that both of these values are attained as ranks of suitable skew-forms in \(A^{\sigma}\). We study this last case more closely here and show that there are constant-rank subspaces in \(A^{\sigma}\). In fact, \(A^{\sigma}\) always has an \(n\)-subspace of dimension \(\frac{n}{2}\) and moreover decomposes as a direct sum of \(k\)-subspaces for suitable \(k\) (see Theorems A-D).
**Theorem 1** ([6]).: _Suppose that \(n=[L:K]\) is odd and the Galois group \(G=\{1,\sigma_{1},\cdots,\sigma_{m},\sigma_{1}^{-1},\cdots,\sigma_{m}^{-1}\}\) where \(m=(n-1)/2\). Then there is a direct decomposition_
\[\mathrm{Alt}_{K}(L)=A^{1}\oplus A^{2}\oplus\cdots\oplus A^{m}, \tag{1.2}\]
_where \(A^{i}:=A^{\sigma_{i}}\) has dimension \(n\) (\(1\leq i\leq m\)). Moreover, if \(\mathrm{ord}(\sigma_{i})=2r_{i}+1\), the non zero skew-forms in \(A^{i}\) all have rank \(n-\frac{n}{2r_{i}+1}\)._
**Theorem 2**.: ([6]) _Suppose that \(n=[L:K]\) is even and the Galois group_
\[G=\{1,\tau_{1},\cdots,\tau_{k},\sigma_{1},\cdots,\sigma_{m},\sigma_{1}^{-1}, \cdots,\sigma_{m}^{-1}\},\]
_where \(\{\tau_{1},\tau_{2},\cdots,\tau_{k}\}\) are the involutions of \(G\). Then there is a direct decomposition_
\[\mathrm{Alt}_{K}(L)=B^{1}\oplus B^{2}\oplus\cdots\oplus B^{k}\oplus A^{1} \oplus A^{2}\oplus\cdots\oplus A^{m}. \tag{1.3}\]
_where \(B^{i}:=A^{\tau_{i}}\) is an \(n\)-subspace of dimension \(n/2\) for all \(1\leq i\leq k\) and \(A^{j}:=A^{\sigma_{j}}\) (\(1\leq j\leq m\)) has dimension \(n\). Moreover if \(\mathrm{ord}(\sigma_{j})\) is odd then \(A^{j}\) is an \(n-n/\mathrm{ord}(\sigma_{j})\)-subspace of dimension \(n\)._
If \(L/K\) is a cyclic Galois extension of degree \(n\) with \(G=\mathrm{Gal}(L/K)=\langle\sigma\rangle\) we define \(A^{i}:=A^{\sigma^{i}}\). Thus \(A^{i}=\{f_{b,\sigma^{i}}:b\in L\}\). If \(n\) is even then there is a unique involution
\(\tau_{1}=\sigma^{n/2}\) and in this case we denote \(B^{1}:=A^{\tau_{1}}=\{f_{b,\sigma^{n/2}}:b\in L\}\). Then the decomposition (1.3) becomes
\[\operatorname{Alt}_{K}(L)=B^{1}\oplus A^{1}\oplus A^{2}\oplus\cdots\oplus A^{m}, \tag{1.4}\]
**Theorem A**.: _Let \(K\) be a field and \(n=2k\), where \(k\geq 1\) is odd. Let \(L\) be any cyclic extension of \(K\) of degree \(n\) with Galois group \(G=\langle\sigma\rangle\). Then_
\[A^{1}=\mathcal{U}_{1}\oplus\mathcal{V}_{1}, \tag{1.5}\]
_where \(\mathcal{U}_{1}\) is an \(n\)-subspace of dimension \(k\) and \(\mathcal{V}_{1}\) is an \((n-2)\)-subspace of dimension \(k\)._
In view of Theorem A, in the following theorems we focus on the case where \(n\) is divisible by \(4\).
**Theorem B**.: _Suppose \(n=2^{\alpha}k\) where \(\alpha\geq 2\) and \(k\) is odd. Let \(K\) be an algebraic number field such that \(-1\) is not a square in \(K\). Then there exists a cyclic extension \(L\) of \(K\) of degree \(n\) with the Galois group \(G=\langle\sigma\rangle\) such that_
\[A^{1}=\mathcal{E}_{1}\oplus\cdots\oplus\mathcal{E}_{\alpha-1}\oplus\mathcal{ V}_{1}\oplus\mathcal{V}_{2}, \tag{1.6}\]
_where_
* \(\mathcal{E}_{i}\) _is an_ \(n\)_-subspace of dimension_ \(n/2^{i}\) _for_ \(1\leq i\leq\alpha-1\)_,_
* \(\mathcal{V}_{j}\) _is an_ \((n-2)\)_-subspace of dimension_ \(k\) _for_ \(1\leq j\leq 2\)_._
**Theorem C**.: _Let \(K\) be a finite field with \(q\) elements such that \(-1\) is not a square in \(K\). Let \(q+1=2^{a}l\) (\(l\) odd) where \(a\geq 1\) and \(n=2^{\alpha}k\) (\(k\) odd) where \(\alpha\geq 2\). Suppose \(L\) is a cyclic extension of \(K\) of degree \(n\) with \(\operatorname{Gal}(L/K)=\langle\sigma_{f}\rangle\) where \(\sigma_{f}\) is the Frobenius map of \(L\) defined by \(\sigma_{f}:b\to b^{q}\)._
* _If_ \(\alpha\leq a+1\) _then_ \[A^{1}=\mathcal{V}_{1}\oplus\mathcal{V}_{2}\oplus\mathcal{E}_{1}\oplus\cdots \oplus\mathcal{E}_{\alpha-1},\] (1.7) _where_
* \(\mathcal{E}_{i}\) _is an_ \(n\)_-subspace of dimension_ \(n/2^{i}\) _for_ \(1\leq i\leq\alpha-1\)_,_
* \(\mathcal{V}_{j}\) _is an_ \((n-2)\)_-subspace of dimension_ \(k\) _for_ \(1\leq j\leq 2\)_._
* _If_ \(\alpha>a+1\) _and_ \(l=1\)_, that is,_ \(q=2^{a}-1\)_, then_ \[A^{1}=\mathcal{V}_{1}\oplus\mathcal{V}_{2}\oplus\mathcal{E}_{1}\oplus\cdots \oplus\mathcal{E}_{\alpha-1},\] (1.8) _where_
1. \(\mathcal{E}_{i}\) _is an_ \(n\)_-subspace of dimension_ \(n/2^{i}\) _for_ \(1\leq i\leq a\) _and an_ \((n-2)\)_-subspace of dimension_ \(n/2^{i}\) _for_ \(a+1\leq i\leq\alpha-1\)_,_
2. \({\mathcal{V}}_{j}\) _is an_ \((n-2)\)_-subspace of dimension_ \(k\) _for_ \(1\leq j\leq 2\)_._
**Theorem D**.: _Let \(p\) be a prime and \(K=\mathbb{Q}_{p}\) be the \(p\)-adic completion of \(\mathbb{Q}\) such that \(-1\) is not a square in \(K\). Let \(p+1=2^{a}l\) (\(l\) odd) where \(a\geq 1\) and \(n=2^{\alpha}k\) (\(k\) odd) where \(2\leq\alpha\leq a+1\). Then there exists a cyclic extension \(L\) of \(K\) of degree \(n\) such that the decomposition (1.7) holds._
## 2. Skew forms and Galois extensions
Retaining the notation of the previous section we now collect some basic results from [6] concerning the application of Galois theory to the study of some crucial properties of bilinear forms over \(K\). In the following \(L/K\) is a (not necessarily cyclic) Galois extension and \(1\neq\sigma\in\operatorname{Gal}(L/K)\) is arbitrary.
**Lemma 2.1**.: ([6, Lemma 2]) _Let \(f=f_{b,\sigma}\) be an alternating bilinear form as defined above with \(b\neq 0\) and let \(F\) be the fixed field of the automorphism \(\sigma^{2}\). If \(\sigma(b)b^{-1}\) is expressible in the form \(\sigma^{2}(c)c^{-1}\) for some \(c\in L^{\times}\) then \(\operatorname{rk}(f_{b,\sigma})=n-n/[L:F]\). Otherwise \(\operatorname{rk}(f_{b,\sigma})=n\)._
**Lemma 2.2**.: ([6, Lemma 4]) _Suppose that the automorphism \(\sigma\) has even multiplicative order \(2r\), say. Then there exist elements \(b\in L^{\times}\) such that the equation \(\sigma(b)b^{-1}=\sigma^{2}(c)c^{-1}\) has no solution for all \(c\in L^{\times}\)._
**Remark 2.1**.: _If \(\sigma\) is not an involution then the map \(b\to f_{b,\sigma}\) defines an isomorphism of \(K\)-spaces between \(A^{\sigma}\) and \(L\)[6, Theorem 1]._
**Lemma 2.3**.: ([6, Lemma 3]) _Suppose that the automorphism \(\sigma\) has odd multiplicative order \(2r+1>1\), say. Then, if \(b\neq 0\), the rank of the skew-form \(f=f_{b,\sigma}\) is \(n-n/(2r+1)\)._
**Lemma 2.4**.: ([6, Lemma 4]) _Suppose that the automorphism \(\sigma\) has even multiplicative order \(2r\geq 2\), say. Then, if \(b\neq 0\), the rank of the skew-form \(f=f_{b,\sigma}\) is either \(n-\frac{n}{r}\) or \(n\)._
## 3. Preliminary results
Our aim in this section is to establish certain facts which will be found useful in the subsequent sections and are also interesting in their own right. Recall that if \(K\subseteq F\subseteq L\) is an intermediate subfield and \(a\in L\) then the \(L/F\)-norm \(N_{L/F}(a)\) of \(a\) is defined as \(N_{L/F}(a)=\prod_{\theta\in\operatorname{Gal}(L/F)}\theta(a)\).
**Notation 1**.: _Throughout this section \(L/K\) denotes a cyclic extension with Galois group \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). For the sake of convenience in what follows we shall denote the subfield \(L^{\langle\sigma^{i}\rangle}\) as \(L_{i}\)._
We begin by noting the following restatement of the degeneracy criterion Lemma 2.1.
**Proposition 3.1**.: _Let \(b\in L\). Then the skew-form \(f_{b,\sigma}\) is degenerate if and only if_
\[N_{L/L_{2}}(\sigma(b)/b)=1, \tag{3.1}\]
_that is, \(f_{b,\sigma}\) is degenerate if and only if_
\[N_{L/L_{2}}(b)=b\sigma^{2}(b)\cdots\sigma^{n-2}(b)\in K. \tag{3.2}\]
Proof.: By Lemma 2.1, the skew form \(f_{b,\sigma}\) is degenerate if and only if \(\sigma(b)/b=\sigma^{2}(c)/c\) for some \(c\in L\). The first assertion is now clear in view of the Hilbert Theorem 90. Moreover the condition \(N_{L/L_{2}}(\sigma(b)/b)=1\) is easily seen to be equivalent to the product \(b\sigma^{2}(b)\cdots\sigma^{n-2}(b)\) being \(\sigma\)-invariant.
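Explicitly, since \(\operatorname{Gal}(L/L_{2})=\langle\sigma^{2}\rangle\) has order \(n/2\), writing \(w:=b\sigma^{2}(b)\cdots\sigma^{n-2}(b)\) we have
\[N_{L/L_{2}}\left(\frac{\sigma(b)}{b}\right)=\prod_{j=0}^{n/2-1}\sigma^{2j}\left(\frac{\sigma(b)}{b}\right)=\frac{\sigma(b)\sigma^{3}(b)\cdots\sigma^{n-1}(b)}{b\,\sigma^{2}(b)\cdots\sigma^{n-2}(b)}=\frac{\sigma(w)}{w},\]
so (3.1) holds precisely when \(w\) is \(\sigma\)-invariant, that is, when \(w\in K\), which is (3.2).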
Suppose that \(\sigma^{i}\) is not an involution. By Lemma 2.1 the skew-form \(f_{b,\sigma^{i}}\in A^{i}\subseteq\operatorname{Alt}_{K}(L)\) is degenerate if and only if \(\sigma^{i}(b)/b=\sigma^{2i}(c)/c\) for some \(c\in L^{\times}\). As \(\sigma^{2i}\) is a generator for \(\operatorname{Gal}(L/L_{2i})\), in view of Hilbert Theorem 90, \(f_{b,\sigma^{i}}\) is degenerate if and only if \(N_{L/L_{2i}}(\sigma^{i}(b)/b)=1\). A glance at Proposition 3.1 above shows that this is precisely the condition for the skew-form \(f^{\sim}_{b,\sigma^{i}}\in\operatorname{Alt}_{L_{i}}(L)\) defined by
\[f^{\sim}_{b,\sigma^{i}}(x,y)=\operatorname{Tr}^{L}_{L_{i}}(b(x\sigma^{i}(y)-\sigma^{i}(x)y)),\qquad\forall x,y\in L.\]
to be degenerate (we write \(f^{\sim}_{b,\sigma^{i}}\) instead of \(f_{b,\sigma^{i}}\) to emphasize the fact that we are now considering \(L\) as \(L_{i}\)-space).
Let us write \(A^{\sim 1}:=\{f^{\sim}_{b,\sigma^{i}}\mid b\in L\}\). In view of Remark 2.1 we then have a \(K\)-isomorphism \(A^{i}\cong L\) via \(f_{b,\sigma^{i}}\mapsto b\) and an \(L_{i}\)-isomorphism \(L\cong A^{\sim 1}\) via \(b\mapsto f^{\sim}_{b,\sigma^{i}}\). The composition of these maps clearly yields a \(K\)-isomorphism \(A^{i}\cong A^{\sim 1}\). The following is then clear.
**Remark 3.1**.: _With respect to the above isomorphism if an \(L_{i}\)-subspace \(\mathcal{W}\leq A^{\sim 1}\) has all its non-zero skew forms non-degenerate (or all its non-zero skew forms degenerate) then the same is true for the corresponding (K-) subspace in \(A^{i}\)._
**Lemma 3.1**.: _Let \(n=2^{\alpha}k\) where \(\alpha\geq 2\) and \(k\) is odd. Suppose that \(L\) is a cyclic extension of a field \(K\) of degree \(n\) with Galois group \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). Then the following hold._
* _For_ \(1\leq i\leq\alpha-1\) _the subspace_ \(E_{i}:=\{b\in L:\sigma^{n/2^{i}}(b)=-b\}\leq L\) _has dimension_ \(n/2^{i}\)_._
* _Let_ \(V_{1}:=\{b\in L:\sigma^{k}(b)=b\}\) _and_ \(V_{2}:=\{b\in L:\sigma^{k}(b)=-b\}\)_. Then_ \(\dim(V_{1})=\dim(V_{2})=k\)_._
Proof.: Let \(1\leq i\leq\alpha-1\). As the order of the automorphism \(\sigma^{n/2^{i}}\) is \(2^{i}\) so the fixed field \(L_{n/2^{i}}\) of \(\sigma^{n/2^{i}}\) has dimension \(n/2^{i}\) over \(K\). We can view \(\sigma^{n/2^{i}}\) as a \(K\)-linear map of \(L\). By the Dedekind independence theorem the minimal polynomial of \(\sigma^{n/2^{i}}\) is \(x^{2^{i}}-1\). Let \(j_{i}\in L\) be an eigenvector of \(\sigma^{n/2^{i}}\) corresponding to the eigenvalue \(-1\). It is easily checked that the corresponding eigenspace is \(E_{i}:=j_{i}L_{n/2^{i}}\). It follows that \(\dim(E_{i})=n/2^{i}\). The proof of (ii) is similar.
**Lemma 3.2**.: _Let \(n=2^{\alpha}k\) where \(\alpha\geq 2\) and \(k\) is odd. Suppose that \(L\) is a cyclic extension of a field \(K\) of degree \(n\) with Galois group \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). Then \(\forall b_{i}\in E_{i}\setminus\{0\}\)_
\[N_{L/L_{2}}(b_{i})=(-1)^{n/2^{2}}w_{i}^{2^{i}}, \tag{3.3}\]
_where \(w_{i}:=b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2^{i}-2}(b_{i})\). Moreover, \(f_{b_{i},\sigma}\) is degenerate if and only if \(\eta_{i}:=\sigma(w_{i})/w_{i}\) is a \(2^{i}\)-th root of unity in \(L\) such that \(\sigma(\eta_{i})=-{\eta_{i}}^{-1}\). In particular, \(f_{b_{1},\sigma}\) is non-degenerate for all \(b_{1}\in E_{1}\setminus\{0\}\)._
Proof.: In view of the chain of inclusions
\[L\supset L_{n/2}\supset\cdots\supset L_{n/2^{i-1}}\supset E_{i},\]
we have for \(b_{i}\in E_{i}\setminus\{0\}\)
\[N_{L/L_{2}}(b_{i}) =b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n-2}(b_{i})\] \[=\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2-2}(b_{i})\right) \left(\sigma^{n/2}(b_{i})\sigma^{n/2+2}(b_{i})\cdots\sigma^{n/2+n/2-2}(b_{i})\right)\] \[=\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2-2}(b_{i})\right)^{2}\] \[=\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/4-2}(b_{i})\right)^{2}\] \[\qquad\qquad\qquad\vdots\] \[=\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2^{i-1}-2}(b_{i}) \right)^{2^{i-1}}\] \[=\left[\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2^{i}-2}(b_{ i})\right)\left(\sigma^{n/2^{i}}(b_{i})\sigma^{n/2^{i}+2}(b_{i})\cdots\sigma^{n/2^{ i}+n/2^{i}-2}(b_{i})\right)\right]^{2^{i-1}}\] \[=\left[\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2^{i}-2}(b_{ i})\right)\left((-b_{i})(-\sigma^{2}(b_{i}))\cdots(-\sigma^{n/2^{i}-2}(b_{i})) \right)\right]^{2^{i-1}}\] \[=\left[(-1)^{n/2^{i+1}}\left(b_{i}\sigma^{2}(b_{i})\cdots\sigma^{ n/2^{i}-2}(b_{i})\right)^{2}\right]^{2^{i-1}}\] \[=(-1)^{n/2^{2}}[b_{i}\sigma^{2}(b_{i})\cdots\sigma^{n/2^{i}-2}(b_ {i})]^{2^{i}}\] \[=(-1)^{n/2^{2}}w_{i}^{2^{i}}.\]
Then
\[\frac{N_{L/L_{2}}(\sigma(b_{i}))}{N_{L/L_{2}}(b_{i})}=\left(\frac{(\sigma((-1) ^{n/2^{2}}w_{i}))}{(-1)^{n/2^{2}}w_{i}}\right)^{2^{i}}=\left(\frac{\sigma(w_{i })}{w_{i}}\right)^{2^{i}}=\eta_{i}^{2^{i}}.\]
Set \(\eta_{i}:=\frac{\sigma(w_{i})}{w_{i}}\). By Proposition 3.1, \(f_{b_{i},\sigma}\) is degenerate if and only if \(\eta_{i}\) is a \(2^{i}\)-th root of unity. Moreover,
\[-w_{i}=\sigma^{2}(w_{i})=\sigma(\eta_{i}w_{i})=\sigma(\eta_{i})\eta_{i}w_{i},\]
whence \(\sigma(\eta_{i})\eta_{i}=-1\), that is, \(\sigma(\eta_{i})=-\eta_{i}^{-1}\). The last assertion of the lemma is now clear.
**Lemma 3.3**.: _Let \(n=2^{\alpha}k\) where \(\alpha\geq 2\) and \(k\) is odd. Suppose that \(L\) is a cyclic extension of \(K\) of degree \(n\) with \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). Then \(\forall b\in V_{1}\cup V_{2}\), \(f_{b,\sigma}\) is degenerate._
Proof.: Case I: Let us first assume that \(k>1\). Then the field \(V_{1}=L_{k}\) has dimension \(k\) over \(K\). Again by Dedekind's independence theorem it follows that the minimal polynomial of \(\sigma^{k}\) is \(x^{2^{\alpha}}-1\). Let \(j_{\alpha}\) be an eigenvector of \(\sigma^{k}\) corresponding to the eigenvalue \(-1\) and it is easily checked that the corresponding eigenspace is \(V_{2}=j_{\alpha}L_{k}\)
Thus \(\dim(V_{1})=\dim(V_{2})=k\). Note that \(V_{1}\) and \(V_{2}\) are \(\sigma\)-invariant. Again in view of the inclusions
\[L\supset L_{n/2}\supset\cdots\supset L_{n/2^{\alpha-1}}=L_{2k}\supset L_{k}=V_ {1},\]
we have \(\forall b\in V_{1}\setminus\{0\}\),
\[N_{L/L_{2}}(b) =b\sigma^{2}(b)\cdots\sigma^{n-2}(b)\] \[=\left(b\sigma^{2}(b)\cdots\sigma^{n/2^{\alpha-1}-2}(b)\right)^{2 ^{\alpha-1}}\] \[=\left(b\sigma^{2}(b)\cdots\sigma^{2k-2}(b)\right)^{2^{\alpha-1}}\] \[=\left[\left(b\sigma^{2}(b)\cdots\sigma^{k-1}(b)\right)\left( \sigma^{k+1}(b)\cdots\sigma^{2k-2}(b)\right)\right]^{2^{\alpha-1}}\] \[=\left[\left(b\sigma^{2}(b)\cdots\sigma^{k-1}(b)\right)\left( \sigma(b)\cdots\sigma^{k-2}(b)\right)\right]^{2^{\alpha-1}}\] \[=\left[b\sigma(b)\sigma^{2}(b)\cdots\sigma^{k-1}(b)\right]^{2^ {\alpha-1}}\] \[=N_{L/L_{2}}(\sigma(b)).\]
On the other hand in view of the inclusions
\[L\supset L_{n/2}\supset\cdots\supset L_{n/2^{\alpha-1}}=L_{2k}\supset j_{ \alpha}L_{k}=V_{2},\]
we have \(\forall b\in V_{2}\setminus\{0\}\),
\[N_{L/L_{2}}(b) =b\sigma^{2}(b)\cdots\sigma^{n-2}(b)\] \[=\left(b\sigma^{2}(b)\cdots\sigma^{n/2^{\alpha-1}-2}(b)\right)^{ 2^{\alpha-1}}\] \[=\left(b\sigma^{2}(b)\cdots\sigma^{2k-2}(b)\right)^{2^{\alpha-1}}\] \[=\left[\left(b\sigma^{2}(b)\cdots\sigma^{k-1}(b)\right)\left( \sigma^{k+1}(b)\cdots\sigma^{2k-2}(b)\right)\right]^{2^{\alpha-1}}\] \[=\left[\left(b\sigma^{2}(b)\cdots\sigma^{k-1}(b)\right)\left((- \sigma(b))\cdots(-\sigma^{k-2}(b)\right)\right]^{2^{\alpha-1}}\] \[=\left[b\sigma(b)\sigma^{2}(b)\cdots\sigma^{k-1}(b)\right]^{2^{ \alpha-1}}\] \[=N_{L/L_{2}}(\sigma(b)).\]
Consequently \(N_{L/L_{2}}(\sigma(b)/b)=1\) and thus by Proposition 3.1, \(\forall b\in V_{1}\cup V_{2}\), \(f_{b,\sigma}\) is degenerate.
Case II: We now assume that \(k=1\) (thus \(n=2^{\alpha}\) and \(L_{2k}=L_{2}\)). Then \(V_{1}:=K\) and it is easily checked that \(V_{2}:=j_{\alpha}K\), where \(j_{\alpha}\) is an eigenvector of \(\sigma\) corresponding to the eigenvalue \(-1\). Thus \(\dim(V_{1})=\dim(V_{2})=1\). Clearly if \(b\in L_{2}^{\times}\) then \(N_{L/L_{2}}(b)=b^{2^{\alpha-1}}\) and \(N_{L/L_{2}}(\sigma(b))=\left(\sigma(b)\right)^{2^{\alpha-1}}\) as \(L_{2}\) is \(\sigma\)-invariant. By definition if \(b\in V_{1}\cup V_{2}\) then \(\sigma(b)=\pm b\) and in either case
\[N_{L/L_{2}}\left(\frac{\sigma(b)}{b}\right)=\left(\frac{\sigma(b)}{b}\right)^{ 2^{\alpha-1}}=1.\]
Thus by Proposition 3.1 if \(b\in V_{1}\cup V_{2}\), \(f_{b,\sigma}\) is degenerate.
## 4. Proofs of Theorems A and B
### Proof of Theorem A
Proof.: Let \(V:=L_{k}\) and \(0\neq v\in V\). Clearly
\[\sigma^{2}(v),\sigma^{4}(v),\cdots,\sigma^{2k-2}(v)\in V.\]
It follows that
\[N_{L/L_{2}}(v)\in L_{2}\cap V=L_{2}\cap L_{k}=K.\]
By Proposition 3.1 the skew-form \(f_{v,\sigma}\) is degenerate and by Lemma 2.4 it has rank \(n-2=2k-2\).
By Lemmas 2.1 and 2.2 there exists a \(j\in L\) such that \(f_{j,\sigma}\) is non-degenerate. Then for \(0\neq v\in V\)
\[N_{L/L_{2}}(jv)=N_{L/L_{2}}(j)N_{L/L_{2}}(v)\not\in K.\]
It thus follows by proposition 3.1 that all the nonzero skew-forms \(f_{b,\sigma}\) where \(b\) lies in the subspace \(U=jV\) (of dimension \(k\)) are non-degenerate. Clearly \(U\cap V=\{0\}\) so \(L=U\oplus V\). By Remark 2.1 the subspace \(U\) of \(L\) corresponds to a subspace \(\mathcal{U}\) of \(\operatorname{Alt}_{K}(L)\) with the same dimension defined by \(\mathcal{U}:=\{f_{b,\sigma}:b\in U\}\). Similarly \(V\) corresponds to \(\mathcal{V}\leq\operatorname{Alt}_{K}(L)\) such that \(\dim(V)=\dim(\mathcal{V})\). Then the decomposition (1.5) follows.
**Corollary 4.1**.: _Let \(K\) be a field and \(n\) be even. Suppose \(L\) is a cyclic Galois extension of a field \(K\) of degree \(n\) with Galois group \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). If \(\operatorname{ord}(\sigma^{i})\equiv 2\pmod{4}\) and \(\operatorname{ord}(\sigma^{i})\neq 2\) then_
\[A^{i}=\mathcal{U}_{i}\oplus\mathcal{V}_{i},\]
_where \(\mathcal{U}_{i}\) is an \(n\)-subspace of dimension \(n/2\) and \(\mathcal{V}_{i}\) is an \((n-2n/\operatorname{ord}(\sigma^{i}))\)-subspace of dimension \(n/2\)._
Proof.: This follows from Theorem A, noting Remark 3.1 and the fact (Lemma 2.4) that a skew form in \(A^{i}\) is either non-degenerate or has rank equal to \(n-2n/\operatorname{ord}(\sigma^{i})\).
Consequently we obtain the following.
**Corollary 4.2**.: _Let \(K\) be a field and \(n=2k\), where \(k\geq 1\) is odd. Let \(L\) be any cyclic Galois extension of \(K\) of degree \(n\) with Galois group \(G=\langle\sigma\rangle\). Then_
\[\operatorname{Alt}_{K}(L)=B^{1}\oplus\left(\bigoplus_{\begin{subarray}{c} \operatorname{ord}(\sigma^{i})\equiv 0\ (\operatorname{mod}2)\\ \operatorname{ord}(\sigma^{i})\neq 2\end{subarray}}\left(\mathcal{U}_{i} \bigoplus\mathcal{V}_{i}\right)\right)\bigoplus\left(\bigoplus_{ \begin{subarray}{c}\operatorname{ord}(\sigma^{i})\equiv 1\ (\operatorname{mod}2) \end{subarray}}A^{i}\right) \tag{4.1}\]
Proof.: Clear in view of Corollary 4.1, Lemma 2.3 as well as the decomposition (1.4).
**Remark 4.1**.: _Let \(n=2^{\alpha}k\) where \(\alpha\geq 1\) and \(k\) is odd. Suppose that \(L\) is a cyclic extension of a field \(K\) of degree \(n\) with Galois group \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\). If \(\operatorname{ord}(\sigma^{i})\) is even then there always exists an \(n\)-subspace of dimension \(n/2\) inside \(A^{i}\). If \(\alpha=1\) this follows from Corollary 4.2. Otherwise if \(\alpha>1\) then it follows from Lemma 3.2 that \(\mathcal{E}_{1}:=\{f_{b,\sigma}:b\in E_{1}\}\) is the desired subspace for \(A^{1}\). The corresponding assertion for \(A^{i}\) now follows in the light of Remark 3.1._
### Proof of Theorem B
Proof.: Firstly we will construct a cyclic extension \(L\) of \(K\) such that \(i\notin L\) where \(i\) is a primitive \(2^{2}\)-th root of unity. Let \(p\) be a prime such that \(p\equiv 1(\operatorname{mod}\ n)\) and consider the cyclotomic extension \(\mathbb{Q}(\eta_{p})\) where \(\eta_{p}\) is a primitive \(p\)-th root of unity. As is known (e.g., [5, Lemma 4]) it is possible to pick the prime \(p\) as above such that \(\mathbb{Q}(\eta_{p})\cap K(i)=\mathbb{Q}\). Let \(L\) be the unique intermediate field \(\mathbb{Q}\subseteq L\subseteq\mathbb{Q}(\eta_{p})\) such that \([L:\mathbb{Q}]=n\). Clearly \(L\cap K(i)=\mathbb{Q}=L\cap K\). By a well known fact (e.g., [3, Chapter 6, Theorem 1.12]) the extensions \(LK(i)/K(i)\) and \(LK/K\) are Galois and
\[\operatorname{Gal}(LK(i)/K(i))\cong\operatorname{Gal}(L/L\cap K(i))= \operatorname{Gal}(L/\mathbb{Q})=\operatorname{Gal}(L/L\cap K)\cong \operatorname{Gal}(LK/K).\]
If \(i\in LK\) then by the last equation
\[[LK:K]=[LK:K(i)][K(i):K]=[LK(i):K(i)][K(i):K],\]
whence \([K(i):K]=1\) thus contradicting the hypothesis on \(K\). Redefining \(L:=LK\) yields the desired cyclic extension \(L/K\) with degree \(n\).
Let \(E_{i}:=\{b\in L:\sigma^{n/2^{i}}(b)=-b\}\) (\(1\leq i\leq\alpha-1\)). By Lemma 3.1 we obtain \(L_{n/2^{i-1}}=L_{n/2^{i}}\oplus E_{i}\) and \(L_{2k}=V_{1}\oplus V_{2}\), where \(V_{1}\) and \(V_{2}\) denote the eigenspaces of
\(\sigma^{k}\) with respect to the eigenvalues \(1\) and \(-1\) respectively. Consequently, we obtain
\[L=L_{n/2}\oplus E_{1}=L_{n/4}\oplus E_{2}\oplus E_{1}=L_{2k}\oplus E_{\alpha-1} \oplus\cdots\oplus E_{1}=V_{1}\oplus V_{2}\oplus E_{\alpha-1}\oplus\cdots\oplus E _{1}. \tag{4.2}\]
Let \(\mathcal{E}_{i}\) be the subspace of \(A^{1}\) corresponding to \(E_{i}:=\{b\in L:\sigma^{n/2^{i}}(b)=-b\}\) under the isomorphism of Remark 2.1, that is, \(\mathcal{E}_{i}=\{f_{b,\sigma}:b\in E_{i}\}\) ( \(1\leq i\leq\alpha-1\) ). By our construction, the only \(2^{i}\)-th roots of unity in \(L\) are \(\pm 1\). As \(\sigma\) fixes both these roots, it follows from Lemma 3.2 that \(\mathcal{E}_{i}\) is an \(n\)-subspace for all \(i\) in the above range.
Similarly, let \(\mathcal{V}_{j}\) correspond to the subspace \(V_{j}\) of \(L\). By Lemma 3.3 the nonzero skew-forms in \(\mathcal{V}_{j}\), where \(j=1,2\) are degenerate whence these are \((n-2)\)-spaces by Lemma 2.4. The required decomposition (1.6) is now immediate from (4.2).
**Corollary 4.3**.: _In the situation of Theorem B if \(\mathrm{ord}(\sigma^{i})\equiv 0\pmod{4}\), say \(\mathrm{ord}(\sigma^{i})=2^{\beta}k^{\prime}\) (\(\beta\geq 2\)) then_
\[A^{i}=\mathcal{V}_{1}^{i}\oplus\mathcal{V}_{2}^{i}\oplus\mathcal{E}_{1}^{i}\oplus\cdots\oplus\mathcal{E}_{\beta-1}^{i}, \tag{4.3}\]
_where_
1. \(\mathcal{E}_{k}^{i}\) _is an_ \(n\)_-subspace of dimension_ \(n/2^{k}\) _for_ \(1\leq k\leq\beta-1\)_,_
2. \(\mathcal{V}_{j}^{i}\) _is an_ \((n-2)\)_-subspace of dimension_ \(k^{\prime}n/\mathrm{ord}(\sigma^{i})\) _for_ \(1\leq j\leq 2\)_._
Proof.: This follows from the proof of Theorem B, noting Remark 3.1 and the fact (Lemma 2.4) that a skew form in \(A^{i}\) is either non-degenerate or has rank equal to \(n-2n/\mathrm{ord}(\sigma^{i})\).
**Corollary 4.4**.: _In the situation of Theorem B there is a direct decomposition_
\[\mathrm{Alt}_{K}(L)= B^{1}\bigoplus\left(\bigoplus_{\begin{subarray}{c}\mathrm{ord}( \sigma^{i})\equiv 2\pmod{4}\\ \mathrm{ord}(\sigma^{i})\neq 2\end{subarray}}\left(\mathcal{U}_{i}\bigoplus \mathcal{V}_{i}\right)\right)\bigoplus\left(\bigoplus_{\begin{subarray}{c} \mathrm{ord}(\sigma^{i})\equiv 1\pmod{2}\end{subarray}}A^{i}\right)\] \[\bigoplus_{\begin{subarray}{c}\mathrm{ord}(\sigma^{i})\equiv 0 \pmod{4}\end{subarray}}\left(\mathcal{V}_{1}^{i}\bigoplus\mathcal{V}_{2}^{i} \bigoplus\mathcal{E}_{\beta-1}^{i}\bigoplus\cdots\bigoplus\mathcal{E}_{1}^{i}\right) \tag{4.4}\]
Proof.: Using Corollaries 4.1, 4.3 and Lemma 2.3 as well as the decomposition (1.4), we can deduce the required decomposition.
**Remark 4.2**.: _As its proof shows, Theorem B as well as its corollaries remain valid for an arbitrary cyclic extension \(L/K\) of degree \(n=2^{\alpha}k\) (\(\alpha\geq 2\)) such that \(-1\) is not a square in \(L\). Similarly, let \(K\) be a field such that \(f(X):=X^{4}+1\) is irreducible in \(K[X]\) (it is not difficult to show that \(K\) has this property if and only if none of \(-1,2\) and \(-2\) is a square in \(K\)). Then Theorem B holds true for any cyclic extension \(L/K\) of degree \(n=2^{\alpha}k\). Indeed, if \(\eta_{i}\) is a \(2^{i}\)-th root of unity for \(i\geq 1\) then the conditions \(-1\notin K^{2}\) and \(\sigma(\eta_{i})=-\eta_{i}^{-1}\) mean that \(\eta_{i}\notin\{\pm 1,\pm i\}\), where \(i\) denotes a primitive \(4\)-th root of unity in \(L\). Thus \(\eta_{i}\) must have order \(2^{s}\) where \(s\geq 3\). Since \(\eta_{i}\in L_{2}\) this would mean that \(L_{2}\) contains an element of order \(8\) and thus a root of \(f\), implying \(f\) has a quadratic factor in \(K[X]\)._
## 5. Proofs of Theorems C and D
### Proof of Theorem C
Proof.: Let \(E_{i}\) ( \(1\leq i\leq\alpha-1\) ) and \(V_{j}\) ( \(1\leq j\leq 2\) ) be as in Lemma 3.1. As in the proof of Theorem B, we have
\[L=V_{1}\oplus V_{2}\oplus E_{\alpha-1}\oplus\cdots\oplus E_{1}.\]
By the hypothesis \(-1\) is not a square in \(K\) from which it easily follows that \(a\geq 2\). Let \(w_{i}\) and \(\eta_{i}\) be as in Lemma 3.2. Note that \(\sigma_{f}^{2}(w_{i})=-w_{i}\) and thus \(w_{i}^{2}\in L_{2}\) but \(w_{i}\notin L_{2}\). Consequently \(w_{i}^{2(q^{2}-1)}=1\) and \(w_{i}^{(q^{2}-1)}=-1\). Since \(\sigma_{f}(w_{i})=w_{i}^{q}\) hence \(\eta_{i}=w_{i}^{q-1}\). It follows that \(\eta_{i}\) is a \(2(q+1)\)-th root of unity but not a \((q+1)\)-th root of unity.
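Spelled out, the relations used here are
\[\eta_{i}=\frac{\sigma_{f}(w_{i})}{w_{i}}=w_{i}^{\,q-1},\qquad\eta_{i}^{\,q+1}=w_{i}^{\,q^{2}-1}=\frac{\sigma_{f}^{2}(w_{i})}{w_{i}}=-1,\]
so that \(\eta_{i}^{2(q+1)}=1\) while \(\eta_{i}^{q+1}\neq 1\).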
(1) Suppose \(\alpha\leq a+1\). Since \(1\leq i\leq\alpha-1\) therefore \(1\leq i\leq a\). Again by Lemma 3.2, \(f_{b_{i},\sigma_{f}}\) is degenerate if and only if \(\eta_{i}\) is a \(2^{i}\)-th root of unity. Since \(i\leq a\), this would mean that \(\eta_{i}^{q+1}=\eta_{i}^{2^{a}l}=1\), a contradiction. Let \(\mathcal{E}_{i}\) be the subspace of \(A^{1}\) corresponding to \(E_{i}\) under the isomorphism of Remark 3.1. It follows that \(\mathcal{E}_{i}\) is an \(n\)-subspace of dimension \(n/2^{i}\).
(2) Suppose \(\alpha>a+1\). Pick \(i\in[1,\alpha-1]\). If \(1\leq i\leq a\) it follows from part (1) above that \(E_{i}\) is an \(n\)-subspace for \(1\leq i\leq a\). So we assume that \(i\geq a+1\). By the hypothesis \(l=1\), whence \(\eta_{i}^{2^{a+1}}=\eta_{i}^{2(q+1)}=1\). It follows that if \(a+1\leq i\leq\alpha-1\) then \(\eta_{i}^{2^{i}}=1\). Thus in view of Lemma 3.2 all the skew-forms in \(\mathcal{E}_{i}\) are degenerate and in this case by Lemma 2.4, \(\mathcal{E}_{i}\) is an \((n-2)\)-subspace.
Similarly let \(\mathcal{V}_{j}\) be the subspace of \(A^{1}\) corresponding to \(V_{j}\). Then by Lemmas 3.3 and 2.4, \(\mathcal{V}_{j}\) is an \((n-2)\)-subspace.
**Remark 5.1**.: _In Theorem C when \(\alpha>a+1\) and \(l>1\) then \(\mathcal{E}_{i}\) is neither an \(n\)-subspace nor an \((n-2)\)-subspace for \(a+1\leq i\leq\alpha-1\). Indeed, by the definition of \(E_{i}\)_
\[E_{i}=\{b\in L:\sigma_{f}^{n/2^{i}}(b)=-b\}=\{b\in L:b^{q^{n/2^{i}}-1}=-1\}.\]
_Let \(C:=\{b\in L^{\times}:b^{2(q^{n/2^{i}}-1)}=1\}\). Then \(C\) is a cyclic subgroup of \(L^{\times}\). Clearly, \(C=L^{\times}_{n/2^{i}}\sqcup(E_{i}\setminus\{0\})\). Let \(u\) be a generator of \(C\). It is clear that \(b_{i}=u^{s}\in E_{i}\) if and only if \(s\) is odd. We claim that \(f_{b_{i},\sigma_{f}}\) is degenerate if and only if \(s\) is an odd multiple of \(l\). Indeed, let \(w_{i}\) and \(\eta_{i}\) be as in Lemma 3.2. Then_
\[w_{i}=b_{i}\sigma_{f}^{2}(b_{i})\cdots\sigma_{f}^{n/2^{i}-2}(b_{i})=b_{i}b_{i} ^{q^{2}}\cdots b_{i}^{q^{n/2^{i}-2}}=b_{i}^{\frac{q^{n/2^{i}}-1}{q^{2}-1}},\]
_and_
\[\eta_{i}=w_{i}^{q-1}=b_{i}^{\frac{q^{n/2^{i}}-1}{q+1}}=b_{i}^{t},\]
_where \(t:=\frac{q^{n/2^{i}}-1}{q+1}\). By Lemma 3.2, \(f_{b_{i},\sigma_{f}}\) is degenerate if and only if \(\eta_{i}^{2^{i}}=1\). Now from the proof of Theorem C, \(\eta_{i}^{2^{a+1}l}=\eta_{i}^{2(q+1)}=1\) and \(\eta_{i}^{2^{a}l}=\eta_{i}^{q+1}\neq 1\). Consequently \(f_{b_{i},\sigma_{f}}\) is degenerate if and only if \(\eta_{i}\) is a primitive \(2^{a+1}\)-th root of unity, that is, if and only if,_
\[2^{a+1}=\operatorname{ord}(\eta_{i})=\operatorname{ord}(u^{st})=\frac{ \operatorname{ord}(u)}{\gcd(\operatorname{ord}(u),st)}=\frac{2(q+1)t}{\gcd(2 (q+1)t,st)}=\frac{2^{a+1}l}{\gcd(2^{a+1}l,s)}, \tag{5.1}\]
_or, \(\gcd(2^{a+1}l,s)=l\). In other words, for \(b_{i}=u^{s}\in E_{i}\), \(f_{b_{i},\sigma_{f}}\) is degenerate if and only if \(s\) is an odd multiple of \(l\). Thus, for example, \(f_{u^{l},\sigma_{f}}\) is degenerate while \(f_{u,\sigma_{f}}\) is non-degenerate._
### Proof of Theorem D
Proof.: By [1, Proposition 5.4.11] for every \(n\) there exists exactly one unramified extension \(L\) of \(K=\mathbb{Q}_{p}\) of degree \(n\) obtained by adjoining a primitive \((p^{n}-1)\)-th root of unity, say \(\theta\). Moreover according to [2, Corollary 2], the extension \(L/K\) constitutes a cyclic extension such that \(\operatorname{Gal}(L/K)=\langle\sigma\rangle\) where \(\sigma\) is defined by \(\sigma(\theta)=\theta^{p}\). Since \(-1\) is not a square in \(K\) so \(p=2^{a}l-1\equiv 3\pmod{4}\) by [1, Proposition 3.4.2] and thus \(a\geq 2\).
Let \(E_{i}:=\{b\in L:\sigma^{\frac{n}{2^{i}}}(b)=-b\}\) where \(1\leq i\leq\alpha-1\). The hypothesis \(\alpha\leq a+1\) means that \(1\leq i\leq a\). Let \(w_{i},\eta_{i}\) be as in Lemma 3.2. Again by Lemma 3.2, \(f_{b_{i},\sigma}\) (\(b_{i}\in E_{i}\)) is degenerate if and only if \(\eta_{i}\) is \(2^{i}\)-th root of unity such that \(\sigma(\eta_{i})=-\eta_{i}^{-1}\). As \(2^{i}\mid 2^{a}\mid p+1\mid p^{n}-1\), this would mean that \(\langle\eta_{i}\rangle\leq\langle\theta\rangle\) and consequently, \(\sigma(\eta_{i})=\eta_{i}^{p}\). But then
\[\sigma(\eta_{i})\eta_{i}=\eta_{i}^{p+1}=\eta_{i}^{2^{a}l}=1.\]
It follows that \(f_{b_{i},\sigma}\) is non-degenerate. Hence \(\mathcal{E}_{i}\) is an \(n\)-subspace, where \(\mathcal{E}_{i}\) is the subspace of \(A^{1}\) corresponding to \(E_{i}\).
Similarly let \(\mathcal{V}_{j}\) ( \(1\leq j\leq 2\) ) be the subspace of \(A^{1}\) corresponding to \(V_{j}\). Then by Lemmas 3.3 and 2.4, \(\mathcal{V}_{j}\) is an \((n-2)\)-subspace. The theorem now follows.
**Remark 5.2**.: _In the situation of Theorem \(D\) for \(p=2\) the decomposition (4.4) holds true in view of Remark 4.2._
## 6. A 3-dimensional 4-subspace in \(\mathrm{Alt}_{4}(\mathbb{Q})\)
Let \(K:=\mathbb{Q}\) and \(L\) be the cyclotomic field \(\mathbb{Q}(\eta)\) where \(\eta\) is a primitive 5-th root of unity in \(\mathbb{C}\). Then \(L/K\) is a cyclic extension of degree 4. We will show that the maximum dimension of a 4-subspace inside \(A^{1}\) is 3. Let \(b=x+y\eta+z\eta^{2}+w\eta^{3}\in L\), where \(x,y,z,w\in\mathbb{Q}\). We take the automorphism \(\sigma\) defined by \(\sigma(\eta)=\eta^{3}\) as a generator of \(\mathrm{Gal}(L/K)\). Using the theory of Gauss periods we may find the basis, namely, \(\{1,\eta^{2}+\eta^{3}\}\) for \(L_{2}/\mathbb{Q}\). By Proposition 3.1, \(f_{b,\sigma}\) is degenerate if and only if \(N_{L/L_{2}}(b)\in\mathbb{Q}\), that is, the coefficient of \(\eta^{2}+\eta^{3}\) in \(N_{L/L_{2}}(b)\) is zero. It is straightforward to check that this coefficient is \(-xy+xz+xw-yz+yw-zw\). In this situation we thus obtain the following.
**Proposition 6.1**.: _The maximum dimension of a \(4\)-subspace inside \(A^{1}\) equals the maximum dimension of a totally anisotropic subspace of \(L\) with respect to the following quadratic form_
\[\mathcal{Q}(x,y,z,w)=xy-xz-xw+yz-yw+zw.\]
Proof.: Clear.
**Theorem 6.1**.: _(Legendre's Theorem) ([4, Theorem 1, Chapter 5]) Suppose \(a,b,c\in\mathbb{Z}\) are such that \(abc\) is a non-zero square-free integer. Then the equation \(aX^{2}+bY^{2}+cZ^{2}=0\) has a non-trivial \(\mathbb{Z}\)-solution if and only if (i) \(a,b,c\) do not all have the same sign; (ii) \(-bc\) is a square modulo \(|a|\); (iii) \(-ac\) is a square modulo \(|b|\); and (iv) \(-ab\) is a square modulo \(|c|\)._
**Theorem 6.2**.: _The maximum dimension of a \(4\)-subspace in \(A^{1}\) is \(3\)._
Proof.: Let \(U\) be the \(\mathbb{Q}\)-subspace of \(L\) spanned by \(\{\eta+\eta^{2},-1+\eta^{3},1+\eta\}\). Let \(b=c_{1}(\eta+\eta^{2})+c_{2}(-1+\eta^{3})+c_{3}(1+\eta)\). We claim that \(\mathcal{W}:=\{f_{b,\sigma}:b\in U\}\leq A^{1}\) is the desired 4-subspace. Indeed, according to proposition 6.1 we need to show that the quadratic form
\[\mathcal{Q}(c_{1},c_{2},c_{3})=c_{1}^{2}+c_{2}^{2}+c_{3}^{2}+c_{1}c_{3}-3c_{2} c_{3}\]
has no non-trivial integer solution. It can be checked that \(\mathcal{Q}\) reduces to its diagonal form
\[\mathcal{Q}^{\prime}=c_{1}^{2}+c_{2}^{2}-6c_{3}^{2}.\]
To complete the proof, it suffices to show that \(\mathcal{Q}^{\prime}\) has no non-trivial integer solutions. Based on Theorem 6.1 it is evident that \(\mathcal{Q}^{\prime}\) has no non-trivial integer solutions since \(-ab=-1\) is not a square modulo \(|c|=6\).
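For completeness, the diagonalization can be verified by completing squares (scaling by \(4\) does not affect the existence of non-trivial rational zeros):
\[4\mathcal{Q}(c_{1},c_{2},c_{3})=(2c_{1}+c_{3})^{2}+(2c_{2}-3c_{3})^{2}-6c_{3}^{2},\]
so \(\mathcal{Q}\) is equivalent over \(\mathbb{Q}\) to \(\mathcal{Q}^{\prime}=c_{1}^{2}+c_{2}^{2}-6c_{3}^{2}\).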
## 7. Conclusion
Eigenspaces of the elements of the Galois group yield constant rank subspaces in \(\operatorname{Alt}_{K}(L)\). We can always find an \(n\)-subspace of dimension \(n/2\) in \(A^{i}\) for an arbitrary field \(K\) (Remark 4.1). However, this may not be the maximum possible dimension of an \(n\)-subspace in \(A^{1}\) (as is evident from the example in Section 6) unless \(n=2k\) with \(k\) odd (Theorem A) or \(K\) is finite (or more generally \(C^{1}\) [5, Lemma 3]). Moreover unless \(K\) is finite it is not clear that we get an \(n\)-subspace of maximum dimension of \(\operatorname{Alt}_{n}(K)\) in this way. The question of the maximum dimension of an \(n\)-subspace in \(\operatorname{Alt}_{n}(K)\) is closely related to other invariants for skew-forms including \(d(K,n,1)\) and \(s_{n}(K)\) defined in [7] and [5] respectively. In particular, it is unknown to the authors if there is a \(6\)-subspace in \(\operatorname{Alt}_{6}(\mathbb{Q})\) of dimension four.
## Acknowledgements
The second author gratefully acknowledges support from an NBHM research award.
**arXiv:** 2305.07390 | **Published:** 2023-05-12T11:32:16Z | **Link:** http://arxiv.org/abs/2305.07390v1
**Authors:** Lingqi Zhang, Mohamed Wahib, Peng Chen, Jintao Meng, Xiao Wang, Toshio Endo, Satoshi Matsuoka

# Revisiting Temporal Blocking Stencil Optimizations
###### Abstract.
Iterative stencils are used widely across the spectrum of High Performance Computing (HPC) applications. Many efforts have been put into optimizing stencil GPU kernels, given the prevalence of GPU-accelerated supercomputers. To improve the data locality, temporal blocking is an optimization that combines a batch of time steps to process them together. Under the observation that GPUs are evolving to resemble CPUs in some aspects, we revisit temporal blocking optimizations for GPUs. We explore how temporal blocking schemes can be adapted to the new features in the recent Nvidia GPUs, including large scratchpad memory, hardware prefetching, and device-wide synchronization. We propose a novel temporal blocking method, EBISU, which champions low device occupancy to drive aggressive deep temporal blocking on large tiles that are executed tile-by-tile. We compare EBISU with state-of-the-art temporal blocking libraries: STENCILGEN and AN5D. We also compare with state-of-the-art stencil auto-tuning tools that are equipped with temporal blocking optimizations: ARTEMIS and DRSTENCIL. Over a wide range of stencil benchmarks, EBISU achieves speedups up to 2.53x and a geometric mean speedup of 1.49x over the best state-of-the-art performance in each stencil benchmark.
**Keywords.** Stencil, Temporal Blocking Optimizations, GPU
## 1. Introduction

State-of-the-art temporal blocking implementations tend to be conservative in their use of on-chip resources. For instance, AN5D (Kang et al., 2017) uses at maximum 96 registers per thread and STENCILGEN (Srivastava et al., 2017) uses at maximum 64 registers per thread for all the benchmarks reported. Yet the limit for registers is 255 per thread in both V100 and A100 (Beng et al., 2017) GPUs. For shared memory usage, AN5D (Kang et al., 2017) consumes at most 34.8 KB per thread block and STENCILGEN (Srivastava et al., 2017) uses at most 33.8 KB per thread block. Yet the limit for shared memory is 164 KB in A100 (Beng et al., 2017) GPUs. This conservative manner is in part due to the intention for ensuring a higher occupancy.
In this paper, we take inspiration from the work of Volkov et al. (Volkov et al., 2017); we propose a different approach to occupancy and performance in temporal blocking. We first determine a parallelism setting that is minimal in occupancy while sufficient in instruction level parallelism. We base our approach for temporal blocking on lower occupancy, i.e., we build large tiles running at minimum possible concurrency to be executed tile-by-tile, and accordingly scale up the use of on-chip resources to run the tile at maximum possible performance.
We propose _EBISU_: Epoch (temporal) Blocking for Iterative Stencils, with Ultracompact parallelism. EBISU's design principle is to run the code at the minimum possible parallelism that would saturate the device, and then use the freed resources to scale up the data reuse and reduce the dependencies between tiles. Though the idea is seemingly simple, the challenge is the lack of design principles to achieve scalable optimizations for temporal blocking. In other words, temporal blocking schemes in literature are designed to avoid pressure on resources since resources are scarce in over-subscribed execution; EBISU on the other hand assumes ample resources that are freed due to running in low occupancy and the goal is to scale the data reuse to all the available resources for a single tile at a time that spans the entire device. We drive EBISU through a cost model that makes the decision on how to scale the use of resources effectively at low occupancy.
The contributions of this paper are as follows:
* We propose the design principle of EBISU: low-occupancy execution of a single-tile at a time while scaling the use of resources to improve data locality.
* We include an analysis of the practical attainable performance to support the design decisions for EBISU. We build on our analysis to identify how various factors contribute to the performance of EBISU.
* We evaluate EBISU across a wide range of stencil benchmarks. Our implementation achieves significant speedup over state-of-the-art libraries and implementation. We achieve a geomean speedup of 1.53x over the top performing state-of-the-art implementations for each stencil benchmark.
## 2. Background
### Stencils
Stencils are characterized by their memory access patterns. We present the pseudocode for the 1D 3-Point, 2D 5-Point and 3D 7-Point Jacobian stencils in Listing 1, Listing 2, and Listing 3 respectively. We use a 2D Jacobian 5-point (2d5pt) stencil as an example. Figure 1.a illustrates the neighborhood dependencies of the 2d5pt stencil. In order to compute one point, the four adjacent points are necessary. Two blocking methods are widely used to optimize iterative stencils for data locality:
#### 2.1.1. Spatial Blocking
In spatial blocking on GPUs, thread blocks load a single tile of the domain into their local memory to improve the data locality among adjacent locations (Beng et al., 2017; Kang et al., 2017). The local memory can be registers (Beng et al., 2017; Kang et al., 2017) (Figure 1.b) or scratchpad memory (Kang et al., 2017; Kang et al., 2017) (Figure 1.c). However, halo layer(s) are still unavoidable.
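As a concrete illustration (a sketch of ours under simplifying assumptions, not the paper's Listing 2), a 2d5pt Jacobi kernel with shared-memory spatial blocking could look as follows; the domain is assumed to be padded with a one-cell halo and its interior sizes to be multiples of the block dimensions:

```cuda
#define BLOCK_X 32
#define BLOCK_Y 8

// One time step of a 2D 5-point Jacobi stencil with shared-memory (scratchpad)
// spatial blocking. 'in'/'out' have a pitch of nx + 2 to hold the halo.
__global__ void jacobi2d5pt(const float* __restrict__ in,
                            float* __restrict__ out, int nx, int ny) {
  __shared__ float tile[BLOCK_Y + 2][BLOCK_X + 2];  // tile plus halo layer

  int gx = blockIdx.x * BLOCK_X + threadIdx.x + 1;  // +1 skips the left halo
  int gy = blockIdx.y * BLOCK_Y + threadIdx.y + 1;
  int lx = threadIdx.x + 1, ly = threadIdx.y + 1;
  int pitch = nx + 2;

  // Each thread loads its own cell; boundary threads also fetch the halo.
  tile[ly][lx] = in[gy * pitch + gx];
  if (threadIdx.x == 0)           tile[ly][0]           = in[gy * pitch + gx - 1];
  if (threadIdx.x == BLOCK_X - 1) tile[ly][BLOCK_X + 1] = in[gy * pitch + gx + 1];
  if (threadIdx.y == 0)           tile[0][lx]           = in[(gy - 1) * pitch + gx];
  if (threadIdx.y == BLOCK_Y - 1) tile[BLOCK_Y + 1][lx] = in[(gy + 1) * pitch + gx];
  __syncthreads();

  // The four neighbors are reused from shared memory instead of global memory.
  out[gy * pitch + gx] = 0.2f * (tile[ly][lx] +
                                 tile[ly][lx - 1] + tile[ly][lx + 1] +
                                 tile[ly - 1][lx] + tile[ly + 1][lx]);
}
```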
#### 2.1.2. Temporal Blocking
In iterative stencils, each time step depends on the result of the previous time step. An alternative optimization is to combine several time steps to expose temporal locality (Kang et al., 2017; Kang et al., 2017). In this case, the temporal dependency is resolved by overlapped tiling (Beng et al., 2017; Kang et al., 2017; Kang et al., 2017) (Figure 2.a) or by applying complex geometry (Kang et al., 2017; Kang et al., 2017) (Figure 2.b, diamond tiling (Beng et al., 2017; Kang et al., 2017) as an example). The main shortcoming of overlapped tiling is redundant computation, while the main disadvantage of complex geometry is an adverse effect on cache hits (Kang et al., 2017). Additionally, complex geometry is penalized by the device-wide synchronization necessary to ensure that the result is updated in the global memory.
#### 2.1.3. N.5-D Temporal Blocking
N.5-D blocking (Kang et al., 2017; Kang et al., 2017; Srivastava et al., 2017) is a combination of spatial blocking and overlapped temporal blocking (Kang et al., 2017). Take 3.5-D temporal blocking as an example. We do spatial tiling in the X and Y dimensions, and then stream in the Z dimension (2.5-D spatial blocking). As we stream over the Z dimension, each XY plane would conduct a series of temporal steps (1-D temporal blocking). This method reduces the overhead of redundant computations in an overlapped temporal blocking scheme.

Figure 1. Spatial Blocking, using 2D 5-point Jacobian (2d5pt) stencil as an example

Figure 2. Temporal Blocking
### GPU Architecture
#### 2.2.1. CUDA Programming Model
The CUDA programming model includes: the thread, the base execution unit; the warp, a scheduling unit of 32 threads; the thread block, a group of several warps; and the grid, a group of thread blocks. When mapping the programming model to the GPU architecture, the CUDA driver maps thread blocks to a Stream Multiprocessor (SM) and grids to the GPU device. The mapping abides by rules for dividing the resources among the threads. For example, at most 8 thread blocks and at most 2048 threads can reside concurrently on a stream multiprocessor. Also, the total amount of registers and shared memory in a stream multiprocessor limits the number of thread blocks that can run concurrently.
#### 2.2.2. Explicit Synchronization
Nvidia introduced cooperative group APIs (Beng et al., 2016) to provide a hierarchy of synchronization scopes in addition to thread block synchronization, starting from P100 (2016). Among them, the new grid-level synchronization provides an additional choice for programmers. Zhang et al. (Zhang et al., 2017) show that the latencies of these APIs are acceptable enough to allow practical use.
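As an illustration of how this grid-level barrier is used (our sketch, not code from the paper; the kernel must be launched with `cudaLaunchCooperativeKernel` for `grid.sync()` to be valid):

```cuda
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Two dependent phases separated by a device-wide barrier (illustrative only).
__global__ void two_phase(float* a, float* b, int n) {
  cg::grid_group grid = cg::this_grid();
  for (int i = grid.thread_rank(); i < n; i += grid.size())
    b[i] = a[i] + 1.0f;                     // phase 1
  grid.sync();                              // every block finished phase 1
  for (int i = grid.thread_rank(); i < n; i += grid.size())
    a[i] = b[i] * 2.0f;                     // phase 2 reads phase-1 results
}
// Host side (sketch): cudaLaunchCooperativeKernel((void*)two_phase,
//                                                 grid_dim, block_dim, args);
```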
#### 2.2.3. Asynchronous Shared Memory Copy
A100 (2020) further introduced APIs (Beng et al., 2016) to copy data from global memory to shared memory, without blocking. Martin et al. (Martin et al., 2017) demonstrated that this API benefits low-arithmetic intensity kernels.
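A minimal sketch of such a non-blocking copy (our illustration using the cooperative-groups variant of the CUDA 11+ API, not code from the paper; the kernel assumes 256 threads per block):

```cuda
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>
namespace cg = cooperative_groups;

// Asynchronous global-to-shared staging followed by a simple computation.
__global__ void stage_and_compute(const float* __restrict__ in, float* out) {
  __shared__ float buf[256];
  cg::thread_block block = cg::this_thread_block();

  // Issue the copy for the whole block without stalling the threads.
  cg::memcpy_async(block, buf, in + blockIdx.x * 256, sizeof(float) * 256);

  // ...independent computation could overlap with the copy here...

  cg::wait(block);  // block until the staged data is visible in shared memory
  out[blockIdx.x * 256 + threadIdx.x] = 2.0f * buf[threadIdx.x];
}
```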
## 3. Ebisu: High Performance Temporal Blocking at Low Occupancy
In this section we give an overview of our temporal blocking method, EBISU (Figure 3 gives an overview). The design of EBISU follows two main principles: minimal parallelism that would saturate the device (the Minimal Parallelism step in Figure 3), and scalability in using resources (the Implementation step in Figure 3). Additionally, EBISU relies on a comprehensive analysis for implementation decisions (the pink steps in Figure 3).
### Saturating the Device at Minimal Parallelism
In EBISU we first tune the parallelism exposed in the kernel to find the minimal combination of occupancy and instruction level parallelism that would saturate the device. The minimal occupancy that we aim for in this paper is 12.5% since further reducing the occupancy for memory-bound codes can start to regress the performance (Zhu et al., 2017). We aim to minimize the resources devoted to parallelism, freeing resources for increasing the locality. We use Little's Law to identify the minimum parallelism (occupancy) in the code (discussed in Section 6.1). We point out that readers can also rely on auto-tuning tools to empirically figure out the minimal parallelism (Zhu et al., 2017; Zhang et al., 2017; Zhang et al., 2017).
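The Little's Law estimate takes the generic form
\[\text{parallelism needed}=\text{latency}\times\text{throughput},\]
i.e., the amount of memory traffic that must be in flight concurrently equals the memory latency multiplied by the bandwidth to be sustained; the occupancy is then set to the smallest value for which the in-flight requests of the resident warps, combined with instruction-level parallelism, reach this product (the exact latency and bandwidth figures are device-specific and are not quoted here).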
### Scaling the Use of Resources
Despite the relatively large amount on-chip resources, there is a lack in design principles that are able to scale up to take advantage of the large on-chip resources in temporal blocking. We thereby build on a set of existing optimizations to drive a resource-scalable scheme for increasing locality (Section 4).
### Implementation Decisions
We base the decision for implementing EBISU on our analysis for the practical attainable performance (Section 5). The main utility of this analysis is to decide whether to implement a device tile (Section 6.3), and the parameterization of spatial and temporal blocking (Section 6.4).

Figure 3. Overview of EBISU.
### Fine-Tuning
After identifying the ideal tiling scheme and parameterization, implementation, we fine-tune the kernel to extract additional performance. For instance, we tune the temporal blocking depth (Section 6.2).
## 4. Efficiently scaling the use of resources
### One Tile At A Time
Beyond the point where the GPU becomes saturated, the workload will inevitably be serialized. We intentionally introduce a method to serialize the execution of tiles, where each individual tile becomes large enough to saturate the GPU. We call this _device tiling_. Alternatively, we can use tiles that are executed in parallel, yet each tile individually saturates a single streaming multiprocessor. We call this _SM tiling_.
In device tiling, we tile the domain such that a single tile can scale up to use the entire on-chip memory capacity of the GPU. Next, we let the tile reside in the on-chip memory while updating the cells for a sufficient number of time steps to amortize the initial loading and final storing overheads. We then store the final result for the tile on the device, and then we move to the next tile, i.e., the entire GPU is dedicated to computing only one single tile at any given time. Figure 4 shows how we do spatial tiling at the device level. We assume \(tile_{x}\times tile_{y}\) to be the thread block tile configuration and \(Dtile_{x}\times Dtile_{y}\) to be the device tile configuration. Thus, \((tile_{x}+halo\cdot 2)\times(tile_{y}+halo\cdot 2)\) is the total on-chip memory consumed at the stream multiprocessor level. \((Dtile_{x}+HALO\cdot 2)\times(Dtile_{y}+HALO\cdot 2)\) is the total on-chip memory consumed at the device level, where \(HALO=rad\cdot t\). Additionally, figure 1.c shows the dependency between thread blocks that we need to resolve. We use the bulk synchronous parallel (BSP) model to exchange the halo region and CUDA's grid level barrier for synchronization. We transpose the halo region that originally did not coalesce to reduce the memory transactions. Note that device tiling is an additional layer on top of SM tiling. Figure 4 shows an example of 2D spatial tiling at device level, and Listing 4 presents the pseudocode of a 2D 5-point Jacobian stencil with device level spatial tiling.
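The listing below is not the actual Listing 4; it is a deliberately simplified 1D 3-point sketch of the same control flow, where a single device tile stays resident in shared memory for T time steps, per-block boundary cells travel through a small global halo buffer, and grid.sync() orders the BSP-style exchange. A cooperative launch and -rdc=true are assumed, BLK is assumed to equal blockDim.x, the domain size is assumed to be gridDim.x * BLK, and the 2D indexing and transposed halo stores described above are omitted.

```
#include <cooperative_groups.h>
namespace cg = cooperative_groups;
typedef double REAL;

// Simplified 1D 3-point device-resident stencil: the data never leaves
// on-chip memory between time steps; only two halo cells per block per step
// go through the global `halo` buffer (2 slots per block).
template <int BLK>   // assumed: BLK == blockDim.x
__global__ void device_resident_1d3pt(const REAL* in, REAL* out,
                                      REAL* halo, int T) {
  cg::grid_group grid = cg::this_grid();
  __shared__ REAL s[BLK + 2];                 // tile plus two ghost cells
  const int gid = blockIdx.x * BLK + threadIdx.x;

  s[threadIdx.x + 1] = in[gid];               // load the tile once
  __syncthreads();

  for (int t = 0; t < T; ++t) {
    // publish my boundary cells to the global halo buffer
    if (threadIdx.x == 0)       halo[2 * blockIdx.x]     = s[1];
    if (threadIdx.x == BLK - 1) halo[2 * blockIdx.x + 1] = s[BLK];
    grid.sync();                              // all halos are now visible
    // pull neighbors' boundary cells into my ghost cells
    if (threadIdx.x == 0)
      s[0] = (blockIdx.x > 0) ? halo[2 * blockIdx.x - 1] : s[1];
    if (threadIdx.x == BLK - 1)
      s[BLK + 1] = (blockIdx.x + 1 < gridDim.x) ? halo[2 * blockIdx.x + 2] : s[BLK];
    __syncthreads();

    REAL v = 0.25 * s[threadIdx.x] + 0.5 * s[threadIdx.x + 1]
           + 0.25 * s[threadIdx.x + 2];
    __syncthreads();                          // everyone has read the old values
    s[threadIdx.x + 1] = v;
    grid.sync();                              // halo reads done before reuse
  }
  out[gid] = s[threadIdx.x + 1];
}
```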
### Circular Multi-Queue
EBISU aims to scale up resource usage. One way to achieve this goal is to scale up to very deep temporal blocking. In this section, we introduce a simple data structure that enables the efficient management of very deep temporal blocking: namely, _circular multi-queue_. We elaborate on our design by first introducing _multi-queue_ for streaming (Section 4.2.1), and then we describe the implementation of the _circular multi-queue_ (Section 4.2.2).
#### 4.2.1. Multi-Queue
We use the 1D 3-Point Jacobian stencil (Listing 1) to illustrate our implementation. Streaming is a typical method to implement temporal blocking. Figure 5.a demonstrates an example of streaming. The parallelogram in the figure represents the tiling in time and spatial dimensions that we process in Figure 5.b. The process of each time step can be abstracted as two functions: _enqueue_ and _dequeue_, which are standard methods in a queue data structure. We additionally add _compute_ for stencil computation. As such, we manage each time step with a queue data structure. Next, we link queues in different time steps together, to become a multi-queue data structure. The data structure description and the pseudocode for multi-queue is in Listing 5.
Multi-queue facilitates seamless transitions between time steps. The dequeue operation (data expiration) for the current time step runs concurrently with the enqueue operation for the next time step. After the execution of a single tile, we reset the multi-queue to its initial state - a process we refer to as 'shuffle'. A standard method of conducting a shuffle involves shifting values to their designated locations, as demonstrated in lines 24-27 of Listing 5.
It is important to note that although we base our analysis on a 1D stencil example in this section, it can be simply extended to 2D or 3D stencils by replacing the 1D points (domain cells) in Figure 5 with 1D lines (corresponding to 2D stencils) or 2D planes (corresponding to 3D stencils), or even the device tiles discussed in Section 4.1. In the device tiling situation, the sync() function should be replaced by device (grid) level synchronization. Additionally, we can trade the concurrently processed domain cells for additional instruction level parallelism (ILP), which might be required by the parallelism setting (discussed in Section 6.1).
Figure 4. 2D Spatial tiling at the GPU device level.
Listing 5. Data structure description and pseudocode for the multi-queue: a queue per time step (with enqueue, dequeue, and compute operations) backed by a shared data array, plus a shift-based shuffle, illustrated on the 1D 3-point Jacobian stencil.
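To make the abstraction concrete, the following is an illustrative device-side sketch of the same idea; the names, the fixed coefficients, the flat storage layout, and the shuffle offset are assumptions for illustration, not the exact code of Listing 5.

```
typedef double REAL;

// Illustrative multi-queue: DEPTH queues (one per time step) share one
// backing array; queue t occupies Q_RANGE consecutive slots starting at
// t * Q_RANGE. enqueue() feeds the newest cell into queue t, compute()
// applies the 1D 3-point stencil to the head of queue t, and shuffle()
// shifts data back to its designated slots to reset the structure after
// a tile has been processed.
template <int DEPTH, int Q_RANGE>
struct MultiQueue {
  REAL d[DEPTH * Q_RANGE];                       // backing storage

  __device__ void enqueue(int t, REAL input) {   // tail of queue t
    d[t * Q_RANGE + Q_RANGE - 1] = input;
  }
  __device__ REAL compute(int t, REAL a, REAL b, REAL c) const {
    const int hd = t * Q_RANGE;                  // head of queue t
    return a * d[hd] + b * d[hd + 1] + c * d[hd + 2];   // 1d 3-point stencil
  }
  __device__ void shuffle(int offset) {          // shift-based reset
    for (int i = 0; i + offset < DEPTH * Q_RANGE; ++i)
      d[i] = d[i + offset];
  }
};
```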
#### 4.2.2. Circular Multi-Queue
The default shuffle, which shifts values to their designated locations, introduces a long chain of dependencies as the address range increases. An alternative solution is to compute the target address instead of moving the data (Listing 6, lines 7-8). The modulo operation is one of the solutions; however, this operator is time consuming. Instead, we extend the ring index to be \(range=2^{n},n\in\mathbb{Z}^{*}\). In this case, we have \(index\%range=index\&(range-1)\). This consumes additional space (Listing 6, line 22).
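For illustration, the power-of-two trick amounts to the following; RANGE here is an assumed compile-time constant, not the value used in EBISU.

```
constexpr int RANGE = 1 << 5;   // assumed: ring size is a power of two

// Advance a ring index without the costly modulo: when RANGE = 2^n,
// (index + 1) % RANGE == (index + 1) & (RANGE - 1).
__device__ __forceinline__ int ring_next(int index) {
  return (index + 1) & (RANGE - 1);
}
```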
### Optimizations
#### 4.3.1. Prefetching
Prefetching is a well-documented optimization. Readers can refer to (Wolf et al., 2017) for hints. The new asynchronous shared memory copy API offers another approach for prefetching, with a trade-off of requiring additional shared memory space for buffering.
#### 4.3.2. Lazy Streaming
The naive implementation shown in Figure 5 and Listing 5 clearly suffers from the overhead of frequent synchronization. We propose _lazy streaming_ to alleviate this type of overhead. The basic idea is that we delay the processing of a domain cell until all domain cells required to update the current cell have already been updated. Until then, we pack the planes that include the current domain cell and cache them in on-chip memory. As Figure 6 shows, the computation of _location 3_ is postponed until the three points of the previous time steps have been updated.
The benefit of using lazy streaming is not significant in 1D stencils. In 2D or 3D stencils, we replace the points in Figure 6 with 1D lines or 2D planes, respectively. The planes usually involve inter-thread dependency, which makes synchronizations unavoidable (warp shuffles when using registers for locality (Bartos et al., 2016; Bartos et al., 2017), or thread block synchronization when using shared memory for locality (Linggi et al., 2017)). When applying _device tiling_ (Section 4.1), device (grid) level synchronization becomes unavoidable, and it has higher overhead in comparison to thread block synchronization. As illustrated in Listing 7, lazy streaming can ideally reduce the synchronization to one synchronization per tile. The benefit of lazy streaming comes from the number of synchronizations it eliminates.
It is worth noting that double-buffering (Linggi et al., 2017; Wang et al., 2017) can be viewed as a special case of lazy streaming in which only a single queue is involved.
#### 4.3.3. Redundant Register Streaming
The above discussions, which do not specify the on-chip memory type, apply to both shared memory-based and register-based implementations. However, there is one exception: the circular multi-queue cannot be implemented with register arrays, since the required register addresses cannot be determined at compile time (register arrays cannot be indexed dynamically).
At low occupancy, we obtain a large number of registers and a large amount of shared memory per thread. Therefore, by reducing the occupancy, we can afford to redundantly store intermediate data in both the registers and the shared memory. Streaming with caching in shared memory is discussed in STENCILGEN (Wang et al., 2017). Streaming with caching in the registers is discussed in AN5D (Linggi et al., 2017). We benefit from both components by caching in both shared memory and registers: we reduce shared memory access times to their minimum by using registers first (in comparison to AN5D) and we reduce the synchronizations necessary when using only shared memory (in comparison to STENCILGEN). Additionally, because the data is mostly redundant, we can tune either the register portion or the shared memory portion to reduce the resource burden.
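In miniature, the idea looks like the following sketch (a single 1D update step; the kernel and coefficients are illustrative): each thread keeps its own freshly loaded value in a register and mirrors it into shared memory, so accesses to its own value stay in registers while only neighbor accesses touch shared memory.

```
// Illustrative single step: the thread's own value lives in a register, and
// a redundant copy lives in shared memory so neighboring threads can read it.
__global__ void rst_step(const double* in, double* out, int n) {
  __shared__ double smem[256];                 // assumed: blockDim.x <= 256
  const int i = blockIdx.x * blockDim.x + threadIdx.x;

  double mine = (i < n) ? in[i] : 0.0;         // register copy
  smem[threadIdx.x] = mine;                    // redundant shared-memory copy
  __syncthreads();

  double left  = (threadIdx.x > 0)              ? smem[threadIdx.x - 1] : mine;
  double right = (threadIdx.x < blockDim.x - 1) ? smem[threadIdx.x + 1] : mine;
  if (i < n) out[i] = 0.25 * left + 0.5 * mine + 0.25 * right;
}
```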
## 5. Practical Attainable Performance
In this section, we analyze the practical attainable performance of temporal blocking by incorporating an overhead analysis (we derive the valid proportion \(\mathbb{V}\) from the overhead analysis in Section 5.2) into a roofline-like model (Kang et al., 2017; Linggi et al., 2017) that predicts the attainable performance (\(\mathbb{P}\) in Section 5.1). We project the practical attainable performance \(\mathbb{P}\mathbb{P}\) as:
\[\mathbb{P}\mathbb{P}=\mathbb{P}\times\mathbb{V} \tag{1}\]
The model proposed in this section serves as a guide for implementation design choices in Section 6.
### Attainable Performance
We use the giga-cells updated per second (GCells/s) as the metric for stencil performance (Bartos et al., 2017; Linggi et al., 2017). We consider three pressure points in a stencil kernel: double precision ALUs, cache bandwidth (i.e., shared memory bandwidth in this paper), and device memory bandwidth (GPU global memory in this paper). Note that registers could also be a pressure point in extreme cases of very high order stencils (outside the scope of this paper).
Figure 6. Lazy streaming for temporal blocking. 1D 3-Point Jacobian stencil with depth-3 as an example. Notations are the same as Figure 5.
Assuming that the global memory bandwidth is \(\mathbb{B}_{gm}\), the shared memory bandwidth is \(\mathbb{B}_{sm}\), and the compute speed is \(\mathbb{T}\mathbb{H}\mathbb{R}_{cmp}\): the total number of accesses is \(A_{gm}\) and \(A_{sm}\) for global memory and shared memory, respectively, and the total amount of computation is \(A_{cmp}\); the memory accesses per cell are \(a_{gm}\) and \(a_{sm}\) for global memory and shared memory, respectively, and the flops per cell are \(a_{cmp}\); the total number of cells in the domain of interest is \(\mathbb{D}_{gm}\), \(\mathbb{D}_{sm}\), and \(\mathbb{D}_{cmp}\) for global memory, shared memory, and computation, respectively; and the size (in Bytes) of a cell is \(\mathbb{S}_{Cell}\). We can compute the runtime of using each component to be:
\[\mathbb{T}_{gm}=\frac{A_{gm}}{\mathbb{B}_{gm}}\times\mathbb{S}_{Cell}=\frac{a_{gm}\times\mathbb{D}_{gm}}{\mathbb{B}_{gm}}\times\mathbb{S}_{Cell} \tag{2}\]
\[\mathbb{T}_{sm}=\frac{A_{sm}\times t}{\mathbb{B}_{sm}}\times\mathbb{S}_{Cell}=\frac{a_{sm}\times\mathbb{D}_{sm}\times t}{\mathbb{B}_{sm}}\times\mathbb{S}_{Cell} \tag{3}\]
\[\mathbb{T}_{cmp}=\frac{A_{cmp}\times t}{\mathbb{THR}_{cmp}}=\frac{a_{cmp}\times\mathbb{D}_{cmp}\times t}{\mathbb{THR}_{cmp}} \tag{4}\]
The total runtime of the stencil is projected as:
\[\mathbb{T}_{stencil}=\max(\mathbb{T}_{gm},\mathbb{T}_{sm},\mathbb{T}_{cmp}) \tag{6}\]
The component \(c\) is the bottleneck if it satisfies:
\[\mathbb{T}_{c}=\mathbb{T}_{stencil} \tag{7}\]
We project the attainable performance \(\mathbb{P}\) as:
\[\mathbb{P}=\frac{\mathbb{D}_{all}\times t}{\mathbb{T}_{stencil}} \tag{8}\]
Normally, we consider \(\mathbb{D}_{all}=\mathbb{D}_{sm}=\mathbb{D}_{gm}=\mathbb{D}_{cmp}\). However, this is a case-by-case factor that depends on the implementation, i.e., when applying _device tiling_, \(\mathbb{D}_{gm}\neq\mathbb{D}_{sm}\).
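For reference, the model can be evaluated with a few lines of host code; the bandwidth numbers below mirror the A100 values quoted in Section 6, while the FP64 peak is an assumed round figure rather than a value taken from this paper.

```
#include <algorithm>
#include <cstdio>

// Evaluate Equations 2-8 for one configuration (times in seconds,
// bandwidths in GB/s, compute throughput in GFLOP/s).
struct StencilModel {
  double a_gm, a_sm, a_cmp;        // accesses / flops per cell
  double D_gm, D_sm, D_cmp, D_all; // cells seen by each component
  double B_gm, B_sm, THR_cmp;      // GB/s, GB/s, GFLOP/s
  double S_cell, t;                // bytes per cell, temporal depth

  double attainable_gcells() const {
    double T_gm  = a_gm * D_gm * S_cell / (B_gm * 1e9);
    double T_sm  = a_sm * D_sm * t * S_cell / (B_sm * 1e9);
    double T_cmp = a_cmp * D_cmp * t / (THR_cmp * 1e9);
    double T_stencil = std::max({T_gm, T_sm, T_cmp});
    return D_all * t / T_stencil / 1e9;         // GCells/s
  }
};

int main() {
  double cells = 8352.0 * 8352.0;  // 2d5pt domain as a single example
  // a_gm=2, a_sm=4, 10 flops/cell, double precision, t=7,
  // B_gm=1555 GB/s, B_sm=19490 GB/s, assumed FP64 peak ~9700 GFLOP/s.
  StencilModel m{2, 4, 10, cells, cells, cells, cells,
                 1555.0, 19490.0, 9700.0, 8.0, 7.0};
  std::printf("attainable: %.0f GCells/s\n", m.attainable_gcells());
  return 0;
}
```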
### Overheads
In this section, we discuss the overheads of different spatial blocking methods used in this paper:
#### 5.2.1. SM Tiling
The main overhead of SM tiling is the redundant computation in the halo region: only a portion of the computation is valid. This valid portion depends on both the spatial and temporal block sizes and on the radius of the stencil. In 2D stencils, we have:
\[\mathbb{V}_{SMtile}=\frac{tile_{x}-2\times t\times rad}{tile_{x}} \tag{9}\]
In 3D stencils, we have:
\[\mathbb{V}_{SMtile}=\frac{(tile_{x}-2\times t\times rad)\times(tile_{y}-2\times t\times rad)}{tile_{x}\times tile_{y}} \tag{10}\]
Accordingly, we have:
\[\mathbb{P}_{SMtile}=\mathbb{V}_{SMtile}\times\mathbb{P} \tag{11}\]
#### 5.2.2. Device Tiling
The main overhead of device tiling is the device (grid) level synchronization between consecutive time steps. With \(\mathbb{T}_{c}\) denoting the runtime of the bottleneck component between two device level synchronizations, the valid proportion is \(\mathbb{V}_{Dtile}=\mathbb{T}_{c}/(\mathbb{T}_{c}+\mathbb{T}_{Dsync})\). Accordingly, we have:
\[\mathbb{P}_{Dtile}=\mathbb{V}_{Dtile}\times\mathbb{P} \tag{12}\]
To quantify the overhead, we followed the methodology of Zhang et al. (Zhang et al., 2017) to measure it. The device (grid) level synchronization overhead on A100 is \(\mathbb{T}_{Dsync}=1.2us\).
## 6. Ebisu: Analysis of Design Choices
In this section, we provide a comprehensive analysis to justify our design choices. The analysis is targeted at the A100 GPU, while it can be generalized to any GPU platform by adjusting the model parameters (Table 1 summarizes our findings on design choices).
We use 2D 5-Point (Listing 2) to represent 2D stencils, and 3D 7-Point (Listing 3) to represent 3D stencils for the discussions in this section. Table 2 shows the detailed parameters of both stencils.
### Minimum Necessary Parallelism
The analysis in this section is an extension of Volkov's work on achieving high performance at low occupancy (Zhu et al., 2017). We also generalize the analysis by building on Little's law. Little's law uses latency \(\mathbb{L}\) and throughput \(\mathtt{THR}\) to infer the concurrency \(\mathbb{C}\) of the given hardware:
\[\mathbb{C}=\mathbb{L}\times\mathtt{THR} \tag{13}\]
The latency \(\mathbb{L}\) of an instruction can be gathered by common microbenchmarks (Zhu et al., 2017; Zhang et al., 2017). The throughput \(\mathtt{THR}\) of instructions is available in Nvidia's CUDA programming guide (Bordes et al., 2017) and documents (Zhu et al., 2017).
As long as the parallelism \(\mathtt{PAR}\) provided by the code is larger than the concurrency provided by the hardware, we consider that the code saturates the hardware:
\[\mathtt{PAR}\geq\mathbb{C} \tag{14}\]
There are two ways of providing parallelism: number of threads (\(N_{threads}\)) and Instruction Level Parallelism (\(ILP\)). So, we have:
\[\mathtt{PAR}=N_{threads}\times ILP \tag{15}\]
Unlike Volkov's analysis, instead of maximizing the parallelism with the combination of \(ILP\) and \(N_{threads}\), we aim to find a minimal combination of \(N_{threads}\) and \(ILP\) that saturates the device:
\[N_{threads}\times ILP=\mathtt{PAR}\geq\mathbb{C}=\mathbb{L}\times\mathtt{THR} \tag{16}\]
To maintain a certain level of parallelism, we can reduce the occupancy (\(N_{threads}\)) and increase \(ILP\) simultaneously. We reduce the occupancy only down to the point beyond which a further reduction would no longer increase the resources available per thread. In the current generation of GPUs (A100), reducing the occupancy of memory-bound kernels to less than 12.5% will not increase the available registers per thread (Bordes et al., 2017). So, we set our aim conservatively at \(Occupancy=12.5\%\), or \(N_{threads}=256\).
In this research, we focus on double precision global memory access, shared memory access, and DFMA, all of which are the basic operations in stencil computation. Based on our experimentation, \(ILP=4\) and \(Occupancy=12.5\%\) (\(N_{threads}=256\)) provide enough parallelism for all three operations. We set this as a basic parallelism combination for our implementation. Note that the numbers above may vary for other GPUs, yet the analysis still holds.
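The check itself is only a few lines of host code; the latency and throughput numbers below are placeholders that would normally come from microbenchmarks or vendor documentation, not values reported in this paper.

```
#include <cstdio>

// Equations 13-16: does N_threads x ILP saturate one pipeline?
int main() {
  double latency    = 30.0;  // placeholder: instruction latency in cycles
  double throughput = 32.0;  // placeholder: operations per cycle (per SM)
  double concurrency = latency * throughput;        // Eq. 13

  int n_threads = 256;       // 12.5% occupancy on an A100 SM
  int ilp       = 4;
  double par    = double(n_threads) * ilp;          // Eq. 15

  std::printf("need %.0f in-flight ops, have %.0f -> %s\n",
              concurrency, par,
              par >= concurrency ? "saturated" : "not saturated");
  return 0;
}
```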
### Desired Depth
We use the attainable performance analysis (Section 5.1) to infer the desired depth. We aim at determining a sufficiently deep temporal blocking size to shift the bottleneck.
In this study, we are less concerned with whether the bottleneck shifts to computation or cache bandwidth. To simplify the discussion, we assume that the optimization goal is shifting the bottleneck from global memory to shared memory. This assumption is true for most of the star-shaped stencils (Zhu et al., 2017). Accordingly, we have:
\[\frac{a_{sm}\times t}{\mathbb{B}_{sm}}\times\mathbb{D}_{sm}\geq\frac{a_{gm}}{ \mathbb{B}_{gm}}\times\mathbb{D}_{gm} \tag{17}\]
#### 6.2.1. Case Study: 2D 5-Point Jacobian Stencil (representing stencils w/o device tiling)
Ideally, we have \(\mathbb{D}_{sm}=\mathbb{D}_{gm}\). In A100, \(\mathbb{B}_{gm}=1555\) GB/s, \(\mathbb{B}_{sm}=19.49\) TB/s. In our 2D 5-point implementation, \(\mathtt{a}_{gm}=2\) (assuming perfect caching), \(\mathtt{a}_{sm}=4\). According to Equation 17, we have \(t\geq 6.3\). In \(t=7\), we measured the performance
of 440 GCells/s. We can fine-tune to achieve slightly better performance at \(t=12\), where we measured 482 GCells/s. There is only a 10% difference in performance. The slight inaccuracy might come from the fact that, on average, the global memory accesses per data point is not perfectly cached.
#### 6.2.2. Case Study: 3D 7-Point Jacobian Stencil (representing stencils w/ device tiling)
In device tiling 3D 7-point stencil, \(\mathbb{D}_{gm}\) must also include the halo region between thread blocks. As such, we have:
\[\mathbb{D}_{gm}=(tile_{x}\times tile_{y})+(tile_{x}+tile_{y})\times 2\times t\times rad \tag{18}\]
We intend to determine a \(t\) that satisfies:
\[\frac{a_{sm}\times\mathbb{D}_{sm}\times t}{\mathbb{B}_{sm}}>\frac{a_{gm}\times\mathbb{D}_{gm}}{\mathbb{B}_{gm}} \tag{19}\]
We assume that \(tile_{x}=tile_{y}=32\). We have \(a_{sm}=4.5\), \(a_{gm}=2\). So we can get \(t>18.34\). In this situation, the on-chip memory per thread block desired for EBISU is 352 KB, which exceeds the capacity of A100 (164 KB).
### Device Tiling or SM Tiling?
Device tiling trades redundant computation for device level synchronization. In this section, we focus on the performance implications, in EBISU, of using one single tile per device (w/ device level synchronization), by comparing its practical attainable performance with that of the version that does not use one single tile per device (w/o device level synchronization).
#### 6.3.1. Case Study: 2D 5-Point Jacobian Stencil
In 2d5pt, we have \(\mathbb{T}_{stencil}=\mathbb{T}_{sm}\) for both the overlapped tiling and the device level tiling. We simplify the discussion by using the valid proportion \(\mathbb{V}\), i.e., the fraction of the updated output after excluding the halo. The higher the valid proportion, the higher the performance \(\mathbb{P}\). In overlapped tiling, for 2d5pt we have \(t=7\) (Section 6.2.1) and \(rad=1\), so \(\mathbb{V}_{SMtile}\approx 95\%\).
For device level tiling, we can go as deep as \(t=15\), which gives \(\mathbb{T}_{sm}=2.05us\). Because \(\mathbb{T}_{Dsync}=1.2us\), we accordingly have \(\mathbb{V}_{Dtile}=\mathbb{T}_{sm}/(\mathbb{T}_{sm}+\mathbb{T}_{Dsync})\approx 63\%\).
So, we have: \(\mathbb{V}_{Dtile}\ll\mathbb{V}_{SMtile}\).
For 2D stencils of other shapes, we get:
\[\mathbb{P}_{Dtile}(2D)\ll\mathbb{P}_{SMtile}(2D) \tag{20}\]
As a result, in 2D stencils, the overhead of thread block level overlapped tiling is negligible, making device tiling less beneficial. This result stands true for all 2D stencils we studied in A100.
#### 6.3.2. Case Study: 3D 7-Point Jacobian Stencils
In 3d7pt, we cannot shift the bottleneck to shared memory in overlapped (within acceptable overhead) or device tiling. We need to compare the Practical Attainable Performance in both cases to judge.
We have \(\mathbb{V}_{SMtile}=(34-2\times rad\times t)^{2}/34^{2}\). In 3d7pt, we have \(rad=1\), \(t=3\), \(\mathbb{V}_{SMtile}\approx 77\%\). In \(t=3\), we have \(\mathbb{P}_{SMtile}=292\) GCells/s, and \(\mathbb{P}\mathbb{P}_{SMtile}\approx 225\) GCells/s.
On the other hand, for device tiling, we can go as deep as \(t=8\), so we have \(\mathbb{T}_{gm}=2.42us\). Because \(\mathbb{T}_{Dsync}=1.2us\), we get \(\mathbb{V}_{Dtile}\approx 67\%\). In \(t=8\) we have \(\mathbb{P}_{Dtile}=365\) GCells/s. Accordingly, we have \(\mathbb{P}\mathbb{P}_{Dtile}\approx 244\) GCells/s.
So, we have \(\mathbb{P}\mathbb{P}_{Dtile}>\mathbb{P}\mathbb{P}_{SMtile}\) on the 3d7pt stencil.
We measured, for instance, 151 GCells/s without device tiling and 197 GCells/s with device tiling. The experimental results are consistent with the analysis (for 3D stencils of other shapes as well):
\[\mathbb{P}_{Dtile}(3D)>\mathbb{P}_{SMtile}(3D) \tag{21}\]
As a result, for 3D stencils, the overhead of thread block level overlapped tiling is so significant that it prohibits the temporal blocking implementation from going deeper. This result stands true for all 3D stencils we studied in A100.
Based on the analyses above, in EBISU, we only implement device tiling for 3D stencils. The analysis in the following section is built on top of this decision.
### Deeper or Wider?
As the capacity of on-chip memory is limited, there is a trade-off between increasing the width of spatial blocking and increasing the depth of temporal blocking. In this section, we discuss the heuristic we use for parameter selection in EBISU.
#### 6.4.1. Case Study: 2D 5-Point Jacobian Stencil
Firstly, as Section 6.3.1 showed, the overlapped tiling overhead of the 2D 5-Point Jacobian stencil is negligible. Additionally, according to Section 6.2.1, in theory, at depth \(t=7\) we shift the bottleneck from global memory to shared memory.
As such, after the bottleneck is shifted, we aim at wider spatial blocking to reduce the overhead of overlapped tiling, as discussed in Section 5.2.1. Yet, we still need to consider implementation simplicity. For example, we choose a tiling of size \(tile_{x}=256\) instead of \(tile_{x}=328\), since the latter is hard to implement in CUDA.
#### 6.4.2. Case Study: 3D 7-Point Jacobian Stencil
For simplicity, we assume that the very first plane loaded and the last plane stored have already been amortized. Then, for global memory access, we only focus on the halo region. According to Equation 17, we have:
\[\frac{tile_{x}\times tile_{y}\times a_{sm}}{\mathbb{B}_{sm}}>\frac{(tile_{x}+tile_{y})\times 2\times a_{gm}\times rad}{\mathbb{B}_{gm}} \tag{22}\]
We assume that \(tile_{y}=tile_{x}\). So, we can get:
\[tile_{y}=tile_{x}>\frac{4\times a_{gm}\times\mathbb{B}_{sm}}{a_{sm}\times \mathbb{B}_{gm}}\times rad \tag{23}\]
In our 3d7pt implementation, \(a_{gm}=2\) and \(a_{sm}=4.5\). We thus need \(tile_{y}=tile_{x}>22.3\). For implementation convenience, we use \(32\times 32\) (which also fits the minimal necessary parallelism that saturates the device, as discussed in Section 6.1). As such, once the spatial tile is large enough relative to the overlapping halo region, we run the temporal blocking as deep as possible to amortize the overhead of the device (grid) level synchronization.
| Type | Parallelism Combination (\(N_{threads}\times ILP\)) | SM Tiling (\(tile_{x}\times tile_{y}\)) | Device Tiling | Temporal Blocking Strategy | Circular Multi-Queue |
| --- | --- | --- | --- | --- | --- |
| 2D stencils | \(256\times 4\) | \(256\times 4\) | – | Deep enough to shift the bottleneck | Compute |
| 3D stencils | \(256\times 4\) | \(32\times 32\) | \(12\times 6\) | As deep as possible | Shifting |

Table 1. Design choices for EBISU.
## 7. Evaluation
We experiment on a wide range of 2D and 3D stencils (listed in Table 2). The test data are generated by STENCILGEN (Tang et al., 2019). We evaluate the benchmarks on an NVIDIA A100-PCIe GPU device (host CPU: Intel Xeon E5-2650).
### Compile Settings of EBISU
The code is compiled with NVCC-11.5 (CUDA driver V11.5.119) and gcc-4.8.5, using the flags "-rdc=true -Xptxas -v -std=c++14". We only generate code for the A100 architecture 2. The "-rdc=true" flag is necessary for enabling grid level synchronization, so we set it by default. We use C++14 features, so we add the "-std=c++14" flag. "-Xptxas -v" is set to gather information on register usage.
Footnote 2: setting CUDA_ARCHITECTURES to '80' in CMake.
### Evaluation Setup
#### 7.2.1. Domain Size
We used the domain sizes listed in Table 2 for EBISU, comparable to those used in the literature (Brandt et al., 2017; Chen et al., 2019; Chen et al., 2019).
#### 7.2.2. Warm-Up and Timing
For all experiments, we do warm-up iterations and then use GPU event APIs to measure one kernel run. We repeat this process ten times and report the peak.
#### 7.2.3. Depth of Temporal Blocking
We only evaluate a single kernel. Therefore, the total number of time steps is equal to the depth of temporal blocking of each implementation in each stencil benchmark. We summarize the depth of temporal blocking in Table 3.
### Comparing with State-Of-The-Art Implementations
We compare EBISU with the state-of-the-art temporal blocking implementations AN5D (Shen et al., 2017) and STENCILGEN (Tang et al., 2019), and the state-of-the-art auto-tuning tools ARTEMIS (Shen et al., 2017) and DRSTENCIL (Shen et al., 2017).
#### 7.3.1. Setting up State-Of-The-Art Libraries
We use the domain sizes reported by each library in its original paper (we do not adversely change domain sizes). We assume that the libraries achieve reasonably good performance in the settings used in their original papers. For example, in 2D stencils, AN5D used \(16384^{2}\), while STENCILGEN used \(8192^{2}\). ARTEMIS did not report 2D stencils; we used the same setting as STENCILGEN. Details can be obtained from the original papers (Shen et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
As for timing and warm-up, AN5D's original code already does the warm-up, so we use the default setting. We use the same host warm-up and timer functions as EBISU to test the kernel performance for STENCILGEN, ARTEMIS, and DRSTENCIL.
The detailed settings are listed as follows:
**STENCILGEN** We used the code from the AD/AE appendix (Shen et al., 2017) of the original paper. We do not change anything inside the kernels.
| Stencil [Order, FLOPs/Cell] | Domain Size | \(a_{sm}\) w/o RST | \(a_{sm}\) w/ RST |
| --- | --- | --- | --- |
| 2d5pt [1, 10] | \(8352^{2}\) | 6 | 4 |
| 2d9pt [2, 18] | \(8064^{2}\) | 10 | 6 |
| 2d9pt-gol [1, 18] | \(8784^{2}\) | 10 | 4 |
| 2d25pt (gaussian) [2, 25] | \(8640^{2}\) | 26 | 6 |
| 3d7pt (heat) [1, 14] | \(2560\times 288\times 384\) | 8 | 4.5 |
| 3d13pt [2, 26] | \(2560\times 288\times 384\) | 14 | 7 |
| 3d17pt [1, 34] | \(2560\times 288\times 384\) | 18 | 5.5 |
| 3d27pt [1, 54] | \(2560\times 288\times 384\) | 28 | 5.5 |
| poisson [1, 38] | \(2560\times 288\times 384\) | 20 | 5.5 |

Table 2. Stencil benchmarks. Readers can refer to (Shen et al., 2017; Wang et al., 2019) for detailed descriptions. We also include the ideal shared memory access times per cell, \(a_{sm}\), when applying redundant register streaming (w/ RST) and without it (w/o RST).
Figure 8. Percent of occupancy achieved and resources used (registers and shared memory) for EBISU and SOTA libraries among all stencil benchmarks.
| Type | STENCILGEN | AN5D | DRSTENCIL | ARTEMIS | EBISU |
| --- | --- | --- | --- | --- | --- |
| 2d5pt | 4 | 10 | 3 | 12 | 12 |
| 2d9pt | 4 | 5 | 2 | 6 | 8 |
| 2d9pt-gol | 4 | 7 | 2 | 6 | 6 |
| 2d25pt (gaussian) | 2 | 5 | 2 | 3 | 4 |
| 3d7pt (heat) | 4 | 6 | 3 | 3 | 8 |
| 3d13pt | 2 | 4 | 2 | 1 | 5 |
| 3d17pt | 2 | 3 | 2 | 2 | 6 |
| 3d27pt | 2 | 3 | – | 2 | 5 |
| poisson | 4 | 3 | 2 | 2 | 6 |

Table 3. Depth of temporal blocking for each stencil implementation in this evaluation.
Figure 7. Speedup of EBISU over the state-of-the-art temporal blocking implementations. We also plot the performance of EBISU (right Y-axis plotted as ’+’ ticks).
**AN5D** AN5D is a code auto-generation tool. We only used the code already generated in its repository (Zhou et al., 2017). We ported the makefile system to A100 and iterated over all generated codes to find the one with the highest performance for each stencil benchmark. The original code did not include some stencil benchmarks we use; we use the implementations with similar memory access patterns to represent them: gaussian (box2d2r), j3d7pt (star3d1r), j3d13pt (star3d2r), j3d17pt (j3d27pt) and poisson (j3d27pt).
**DRSTENCIL** DRSTENCIL (Zhou et al., 2017) is also an auto-tuning tool. We use the benchmarks in its codebase (Zhou et al., 2017). In the paper, the authors included only an implementation of the j3d7pt stencil among the 3D stencils. We extend their j3d7pt stencil setting to the other 3D stencils for comparison. However, with the j3d7pt setting, DRSTENCIL was unable to generate runnable code for j3d27pt. We report the kernel with the peak performance among the policies that DRSTENCIL iterated over.
**ARTEMIS** ARTEMIS is an auto-tuning tool. We use the benchmark in the codebase (Zhou et al., 2017). We replaced the profiler nvprof (deprecated) with ncu. ARTEMIS (Zhou et al., 2017) only provides samples for 3d7pt and 3d27pt. We extend 3d7pt to all star-shape stencils (including heat and 2d star-shape stencils) and 3d27pt to all box-shape stencils (including poisson, 3d17pt and 2d box-shape stencils). We report the kernel with the peak performance among the policies that ARTEMIS iterated over.
#### 7.3.2. Performance Comparison
Figure 7 shows the speedup of EBISU over state-of-the-art temporal blocking implementations. EBISU shows a clear performance advantage over all of the state-of-the-art temporal blocking libraries, i.e., STENCILGEN and AN5D. It is also faster than the state-of-the-art auto-tuning tool DRSTENCIL and ARTEMIS. EBISU achieves a geomean speedup of over 2.0x when comparing with each state-of-the-art. When comparing EBISU with the best state-of-the-art in each stencil, EBISU achieves a geomean speedup of 1.49x.
#### 7.3.3. Resources
We additionally report the occupancy and the resources used for all the benchmarks with the ncu profiler (Figure 8). EBISU is able to use the on-chip resources efficiently despite its low occupancy (12.5%). It is worth noting that, as Table 3 shows, EBISU usually has deeper temporal blocking. However, EBISU does not show significantly higher register pressure than other implementations. EBISU can, on average, do temporal blocking 1.3x deeper than the deepest state-of-the-art implementations, while using only 87% of the registers of the most register-consuming state-of-the-art equivalent kernel.
### Performance Breakdown
The remarkable speedup achieved by EBISU in comparison to other SOTA methods can be attributed to a fundamental shift in GPU programming principles. While existing SOTAs typically focus on constraining resources to enhance parallelism, EBISU constrains parallelism to optimize resource utilization. This novel approach enables the implementation of resource-scalable schemes, which ultimately contribute to EBISU's performance.
In this section, we provide a detailed explanation of how the optimizations proposed in earlier sections impact the performance of EBISU. To demystify their effects, we present case studies involving 2D 5-Point Jacobian stencils (representing 2D stencils) and 3D 7-Point Jacobian stencils (representing 3D stencils). Figure 9 displays the roofline plot of various implementations, with the black arrow indicating the incremental implementation of each scheme.
For the roofline analysis, we report the performance as measured in TFLOPS (teraflops). Table 2 shows the relationship between TFLOPS and GCells/s metrics.
#### 7.4.1. Base
The BASE implementation refers to the approach that applies minimal parallelism analysis, as discussed in Section 6.1. In this phase, we prepare the necessary resources for EBISU. It is important to note that in the case of the 3D 7-Point stencil, the BASE implementation already incorporates _device tiling_, similar to the approach employed in the existing research of PERKS (Zhou et al., 2017).
#### 7.4.2. Circular Multi-Queue (CMQ)
CMQ is a foundation for deep temporal blocking. As Figure 9 shows, in 2D stencils, we increase the depth of temporal blocking to move the bottleneck from global memory to shared memory. In 3D stencils, due to the shared memory's limited capacity, we only move the Operation Intensity (OI) from left to right. Either way, we move the OI such that we increase the attainable performance shown in the roofline model.
#### 7.4.3. Prefetching (PRE)
As Figure 9 shows, the PRE scheme has the effect of moving the roofline plot towards the attainable bound. However, it does not directly impact the attainable bound itself.
#### 7.4.4. Lazy Streaming (LST)
The LST scheme aims to reduce synchronizations by using long buffers. By default, we employ LST to minimize device level synchronizations. This section specifically focuses on the impact of LST on reducing thread block synchronizations. As illustrated in Figure 9.a, applying LST to the 2D 5-point stencil brings its performance closer to the attainable bound. However, in the case of the 3D stencil, as shown in Figure 9.b, applying LST may harm performance. This is primarily because global memory is still the bottleneck, and the additional on-chip memory space required by the LST implementation leads to a shallower temporal blocking. This results in a leftward shift of the OI, which consequently reduces the attainable performance. It is worth noting that in the final version of EBISU, disabling LST for the 3D 7-point stencil allows the temporal blocking depth to double, from \(t=8\) to \(t=16\), increasing the raw performance from 2.7 TFLOPS to 2.9 TFLOPS. However, when excluding the redundant halo, the performance dips from 2.4 TFLOPS to 2.3 TFLOPS. Therefore, this result has been excluded from the discussion.
#### 7.4.5. Redundant Register Streaming (RST)
RST's primary goal is to cut down shared memory access time (refer to Table 2). By doing so, we can shift the roofline plot closer to the compute bound from left to right when shared memory is the bottleneck (as shown in Figure 9.a). Also, we leverage RST to cache a portion of the tiling, which helps reduce the amount of data cached in shared memory. This enables us to achieve deeper temporal blocking and move the roofline plots closer to the compute bound from left to right, when global memory remains the bottleneck (as shown in Figure 9.b).
#### 7.4.6. Relations Between Optimizations
The PRE and LST optimizations have the effect of improving performance and bringing it closer to the attainable bound. The RST optimization is designed to shift the roofline plots to the right, to increase the attainable bound.
The red arrows in Figure 9 clearly show that disabling either of these optimizations results in a degradation of performance.
#### 7.4.7. Practical Attainable Performance
In the 2D 5-point stencil, we achieved 4.8 TFLOPS (80% of the attainable bound). In the 3D 7-point stencil, we achieved 2.7 TFLOPS (50% of the attainable bound). The large gap is due to the omission of the overheads in the roofline model. When we account for the overheads in our model (Section 5), we achieve 88% and 80% of PP in the 2D 5-point and 3D 7-point stencils, respectively. A model that considers the overheads can describe the practical attainable performance better; as such, this model not only contributes to the decision-making but also benefits the performance of EBISU.
## 8. Related Works
Apart from the tiling optimizations we covered in Section 2.1, there are many stencil optimizations that are architecture-specific. For example, vectorization (Shi et al., 2017; Wang et al., 2018; Wang et al., 2018); cache optimizations on CPUs (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). For GPUs (Shi et al., 2017; Wang et al., 2018; Wang et al., 2018), Chen et al. proposed an execution model on top of the shuffle operation on GPU (Chen et al., 2018); Liu et al. uses tensor cores to accelerate low precision stencils (Wang et al., 2018). Rawat et al. also summarized optimizations that can be used in stencil optimization, i.e., streaming, unrolling, prefetching (Wang et al., 2018), and register reorder (Wang et al., 2018).
State-of-the-art implementations are usually built on top of multiple optimizations. For example, wavefront diamond blocking (Beng et al., 2016) is built on top of vectorization, cache optimization, streaming, and diamond tiling; STENCILGEN (Wang et al., 2018) is built on top of shared memory optimization, streaming, and N.5D tiling.
However, combining different optimizations is tedious to implement. Much research therefore focuses on automatic code generation based on domain specific languages (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) or on compiler-based approaches (Wang et al., 2018; Wang et al., 2018). Some optimizations, especially those related to registers, are difficult to implement manually. Matsumura et al. implemented AN5D (Matsumura et al., 2018), which generates code that uses registers effectively.
## 9. Conclusion and Future Work
In this paper, we propose EBISU, a novel temporal blocking approach. EBISU relies on low occupancy and on mapping large tiles over the device. The freed resources are then used to improve data locality. We compared EBISU with two state-of-the-art temporal blocking implementations and two state-of-the-art autotuning tools. EBISU consistently shows a performance advantage, achieving a geomean speedup of 1.49x over the best alternative state-of-the-art implementation for each stencil benchmark.
This paper focuses on studying how modern GPU characteristics influence the optimization of temporal blocking stencils. Nevertheless, as EBISU proved effective, its optimization approach can be absorbed into production libraries like Halide (Haldie et al., 2018) so that the end user can get the performance with minimal effort.
###### Acknowledgements.
This work was supported by JSPS KAKENHI under Grant Numbers JP22H03600 and JP21K17750. This work was supported by JST, PRESTO Grant Number JPMJPR20MA, Japan. This paper is based on results obtained from JPNP20006 project, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The publisher acknowledges the US government license to provide public access under the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan/](http://energy.gov/downloads/doe-public-access-plan/)). The authors wish to express their sincere appreciation to Jens Domke, Aleksandr Drozd, Emil Vatai and other RIKEN R-CCS colleagues for their invaluable advice and guidance throughout the course of this research. Finally, the first author would also like to express his gratitude to RIKEN R-CCS for offering the opportunity to undertake this research in an intern program.
Figure 9. Roofline plots for different implementations in Section 4. We plot 2D 5-Point Jacobian stencil implementations to represent 2D stencils and 3D 7-Point Jacobian stencil implementations to represent 3D stencils. The black arrows link the incremental implementations from a _BASE_ implementations. The 3D _BASE_ applies device tiling (Section 4.1). The _LST_ refers to thread block level lazy streaming. Device tiling without lazy streaming will be extremely slow as can be inferred in Section 5.2.2. |
2310.13153 | Discovering Novel Halide Perovskite Alloys using Multi-Fidelity Machine
Learning and Genetic Algorithm | Expanding the pool of stable halide perovskites with attractive
optoelectronic properties is crucial to addressing current limitations in their
performance as photovoltaic (PV) absorbers. In this article, we demonstrate how
a high-throughput density functional theory (DFT) dataset of halide perovskite
alloys can be used to train accurate surrogate models for property prediction
and subsequently perform inverse design using genetic algorithm (GA). Our
dataset consists of decomposition energies, band gaps, and photovoltaic
efficiencies of nearly 800 pure and mixed composition ABX$_3$ compounds from
both the GGA-PBE and HSE06 functionals, and are combined with ~ 100
experimental data points collected from the literature. Multi-fidelity random
forest regression models are trained on the DFT + experimental dataset for each
property using descriptors that one-hot encode composition, phase, and
fidelity, and additionally include well-known elemental or molecular properties
of species at the A, B, and X sites. Rigorously optimized models are deployed
for experiment-level prediction over > 150,000 hypothetical compounds, leading
to thousands of promising materials with low decomposition energy, band gap
between 1 and 2 eV, and efficiency > 15%. Surrogate models are further combined
with GA using an objective function to maintain chemical feasibility, minimize
decomposition energy, maximize PV efficiency, and keep band gap between 1 and 2
eV; hundreds more optimal compositions and phases are thus discovered. We
present an analysis of the screened and inverse-designed materials, visualize
ternary phase diagrams generated for many systems of interest using ML
predictions, and suggest strategies for further improvement and expansion in
the future. | Jiaqi Yang, Panayotis Manganaris, Arun Mannodi-Kanakkithodi | 2023-10-19T20:55:08Z | http://arxiv.org/abs/2310.13153v1 | Discovering Novel Halide Perovskite Alloys using Multi-Fidelity Machine Learning and Genetic Algorithm
###### Abstract
Expanding the pool of stable halide perovskites with attractive optoelectronic properties is crucial to addressing current limitations in their performance as photovoltaic (PV) absorbers. In this article, we demonstrate how a high-throughput density functional theory (DFT) dataset of halide perovskite alloys can be used to train accurate surrogate models for property prediction and subsequently perform inverse design using genetic algorithm (GA). Our dataset consists of decomposition energies, band gaps, and photovoltaic efficiencies of nearly 800 pure and mixed composition ABX\({}_{3}\) compounds from both the GGA-PBE and HSE06 functionals, and are combined with \(\sim\) 100 experimental data points collected from the literature. Multi-fidelity random forest regression models are trained on the DFT + experimental dataset for each property using descriptors that one-hot encode composition, phase, and fidelity, and additionally include well-known elemental or molecular properties of species at the A, B, and X sites. Rigorously optimized models are deployed for experiment-level prediction over > 150,000 hypothetical compounds, leading to thousands of promising materials with low decomposition energy, band gap between 1 and 2 eV, and efficiency > 15%. Surrogate models are further combined with GA using an objective function to maintain chemical feasibility, minimize decomposition energy, maximize PV efficiency, and keep band gap between 1 and 2 eV; hundreds more optimal compositions and phases are thus discovered. We present an analysis of the screened and inverse-designed materials, visualize ternary phase diagrams generated for many systems of interest using ML predictions, and suggest strategies for further improvement and expansion in the future.
## I Introduction
The importance of engineering and optimizing halide perovskites (HaPs) for use as absorbers in next-generation solar cells cannot be overstated. There is a deluge of experimental and computational work on this topic emerging every single day [2; 3; 4; 5; 6], even if such attempts appear to be plateauing, incremental improvements are important and motivate more comprehensive investigations. Record power conversion efficiencies were recently reported by multiple sources for perovskite-based tandem solar cells [7; 8]. Curiously, there seems to be ever more room for better performance by tailoring the ABX\({}_{3}\) composition (for canonical 3D crystalline perovskites) in terms of the number and nature of species that inhabit the A/B/X sites, dopants, phase stability, interfaces, and defects [9]. Of course, HaPs may also adopt the double perovskite structure or a 2D phase via the use of organic spacers, and be used in both single-junction or tandem solar cells with wide or low band gap semiconductors.
Our research group has made several contributions to the HaP literature over the last few years, reporting updates on discovering promising new materials using density functional theory (DFT)-based computational screening and machine learning (ML) [9; 10; 11; 12]. In 2022, we published a major study on using DFT-ML to design novel B-site mixed ABX\({}_{3}\) HaPs with desired stability, band gap, photovoltaic (PV) figure of merit, and defect tolerance [10]. In a review paper in 2022, we covered several similar efforts from the literature utilizing high-throughput (HT) computations and ML models trained on small, medium, or large datasets of perovskite properties, to drive the discovery and understanding of HaPs [9]. Recently, we published another comprehensive study on generating one of the largest known DFT datasets of pseudo-cubic HaP alloys, with mixing allowed at A, B, or X sites, from multiple DFT functionals, resulting in (a) a thorough analysis of how common cation and anion choices and types of mixing affect the stability and optoelectronic properties, (b) an understanding of how different levels of theory in DFT reproduce experimentally measured properties, and (c) an open-access dataset that can be utilized by anybody for data mining and ML endeavors [11]. We followed up this study by extending the DFT dataset to a series of non-cubic HaPs in multiple prototype phases, using multiple semi-local and non-local DFT functionals, as well as applying varying degrees of octahedral distortions and strain and different types of ionic ordering in alloys [12].
In the present contribution, we build upon our prior work and present multi-fidelity ML regression models [13] trained on our existing multi-phase multi-functional HaP dataset of \(\sim 10^{3}\) points or so, leading to accurate predictions across hundreds of thousands of possible compounds, screening of promising candidates, and inverse design using genetic algorithm (GA) to expand the scope of materials selection beyond HT-screening. We create a fusion dataset of HaPs containing three properties, namely
the decomposition energy, electronic band gap, and spectroscopic limited maximum efficiency (SLME) [14], estimated from two types of DFT functionals--the semi-local GGA-PBE (570 data points), and non-local hybrid HSE06 with spin-orbit coupling (SOC) (347 data points). This dataset of 917 DFT points is further enhanced with 97 experimental data points collected from the literature [15; 16], reporting HaP compositions and their measured band gap and photo-conversion efficiency (PCE). **Figure 1(a)** shows a standard pseudo-cubic 2\(\times\)2\(\times\)2 perovskite supercell, and **Figure 1(b)** shows the chemical space considered in our work in terms of A, B, and X species, with MA and FA representing the organic molecules methylammonium and formamidinium, respectively. **Figure 1(c)** further shows the format of the dataset, with every compound represented in terms of a 14-dimensional composition vector (capturing fractions of the 5 A species, 6 B species, and 3 X species in any compound), a 36-dimensional "elemental properties" vector [10; 11] (12 weight-averaged properties each for A, B, and X site species, such as ionic radius, electron affinity, etc.), and one of four prototype perovskite phases it could adopt--cubic, tetragonal, orthorhombic, or hexagonal. Additional columns represent the source of the data (PBE, HSE, or experiment), and the properties of interest: decomposition energy or \(\Delta\)H, band gap or E\({}_{gap}\), and SLME (or PCE from experiment).
The use of "multi-fidelity" learning [13; 17; 18] (or multi-task learning [19; 20]) is motivated by the fact that different DFT functionals work well for different HaP compositions, and accuracy compared with experiments is not as trivial as one would imagine. While GGA-PBE, including variants such as PBEsol (improved PBE for solids [21]) and PBE-D3 (explicit van der Waals corrections [22]), reproduces lattice parameters and stability reasonably well, it is typically inaccurate for electronic, optical, and defect properties [9]. Hybrid HSE06 is much better for optoelectronic properties, but is expensive, especially when SOC is incorporated for the relativistic effects of heavy atoms such as Pb [10; 11]. For hybrid organic-inorganic perovskites (HOIPs) such as MAPbI3 and MAPbBr3, PBE without SOC often reproduces the experimental band gap as well as HSE+SOC. While HSE+SOC should work for inorganic HaPs in general, this is not always the case, as the mixing fraction \(\alpha\) in HSE06 (default value of \(\alpha\) = 0.25) could itself be tuned; e.g., it was shown that for CsPbI3, \(\alpha\) = 0.41 reproduces the band gap more accurately [23], while \(\alpha\) = 0.50 works best for cubic FAPbI3 [12]. We posit that the true relationship between the perovskite chemistry, DFT functional, and experimental properties, is enormously complex, and by learning from a fusion dataset containing properties from multiple levels of theory as well as from experiments, comprehensive experiment-fidelity predictions could be achieved across a wide chemical space. Such models would exploit inherent correlations between DFT and experiments, as well as expand the reach of experimental predictions beyond the range of chemistries for which measured data is currently available, provided DFT data is indeed available in these unexplored spaces.
The remainder of this manuscript presents details of the ML approaches and subsequent screening and inverse design, followed by a systematic discussion of the results. The overall methodology is shown in **Figure 1(d)**, going from compiling the DFT+Expt dataset of properties and descriptors to training several single- and multi-fidelity regression models, resulting in enumeration, prediction and screening of promising candidates and inverse design of new compounds using GA, closing the experiment-fidelity perovskite design loop. We discuss the accuracy and merits of all single- and multi-fidelity regression
Figure 1: (a) An example pseudo-cubic 2x2x2 HaP supercell. (b) The chemical space of ABX\({}_{3}\) HaPs studied in this work. (c) The perovskite dataset formatted for multi-fidelity ML, including all inputs (composition, elemental properties, phase, and fidelity) and outputs (decomposition energy, band gap, and PV efficiency or SLME). (d) The perovskite design workflow showing data formatting, regression optimization, prediction, GA, and screening steps.
models based on random forest regression, and furthermore, present an analysis of the best compounds from screening and GA, in terms of the frequencies of occurrence of various chemical species and different types of mixing at cation or anion sites. All data and models are openly available to the community and will serve not only the discovery of novel HaPs for optoelectronic applications, but also provide a fertile playground for testing of a variety of ML techniques.
## Methods
### Compiling the HaP Dataset
The entire HaP dataset is pictured in **Figure 2**, divided in terms of the source of data (PBE, HSE, or Expt) and the perovskite phase (cubic, tetragonal, orthorhombic, or hexagonal). This data is compiled from our recent publications [10; 11; 12]. A majority of the data is for the cubic phase as a result of our initial high-throughput investigation [10; 11], whereas the non-cubic data was generated in follow-up work [12]. 570 data points are from PBE, 347 from HSE, and 97 from experiments [15; 16], resulting in a combined dataset of 1014 points. The decomposition energy (\(\Delta\)H) shows how likely the ABX\({}_{3}\) compound is to decompose to AX and BX\({}_{2}\) phases, with a mixing entropy contribution included as well, giving a convenient per formula unit (p.f.u.) metric for perovskite stability [10; 11]. E\({}_{gap}\) comes from accurate PBE and HSE electronic structure computations using dense k-point meshes, and from the experimental literature where techniques ranging from UV-vis absorption to photoluminescence spectroscopy have been applied [15; 16]. SLME is derived at 5\(\mu\)m sample thickness from the DFT-computed optical absorption spectrum based on previously developed approaches [14; 24], and is combined with measured PCE values reported from solar cells based on different HaP compositions. All DFT data are restricted to HaP compositions with any A, B, or X constituent (shown in **Figure 1(b)**) occurring only in fractions of _n_/8 (_n_ = 0, 1, 2,... 8), such that geometry optimization and subsequent electronic and optical calculations could be performed using the special quasirandom structures (SQS) approach [25] in 2\(\times\)2\(\times\)2 (cubic) or 2\(\times\)2\(\times\)1 (tetra, ortho, hex) supercells. Other specific DFT details and data analysis can be found in past publications [10; 11; 12].
The E\({}_{gap}\) vs \(\Delta\)H plot in **Figure 2(a)** shows that while there are dozens of compounds from all phases that lie in the \(\Delta\)H < 0.5 eV (a relaxed threshold) and 1 eV \(<\) E\({}_{gap}\) < 2 eV range, a majority of the compounds, primarily cubic, are in the undesirable ranges. **Figure 2(b)** shows that PV efficiencies peak around E\({}_{gap}\)\(\sim\) 1.5 eV and the favorable region (SLME/PCE > 15%) is dominated by cubic compounds, though that might just be a factor of the current dataset not containing as many non-cubic structures. From both plots, it is evident that only 10 to 20% of the entire dataset of 1014 points shows desired stability and optoelectronic properties. The compounds in this dataset cover the 14-dimensional ABX\({}_{3}\) chemical space adequately (including 5 A species, 6 B species, and 3 X species), in terms of how often and in what mixing fraction any particular species appears in the compound [11; 12]. In reality, this space is practically infinite, as mixing fractions could be as low or high as possible (and not just _n_/8 fractions), and any number of ions could be mixed together to create high entropy perovskites. Thus, a major motivation of this work is that by learning from a modest dataset of the order of 10\({}^{3}\) points, one could, in theory, make predictions for up to \(10^{6}\) points (or beyond), which would include all intermediate compositions and mixing fractions missing from the original dataset, and perform more comprehensive screening and design. **Table I** presents details of the PBE, HSE, and experimental datasets in terms of the number of data points for each phase and the available properties.

Figure 2: The perovskite dataset divided in terms of source of data (DFT-PBE, DFT-HSE, or experiment) and perovskite phase (cubic, tetragonal, orthorhombic, and hexagonal). (a) Band gap plotted against decomposition energy. (b) PV efficiency plotted against the band gap.
### Training Single- and Multi-Fidelity Surrogate Models
The dataset is formatted as shown in **Figure 1(c)** and fed into a random forest regression (RFR) algorithm for rigorous training and optimization of predictive models for each property. Standard data science and ML practices are applied: an 80-20 train-test split is used for the entire dataset, 5-fold cross-validation is applied, and grid-based hyperparameter optimization is performed. Relevant modules are imported from the Scikit-learn library [26] and all our code is available on Github [27]. Four types of descriptors are used as input (**X**) to the RFR models for predicting property **Y** (\(\Delta\)H, E\({}_{gap}\), SLME/PCE):
1. 14-dimensional composition vector: This encodes the HaP composition in terms of the fraction (between 0 and 1) of every A/B/X species in the compound.
2. 36-dimensional elemental properties vector: This uses 12 distinct well-known properties (the full list is provided in Table S1 in the SI) to represent the A, B, and X site constituents, using a weighted fraction when there is mixing at any site.
3. 4-dimensional phase vector: This one-hot encodes whether the perovskite phase is cubic, tetragonal, orthorhombic, or hexagonal.
4. 0-dim, 2-dim, or 3-dim fidelity vector: This one-hot encodes the source of the data (PBE, HSE, or Expt) [19, 13].
Three different types of RFR models are trained:
1. PBE, HSE, and Expt. single-fidelity (SF) models: The source of the data is not an input here, so a total of 54 dimensions are used as X and separate models are trained to predict \(\Delta\)H, E\({}_{gap}\), and SLME each from PBE and HSE, as well as E\({}_{gap}\) and PCE just from experiments. These models are subsequently referred to as PBE-sf, HSE-sf, and Expt-sf, respectively.
2. PBE+HSE multi-fidelity (MF) models: A 2-dim data source vector is included to yield a 56-dimensional X, used as input to predict the three properties simultaneously from both functionals based on the dataset of 917 DFT points. The advantage here is that the reach and prediction accuracy of the smaller and more expensive HSE dataset can be enhanced by utilizing its correlations with the supposedly inferior but larger PBE dataset. These predictions will be referred to as PBE-mf1 and HSE-mf1, respectively.
3. PBE+HSE+Expt MF models: A 3-dim data source vector is included to yield a 57-dimensional X, which is used as input to predict E\({}_{gap}\) and SLME or PCE simultaneously from PBE, HSE, and Expt based on the combined DFT-Expt dataset of 1014 points. A clear advantage is that the applicability of the much smaller experimental dataset can be enhanced by utilizing its correlations with the PBE and HSE data. These predictions will be referred to as PBE-mf2, HSE-mf2, and Expt-mf2, respectively.
RFR results after training and optimization are visualized in terms of parity plots between ground truth values (actual PBE, HSE, or Expt) on the x-axis and ML predictions on the y-axis, separately for each property and each source of data. It should be noted that the ultimate test of the model's predictive power is its performance on unseen data points, and ideally, one must look at the quality of prediction for every data point when it is considered as part of the test set. To achieve this, we adopt the following ensemble-based strategy: every SF and MF RFR model is trained 5000 times across each dataset, with an 80-20 split, such that every single data point is considered in the test set approximately 1000 times. Effective test set predictions are then achieved for all points as a mean over their \(\sim\) 1000 test predictions (also yielding the standard deviation, which serves as a rough uncertainty in prediction for any point). Thus, final parity plots, which will be presented and discussed later, show only test predictions for all PBE, HSE, and Expt points.
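The ensemble-averaging strategy maps onto a short scikit-learn sketch such as the one below; the number of repeats (50 here rather than 5000), the hyperparameters, and the toy data are illustrative only.

```python
import numpy as np
from collections import defaultdict
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def ensemble_test_predictions(X, y, n_models=50, test_size=0.2, seed=0):
    """Train many RFR models on random 80-20 splits and keep, for every point,
    only the predictions made while that point was in the test set."""
    rng = np.random.RandomState(seed)
    per_point = defaultdict(list)
    for i in range(n_models):
        train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=test_size,
                                               random_state=rng.randint(1 << 30))
        model = RandomForestRegressor(n_estimators=100, random_state=i)
        model.fit(X[train_idx], y[train_idx])
        for j, p in zip(test_idx, model.predict(X[test_idx])):
            per_point[j].append(p)
    mean = np.array([np.mean(per_point[j]) if per_point[j] else np.nan for j in range(len(y))])
    std = np.array([np.std(per_point[j]) if per_point[j] else np.nan for j in range(len(y))])
    return mean, std  # effective test predictions and a rough per-point uncertainty

# toy usage
X, y = np.random.rand(200, 57), np.random.rand(200)
y_pred, y_std = ensemble_test_predictions(X, y)
rmse = np.sqrt(np.nanmean((y_pred - y) ** 2))
```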
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Data Fidelity** & **Total Data Points** & **Phases (Data Points)** & **Properties** \\ \hline PBE & 570 & Cubic (469), Tetra (34), Ortho (37), Hex (30) & \(\Delta\)H, E\({}_{g}\), SLME \\ HSE & 347 & Cubic (246), Tetra (34), Ortho (37), Hex (30) & \(\Delta\)H, E\({}_{g}\), SLME \\ Expt. & 97 & Phases assigned from DFT-ML Predictions & E\({}_{g}\), PCE \\ \hline \end{tabular}
\end{table}
Table 1: Description of the perovskite dataset used for training ML models in this work, in terms of total number of data points, data points per perovskite phase, and the properties available.
### Enumeration, Prediction, and Screening
Since only a small subset of possible ABX\({}_{3}\) compositions is used for the DFT + Expt dataset, we generate a much larger dataset of "hypothetical" HaP compositions by populating the 14-dim composition vector with fractions of _n_/8, such that the A components sum up to 1, B components sum up to 1, and X components sum up to 3. A similar approach was applied in other works as well [10; 28]. To keep this combinatorial dataset somewhat tractable, we restrict it to only one type of mixing at a time (i.e., there will not be mixing at A site and B or X sites simultaneously) and only mixing fractions of _n_/8. This leads to a total of 37,785 unique compositions including thousands of A-site mixed, B-site mixed, and X-site mixed compounds each, which is transformed to 151,140 data points considering them in four phases each. All compounds are converted into descriptors and fed to the best RFR models, eventually resulting in prediction of their PBE/HSE-fidelity \(\Delta\)H as well as experiment-fidelity E\({}_{gap}\) and PV efficiency. Screening is performed to obtain the subset of these > 150,000 compounds with low decomposition energy, band gap between 1 and 2 eV, and PV efficiency > 15%. A minimal sketch of this enumeration is given below.
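Under the restrictions above (one mixed site at a time, fractions in multiples of 1/8), the enumeration reduces to a small combinatorial sketch such as the one below; the species lists are the same illustrative placeholders used earlier, and with these assumptions the count matches the 37,785 unique compositions quoted above.

```python
from itertools import product

A_SITES = ["K", "Rb", "Cs", "MA", "FA"]
B_SITES = ["Ca", "Sr", "Ba", "Ge", "Sn", "Pb"]
X_SITES = ["I", "Br", "Cl"]

def site_occupations(species, total_eighths):
    """All ways to distribute `total_eighths` eighths of occupancy over the
    given species (any number of them may mix, fractions in multiples of 1/8)."""
    occs = []
    for counts in product(range(total_eighths + 1), repeat=len(species)):
        if sum(counts) == total_eighths:
            occs.append({sp: c / 8 for sp, c in zip(species, counts) if c})
    return occs

def enumerate_compositions():
    """ABX3 compositions with mixing allowed at only one site at a time."""
    A_all, B_all = site_occupations(A_SITES, 8), site_occupations(B_SITES, 8)
    X_all = site_occupations(X_SITES, 24)            # X fractions sum to 3
    A_pure = [o for o in A_all if len(o) == 1]
    B_pure = [o for o in B_all if len(o) == 1]
    X_pure = [o for o in X_all if len(o) == 1]
    candidates = ([(a, b, x) for a in A_all for b in B_pure for x in X_pure] +
                  [(a, b, x) for a in A_pure for b in B_all for x in X_pure] +
                  [(a, b, x) for a in A_pure for b in B_pure for x in X_all])
    seen, out = set(), []
    for a, b, x in candidates:
        comp = {**a, **b, **x}
        key = tuple(sorted(comp.items()))
        if key not in seen:      # pure compounds appear in all three branches
            seen.add(key)
            out.append(comp)
    return out

print(len(enumerate_compositions()))  # 37785 under these assumptions
```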
### Inverse Design using GA
Finally, we move beyond the constraints of the restricted chemical space considered for enumeration, to perform a more efficient design of HaP compositions in any phase that satisfy conditions of DFT stability and experiment-level optoelectronic properties. A constrained or clamped Genetic Algorithm (GA) is used for this purpose [29; 30; 31], using a procedure wherein hundreds of arbitrary ABX\({}_{3}\) compositions are generated, with any kind of mixing allowed at A, B, or X sites, in any of the four phases, and a complex objective function is defined for each compound taking the following factors into account:
1. Chemical feasibility: Constituent fractions at A, B, and X sites should respectively sum up to 1, 1, and 3.
2. Suitability of modeling via DFT: Mixing fractions are restricted to be _n_/8, _n_/27, or _n_/64 (where _n_ is a positive integer), such that any composition could be simulated using a 2\(\times\)2\(\times\)2, 3\(\times\)3\(\times\)3, or 4\(\times\)4\(\times\)4 supercell.
3. Stability: \(\Delta\)H should be minimized to maintain maximum likelihood of perovskite formability. The HSE-level \(\Delta\)H prediction is used here.
4. Optoelectronic properties: E\({}_{gap}\) should be in between 1 and 2 eV and PV efficiency should be as high as possible, with both predictions at experiment-fidelity.
During any GA run, new HaP compositions are generated using concepts such as crossover, elitism, and mutation, so as to minimize the objective function, yielding a plot between GA generation and objective function, eventually resulting in the best material that satisfies all criteria. Running this several hundred times leads to a massive list of attractive materials that further improve on the compounds derived from enumeration \(\rightarrow\) prediction \(\rightarrow\) screening. Here, we run GA multiple times with different types of inputs, such as restricting the phase to be cubic or tetragonal, restricting the B-site to consider only Pb, Sn, and Ge, or to not consider Pb at all in a bid to achieve Pb-free perovskites, etc. Final results can be visualized for different families of HaPs in terms of PV efficiency vs band gap plots for hundreds of stable compounds. The "geneticalgorithm2" package is used for running the GA models [32].
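A simplified sketch of such an objective function is given below; the penalty weights, the property-predictor callables, and the supercell-fraction check are illustrative stand-ins for the actual function passed to the geneticalgorithm2 optimizer.

```python
import numpy as np

N_A, N_B, N_X = 5, 6, 3     # sizes of the A, B, X blocks of the composition vector

# mixing fractions compatible with 2x2x2, 3x3x3 or 4x4x4 supercells
ALLOWED = sorted({n / 8 for n in range(9)} |
                 {n / 27 for n in range(28)} |
                 {n / 64 for n in range(65)})

def supercell_penalty(frac):
    """Distance of a fraction from the nearest supercell-friendly value;
    only the fractional part matters, so X fractions up to 3 are handled."""
    f = frac % 1.0
    return min(abs(f - a) for a in ALLOWED)

def objective(x, predict_dH, predict_Eg, predict_pce):
    """Objective minimized by the GA for one candidate composition vector x.

    x:          14-dim vector of A, B, X site fractions (X fractions sum to 3)
    predict_*:  pre-trained ML models wrapped as callables (placeholders for
                the HSE-mf1 dH and Expt-mf2 Eg / PCE predictors)
    """
    x = np.asarray(x, dtype=float)
    a, b, hal = x[:N_A], x[N_A:N_A + N_B], x[N_A + N_B:]
    penalty = 0.0

    # 1. chemical feasibility: site fractions must sum to 1, 1 and 3
    penalty += 10.0 * (abs(a.sum() - 1.0) + abs(b.sum() - 1.0) + abs(hal.sum() - 3.0))

    # 2. suitability for DFT: fractions close to n/8, n/27 or n/64
    penalty += 5.0 * sum(supercell_penalty(f) for f in x)

    # 3. stability: penalize positive predicted decomposition energy
    penalty += max(predict_dH(x), 0.0)

    # 4. optoelectronics: band gap inside 1-2 eV, PV efficiency as high as possible
    eg = predict_Eg(x)
    penalty += max(1.0 - eg, 0.0) + max(eg - 2.0, 0.0)
    penalty += max(0.0, (25.0 - predict_pce(x)) / 25.0)

    return penalty
```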
## Results and Discussion
### DFT-Only Single-Fidelity and Multi-Fidelity RFR Models
**Figure 3** shows the PBE-sf and HSE-sf models for \(\Delta\)H, E\({}_{gap}\), and PV efficiency, with PBE and HSE data points presented on the same plot. The parity plots show effective test predictions, averaged over an ensemble of 5000 models, for all 917 data points from PBE and HSE. High accuracy is achieved for both \(\Delta\)H and E\({}_{gap}\), with RMSE values of 0.17 eV and 0.23 eV respectively for PBE, and 0.20 eV and 0.30 eV respectively for HSE. These are errors of 5% or less considering the range of values of both properties across the PBE and HSE datasets, indicating a 95% accuracy. Given that there are between 5 and 12 atoms p.f.u. in any HaP compound, the \(\Delta\)H RMSE converts to \(\sim\) 20-40 meV/atom, which is highly competitive with other ML formation energy predictions in the literature [33; 34]. The same can be said for E\({}_{gap}\) prediction errors between 0.2 and 0.3 eV as well [35; 36]. Prediction errors for PBE and HSE SLME are higher on average, with RMSE values of 2.12% and 1.78% respectively; these errors are \(\sim\) 10% of the total range of values, and show that the composition-based descriptors may not be sufficient for highly accurate prediction of PV efficiency. This is not a surprise, as the optical absorption spectra are likely sensitive to
the perovskite structure and would thus affect the SLME prediction. Furthermore, especially when experimental data are included in training, there are many factors beyond the perovskite composition alone, including the solar cell architecture, interfaces, and external conditions, that influence the PV efficiency. Nevertheless, the focus of the current work is on determining upper limit efficiencies as a function of HaP compositions and eventually screening materials which are likely to show high efficiencies.
Next, the DFT-only MF models are trained for each property by adding 2 descriptors indicating whether the data source is PBE or HSE, leading to PBE-mf1 and HSE-mf1 predictions. Results are presented in **Figure 4**, showing an improvement in performance over the SF models, with the parity plots once again showing averaged test-only predictions over 5000 models. PBE and HSE \(\Delta\)H models show RMSE values of 0.15 eV and 0.12 eV respectively. E\({}_{gap}\) predictions also show a slight improvement, with PBE RMSE of 0.17 eV and HSE RMSE of 0.20 eV, while SLME predictions show a marked improvement with RMSE of 1.78% for PBE and 1.35% for HSE. While \(\Delta\)H and E\({}_{gap}\) values in the HSE dataset cover a wide range of values, it can be seen from **Figures 2(b)**, 3, and 4 that the HSE SLME values clearly occur in a smaller range than the PBE SLME values, which is an artifact of the nature of this dataset and has been explained in prior work [11]. As such, we expect the MF models to theoretically expand the reach of the HSE predictions to new regions in the chemical space that might show higher PV efficiencies, something that may not be possible with SF models given the interpolative nature of regression models. Generally, the PBE and HSE \(\Delta\)H values, both SF and MF1 predictions, have a good correlation and can be equally trusted, but given the low test prediction RMSE of 0.12 eV, the HSE-mf1 \(\Delta\)H model should provide the best way for new prediction and screening of the bulk stability of HaPs.
Figure 4: Multi-fidelity random forest regression models trained for decomposition energy, band gap, and PV efficiency, based on a combined DFT-PBE + DFT-HSE dataset. Parity plots capture effective averaged test predictions (over 5000 models) for every data point, and RMSE values are shown for PBE and HSE.
Figure 3: Single-fidelity random forest regression models trained for decomposition energy, band gap, and PV efficiency, separately on the DFT-PBE and DFT-HSE datasets. Parity plots capture effective averaged test predictions (over 5000 models) for every data point, and RMSE values are shown separately for PBE and HSE.
### RFR Models for Experiment-Fidelity Predictions
Finally, multiple approaches are used for making predictions on the experimental data points, and to eventually achieve accurate and general experiment-fidelity predictions. An important shortcoming of the dataset had to be overcome here: while we could collect experimental data on nearly 100 points from the literature which included their chemical composition, band gap, and PV efficiency (and some other solar cell-relevant quantities [15]), there was almost never sufficient information available about the corresponding perovskite phase. An exhaustive search of every composition in the experimental literature to determine its preferred phase is beyond the scope of the current study, though it can and should be performed in the future. Furthermore, many studies do not explicitly report the likely phase of the material, though diffraction patterns and spectroscopic measurements are reported, making it difficult to definitively assign phase information to any compound, which makes a computational estimate a better approach. As such, we apply the following strategy to create a 4-dimensional phase vector (corresponding to cubic, tetragonal, orthorhombic, or hexagonal phase) for every experimental data point (a condensed sketch of this procedure follows the list):
1. The PBE-sf \(\Delta\)H is predicted for all 97 compounds considered in all 4 phases.
2. Predictions are made over an ensemble of 5000 models as described earlier, and the most stable phase (which of the four phases has the lowest predicted PBE \(\Delta\)H?) is noted each time.
3. A score between 0 and 1 is assigned to each of the 4 dimensions of the phase vector based on the relative frequency of occurrence of any phase as the most stable. For instance, for the compound FASnI\({}_{3}\), the cubic phase fraction is found to be 0.57 and the hexagonal phase fraction is 0.41.
4. With all 54 dimensions (including composition, phase, and elemental property vectors) now available for all experimental data points, single-fidelity models could be trained, and this data can be added to the DFT data along with a 3-dimensional fidelity vector to train the new MF models.
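A condensed sketch of this phase-assignment procedure is given below; `ensemble_dH` is a placeholder for a callable that returns the PBE-sf \(\Delta\)H prediction of the m-th model in the ensemble for a given composition and phase.

```python
import numpy as np

PHASES = ["cubic", "tetragonal", "orthorhombic", "hexagonal"]

def phase_vector(composition, ensemble_dH, n_models=5000):
    """Soft 4-dim phase vector from the relative frequency with which each
    phase is predicted to have the lowest decomposition energy."""
    wins = np.zeros(len(PHASES))
    for m in range(n_models):
        dH = np.array([ensemble_dH(composition, phase, model_index=m)
                       for phase in PHASES])
        wins[np.argmin(dH)] += 1.0      # most stable phase for this ensemble member
    return wins / n_models              # e.g. FASnI3 -> roughly [0.57, ..., 0.41]
```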
In the absence of any experimental data, one might wonder how the DFT-predicted E\({}_{gap}\) and SLME compare with measured values. A comparison of DFT vs experiments has been performed for a small set of compounds in past work, and the general understanding is that PBE and HSE errors are quite large for a lot of compounds [10; 11; 12]. To examine this effect further, we used the PBE-sf, HSE-sf, PBE-mf1, and HSE-mf1 models to make predictions on all 97 experimental data points, using the 54-dimensional inputs described above. The DFT-ML SF and MF predictions of E\({}_{gap}\) are plotted against actual experimental values in **Figure 5(a)** and **(b)** respectively: it can be seen that PBE-sf predictions have the lowest RMSE of 0.4 eV, with all other predictions lying in the \(\sim\) 0.6 eV RMSE range. These errors are clearly too high, and certain compounds are especially poorly predicted, with errors larger than 1.5 eV at times. Corresponding parity plots comparing PBE-sf, HSE-sf, PBE-mf1, and HSE-mf1 predictions of SLME with experimental PCE are presented in **Figure S1**, showing that there is essentially no correlation between the DFT-ML and experimental values of PV efficiency.
Next, SF models were trained for the experimental E\({}_{gap}\) and PCE, and effective test predictions for all 97 points based on an ensemble of 5000 models are presented in **Figure 6**. E\({}_{gap}\) predictions show a very low RMSE of 0.11 eV, though a caveat is
Figure 5: DFT-ML predictions compared to experimental values for (a) single-fidelity PBE and HSE models, and (b) DFT-only multi-fidelity models. Expt vs DFT RMSE values are shown separately for PBE and HSE.
the lower range of values in the experimental dataset, whereas RMSE on PCE predictions is much higher at 3.72%. Although the Expt-sf models could already be used for making experiment-fidelity predictions on hundreds of thousands of hypothetical compounds, such predictions would be limited and error-prone given the narrow chemical space of the experimental dataset compared to the DFT dataset, where explicit care was taken to involve all chemical species in roughly equal proportions across all the compounds. Thus, a DFT-Expt MF model might serve high-throughput predictions better.
PBE-mf2, HSE-mf2, and Expt-mf2 prediction performances are pictured in **Figure 7**. PBE and HSE E\({}_{gap}\) predictions show RMSE values of 0.17 eV and 0.19 eV respectively, comparable to the MF1 models. The same is true for SLME predictions where the PBE RMSE is 1.72% and the HSE RMSE is 1.29%. Expt-mf2 E\({}_{gap}\) RMSE is 0.14 eV and PCE RMSE is 3.62%. These predictions are again quite similar to the Expt-sf performances, but are vastly more useful for their applicability across new regions of the HaP chemical space. We believe that the MF2 models learn inherent relationships between Expt, PBE, and HSE values, and are thus capable of making experiment-fidelity predictions for chemistries that have been studied from PBE and/or HSE. **Table II** lists the test RMSE values for all three properties from many different RFR models, with the values in bold showing the lowest errors for any specific property at any fidelity.
Figure 6: Experiment-only single-fidelity RFR predictions for (a) band gap, and (b) PV efficiency, with effective test predictions and corresponding RMSE shown.
Figure 7: Multi-fidelity random forest regression models trained for the band gap and PV efficiency from DFT-PBE, DFT-HSE, and experiment. Parity plots capture effective test predictions averaged over 5000 models, and RMSE values are shown separately for PBE, HSE, and Expt.
### Understanding Feature Importance
**Figure S2** shows the Pearson coefficient of linear correlation [37] plotted between each dimension of the 56-dimensional vector representing every compound in the DFT-mf1 dataset (PBE + HSE data) and the three properties of interest. The primary observations here are very similar to past publications [10; 11; 12]. Increasing the K fraction will lead to an increase in \(\Delta\)H and thus make the compound less stable, while the FA fraction has the exact opposite effect. \(\Delta\)H is highly negatively correlated with features such as the ionic radius and atomic number of the A-site cation, showing that larger cations (to a point) are best for perovskite stability. While Ca, Sr, and Ba lead to a heavy increase in E\({}_{gap}\), it is very negatively correlated with the electron affinity, ionization energy, and electronegativity of the B-site cations. The SLME shows the opposite trends to what is found for E\({}_{gap}\), with an increase in the B-site electron affinity, ionization energy, and electronegativity leading to an increase in efficiency. Interestingly, we generally find low correlation between the phase/fidelity and the properties, except for the positive correlation between the cubic phase and both the \(\Delta\)H and the E\({}_{gap}\), which is likely a consequence of the dominance of cubic structures in the dataset, meaning that a majority of the unstable compounds as well as a majority of the wide band gap compounds are cubic.
To understand the contributions of various features better, we plotted the feature importance values for all descriptor dimensions as obtained from the best RFR models, in **Figure S3**. It can be seen that the main contributions to \(\Delta\)H are from the A and X species, as well as from the elemental properties that highlight the size of the A-site cations and X-site anions, similar to the observations from **Figure S2**. For E\({}_{gap}\) and PV efficiency, there is an obvious dominance from the B-site electronegativity followed by the electron affinity and ionization energy, showing that the optoelectronic properties are heavily determined by the B-site cations (and to a lesser extent by the X-site anions) and have almost no sensitivity to the A-site species. **Figure S4** shows the E\({}_{gap}\) and PV efficiency for the entire dataset of 1014 points plotted against the B-site electronegativity, showing a rough correlation, especially for the DFT data.
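Both analyses reduce to standard library calls; a minimal sketch is shown below, with toy stand-ins for the descriptor matrix and the target property.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# toy stand-ins for the real 56-dim descriptor matrix and one target property
feature_names = [f"f{i}" for i in range(56)]
X = pd.DataFrame(np.random.rand(500, 56), columns=feature_names)
y = np.random.rand(500)

# Pearson coefficient of linear correlation between each descriptor and the property
pearson = X.apply(lambda col: np.corrcoef(col, y)[0, 1])

# impurity-based feature importances from a fitted random forest
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importance = pd.Series(model.feature_importances_, index=feature_names)

print(pearson.sort_values(ascending=False).head())
print(importance.sort_values(ascending=False).head())
```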
### Prediction and Screening at Experiment-Fidelity
Next, predictions are made for the expanded set of 151,140 compounds populated across the same HaP chemical space. It should be noted that for any given compound, PBE or HSE computations would require at least 12 to 24 hours of CPU time (on approximately 128 cores) for estimating the three properties, which would correspond to over 200 years of computing time for > 150,000 compounds. Experimental investigation of all these materials would take even longer. The ML predictions, on the other hand, require only seconds for several compounds, and can be made for hundreds of thousands of compounds in mere minutes. **Figure 8**, along with **Figures S5** and **S6**, shows the HSE-mf1 \(\Delta\)H, Expt-mf2 E\({}_{gap}\), and Expt-mf2 PCE predicted for the 37,785 unique compositions in four different phases, as \(\Delta\)H vs E\({}_{gap}\) and E\({}_{gap}\) vs PV efficiency plots.
It can immediately be seen that (i) general shape and distribution of the plots is the same as the DFT-Expt dataset pictured in **Figure 2**, and (ii) there are now hundreds of more compounds that exist in the desirable ranges of \(\Delta\)H < 0.2 eV, E\({}_{gap}\) between 1 and 2 eV, and PCE > 15%. We find that 3610 compounds out of the 151,140 fulfil each of these criteria, which represents 2.4% of the entire population. 1180 of these compounds are cubic, 856 tetragonal, 889 orthorhombic, and 685 hexagonal. Further,
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**RFR Model** & \(\Delta\)**H RMSE (eV)** & \(\text{E}_{gap}\) **RMSE (eV)** & **PV Efficiency RMSE (\%)** \\ \hline PBE-sf & 0.17 & 0.23 & 2.12 \\ \hline HSE-sf & 0.20 & 0.30 & 1.78 \\ \hline Expt-sf & - & 0.11 & 3.65 \\ \hline PBE-sf vs Expt. & - & 0.40 & 5.45 \\ \hline HSE-sf vs Expt. & - & 0.60 & 5.48 \\ \hline HSE-sf vs Expt, shifted & - & 0.31 & - \\ \hline PBE-mf1 & **0.15** & 0.17 & 1.78 \\ \hline HSE-mf1 & **0.12** & 0.20 & 1.35 \\ \hline PBE-mf1 vs Expt. & - & 0.59 & 5.28 \\ \hline HSE-mf1 vs Expt. & - & 0.62 & 5.50 \\ \hline PBE-mf1 vs Expt., shifted & - & 0.34 & - \\ \hline HSE-mf1 vs Expt., shifted & - & 0.36 & - \\ \hline PBE-mf2 & - & **0.17** & **1.72** \\ \hline HSE-mf2 & - & **0.19** & **1.29** \\ \hline Expt-mf2 & - & **0.09** & **2.24** \\ \hline \end{tabular}
\end{table}
Table 2: Root mean square error (RMSE) in test set prediction for the three properties from multiple single- and multi-fidelity models.
a majority of these compounds are B-site mixed (2036 in number) while 1364 compounds are A-site mixed, and only about 200 compounds have halogen mixing. **Figures S5** and **S6** further show that the shapes of the plots look generally similar for different phases, and there are some empty regions which will likely be covered by other intermediate compositions we have not considered in this enumeration exercise. Thus, we are able to use the multi-fidelity DFT-Expt predictions to screen > 3600 compounds with multiple desired properties. A further examination of the chemical space of the screened compounds, presented in **Figure 9**, shows that FA followed by Cs and MA are the most prevalent A-site cations, Pb, Ge, and Sn are most common at the B-site, while nearly all the compounds are iodides. This is consistent with the most typical HaP compositions used in high efficiency solar cells arising from FA-MA-Cs mixing at the A-site and a predominance of Pb and I at the other sites. **Table III** shows the total number of screened compounds divided in terms of the predicted perovskite phase and type of mixing (pure, A-mixed, B-mixed, or X-mixed).
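The screening step itself reduces to a few dataframe filters, as in the sketch below; `preds` is assumed to hold one row per (composition, phase) with the HSE-mf1 \(\Delta\)H (eV p.f.u.), Expt-mf2 E\({}_{gap}\) (eV), and Expt-mf2 PCE (%) predictions under illustrative column names.

```python
import pandas as pd

def screen(preds: pd.DataFrame) -> pd.DataFrame:
    """Keep only compounds with all three properties in the desired ranges."""
    mask = (
        (preds["dH_HSE_mf1"] < 0.2)
        & preds["Eg_Expt_mf2"].between(1.0, 2.0)
        & (preds["PCE_Expt_mf2"] > 15.0)
    )
    return preds[mask]

# toy example: only the first row passes all three filters
preds = pd.DataFrame({
    "dH_HSE_mf1": [0.1, 0.5, -0.3],
    "Eg_Expt_mf2": [1.5, 1.2, 2.5],
    "PCE_Expt_mf2": [18.0, 20.0, 10.0],
})
print(screen(preds))
```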
### Inverse Design of New Compositions Using GA
**Figure 10(a)** presents an example GA run, and **Figure 10(b)** shows the predicted PV efficiency plotted against E\({}_{gap}\) for compounds obtained from nearly 3000 different GA runs. It should be noted here that the GA models utilize the HSE-mf1 prediction for \(\Delta\)H and Expt-mf2 predictions for E\({}_{gap}\) and PCE, and mixing is allowed to happen in many different fractions, often simultaneously at A, B, and X sites. Every GA run requires 2 to 5 minutes on average for the 14-dimensional composition space with pre-trained RFR models for property prediction. We run GA models for four different subsets of the chemical space (D1, D2, D3, and D4), and show results for them in **Figure 10(b)**. These datasets have been explained in terms of their constituent chemical species in **Table IV**; e.g., D2 represents Pb-free perovskites whereas D4 represents the much smaller space of (Cs-MA-FA)PbI\({}_{3}\) perovskites.
Since many different constraints are applied in the objective function, the best compounds obtained from a GA run may not be completely stoichiometrically accurate or meet every property target. As can be seen in **Figure 10(b)**, while all compounds essentially fulfill the E\({}_{gap}\) requirement (as well as the stability requirement, which is not plotted here), PCE is not always higher than 15%. We find that across all the GA runs, there are a total of 1703 chemically meaningful ABX\({}_{3}\) alloys which satisfy conditions for all three properties; 484 of these compounds are cubic, 428 are tetragonal, 431 are orthorhombic, and 360 are hexagonal phase compounds. We also find that the total number of promising compounds increases when the chemical space becomes less complex, that is, a 5-dimensional space given by Cs-MA-FA-Pb-I leads to many more compounds with the targeted properties than the 14-dimensional space of K-Rb-Cs-MA-FA-Ca-Sr-Ba-Ge-Sn-Pb-I-Br-Cl. This is both a consequence of the difficulty of optimization in a high-dimensional space and the ease of finding attractive materials in the Cs-MA-FA-Pb-I space, which are already among the best for stability and high efficiency. **Figure 10(b)** further shows that there are scores of Pb-free HaPs (D2) with high PV efficiencies, which opens up an important avenue towards eliminating Pb from solar cell perovskites. Many of the compounds come from the 14-dimensional space (D1), which means they are allowed to contain any species, while the remaining compounds are restricted to the most common species found in the experimental literature, namely Cs, MA, and FA at the A-site, Pb, Sn, and Ge at the B-site, and Br/I at the X-site.
An examination of the frequencies of occurrence in the GA-screened list of 1703 compounds is presented for the chemical spaces D1, D2, D3, and D4 in **Figure 11**, with the results somewhat more interesting than the compounds screened from pure enumeration and prediction. In D1 and D2, we find FA and MA are the most prevalent at the A-site, followed by Cs, with Rb and K clearly preferred in smaller mixing fractions. Sn, Ge, and Pb are again the most common B-site cations, and Br makes a considerable appearance at X although I remains dominant. In the total absence of Pb, Sn and Ge largely dominate the B-site. **Figure 11(a)** and **(b)** reveal that the best combination of properties could potentially be achieved using some combination of FA and MA at the A-site with a small quantity of Cs, Sn and Ge at the B-site, and I with potentially small quantities of Br or Cl at the X-site. Further, the lower-dimensional chemical spaces D3 and D4 in **Figure 11(c)** and **(d)** respectively show a preference for FA, Pb, and I, with Cs and Br being useful in smaller fractions, and Sn and Ge still appearing frequently.
Figure 10: (a) An example GA run presented in terms of the objective function plotted against the number of generations, showing the property and chemical feasibility constraints. (b) Predicted (experiment-fidelity) PV efficiency plotted against band gap for \(\sim\) 3000 compounds from several GA runs; all compositions are chemically meaningful and have decomposition energy < 0.2 eV p.f.u.
### Visualizing Composition-Property Spaces using ML Predictions
Using ML predictions made across the set of 151,140 HaP compounds in different phases, it is now possible to visualize how the phase stability and computed properties vary in specific families of compounds, such as with increasing Sn concentration in Pb-Sn mixed compounds. Here, we pick a few interesting and important chemical spaces and visualize their property variations using ternary diagrams. **Figure 12** shows the Expt-mf2 E\({}_{gap}\) and Expt-mf2 PV efficiency for the chemical spaces Cs(Ge-Sn-Pb)I\({}_{3}\), MA(Ge-Sn-Pb)I\({}_{3}\), and FA(Ge-Sn-Pb)I\({}_{3}\), plotted as ternary color maps with Ge, Sn, and Pb at the three corners of the triangle. It can be seen that in the Cs(Ge-Sn-Pb)I\({}_{3}\) series, maximum E\({}_{gap}\) > 1.5 eV is shown by Pb-dominated compositions, minimum E\({}_{gap}\) \(\sim\) 1.1 eV by Sn-dominated compositions, whereas Ge generally leads to more intermediate E\({}_{gap}\) values. MA(Ge-Sn-Pb)I\({}_{3}\) compounds show very similar E\({}_{gap}\) trends, whereas in FA(Ge-Sn-Pb)I\({}_{3}\) compounds, a dominance of Pb or Ge leads to higher E\({}_{gap}\) values in the \(\sim\) 1.8 eV range. The highest PV efficiencies are obtained by majority Pb compositions with some Sn and Ge added in all three series of compounds, leading to values > 12% for MA(Ge-Sn-Pb)I\({}_{3}\), > 15% for Cs(Ge-Sn-Pb)I\({}_{3}\), and \(\sim\) 18% for FA(Ge-Sn-Pb)I\({}_{3}\). It can also be seen that majority Ge compositions in FA(Ge-Sn-Pb)I\({}_{3}\) lead to much lower PV efficiencies around 13%, so Pb-based compositions may still be crucial to achieve high efficiencies.
**Figures S7** to **S12** further show ternary plots for the phase stability (cubic vs tetra vs ortho vs hex), the HSE-mf1 \(\Delta\)H, and the Expt-mf2 E\({}_{gap}\) and PV efficiency, across the chemical spaces Cs(Ge-Sn-Pb)I\({}_{3}\), MA(Ge-Sn-Pb)I\({}_{3}\), FA(Ge-Sn-Pb)I\({}_{3}\), Cs(Ge-Sn-Pb)Br\({}_{3}\), MA(Ge-Sn-Pb)Br\({}_{3}\), and FA(Ge-Sn-Pb)Br\({}_{3}\). Such diagrams could be trivially plotted for any binary, ternary, or even higher-dimensional chemical spaces of interest, and can be navigated to determine the likely phase and bulk stability of any composition as well as their optoelectronic properties. It can be seen that the Cs-iodides generally prefer the tetragonal phase while the Cs-bromides prefer the orthorhombic phase. Nearly all compositions are stable w.r.t. decomposition in either series, and the highest PV efficiencies \(\sim\) 13% for the bromides are achieved with Sn-dominance at the B-site. The MA-based iodides and bromides are nearly always tetragonal and show good bulk stability. Maximum efficiencies are obtained from mixed compounds with intermediate Pb fractions. Finally, the FA-based compounds are almost always stable in the hexagonal phase, except for a small area of cubic stability in the iodides. Sn-dominant FA-bromides provide a peak PV efficiency around 10%, but the real pay-off comes from Pb-dominant iodides. The entire FA HaP space is highly stable against decomposition with \(\Delta\)H values < -1 eV p.f.u.
## Conclusions
In summary, several single-fidelity and multi-fidelity random forest regression models were trained for predicting the decomposition energy, band gap, and photovoltaic efficiency of halide perovskites at both DFT and experiment fidelity, resulting in the discovery of hundreds of promising new candidates via high-throughput screening and a genetic algorithm. This work is based on an innovative approach of fusing experimental data from the literature with multiple types of DFT computed properties, and the unique representation of every material in terms of its composition and phase, well-known elemental or molecular properties of species at the cation and anion sites, and one-hot encoded fidelity information. All DFT data, optimized RFR models, and predictions made on hundreds of thousands of hypothetical compounds are openly available to the community. GA enabled the efficient design of compositions beyond the limits of brute-force enumeration in certain mixing fractions, and provides an avenue
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Chemical Space** & **Description** & **Screened Compounds** & **Cubic** & **Tetragonal** & **Orthorhombic** & **Hexagonal** \\ \hline D1 & (K-Rb-Cs-MA-FA)(Ca-Sr-Ba-Ge-Sn-Pb)(I-Br-Cl)\({}_{3}\) & 226 & 80 & 51 & 58 & 37 \\ D2 & Pb-free, (K-Rb-Cs-MA-FA)(Ca-Sr-Ba-Ge-Sn)(I-Br-Cl)\({}_{3}\) & 297 & 90 & 70 & 71 & 66 \\ D3 & (Cs-MA-FA)(Ge-Sn-Pb)(I-Br)\({}_{3}\) & 381 & 114 & 107 & 102 & 58 \\ D4 & (Cs-MA-FA)PbI\({}_{3}\) & 799 & 200 & 200 & 200 & 199 \\ \hline \end{tabular}
\end{table}
Table 4: The number of GA-screened compounds obtained for different chemical sub-spaces (D1, D2, D3, and D4), divided in terms of the perovskite phase.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Phase** & **All Screened Compounds** & **Pure** & **A-mixed** & **B-mixed** & **X-mixed** \\ \hline All & 3610 & 9 & 1364 & 2036 & 201 \\ Cubic & 1180 & 3 & 473 & 626 & 78 \\ Tetra & 856 & 2 & 290 & 510 & 54 \\ Ortho & 889 & 2 & 361 & 470 & 56 \\ Hex & 685 & 2 & 240 & 430 & 13 \\ \hline \end{tabular}
\end{table}
Table 3: Number of ML-screened compounds with all properties in desirable ranges, namely \(\Delta\)H (HSE-mf1) < 0.2 eV, E\({}_{gap}\) (Expt-mf2) between 1 and 2 eV, and PCE (Expt-mf2) > 15%. The screened compounds are divided based on their predicted phase and type of mixing.
for continuous discovery and improvement with the infusion of more data and an expanding chemical space. Composition-property distributions are visualized in terms of ternary diagrams generated using ML predictions, revealing the most promising regions of B-site mixed (with Pb, Sn, and Ge) Cs/MA/FA iodides and bromides, and such exercises could easily be repeated for any chemical sub-spaces of interest. Limitations of the current work arise in terms of the experimental data only covering a small range of band gaps, each compound being represented as only a single data point although it may possess dozens of polymorphs with varying properties, known issues with the DFT functionals, and the lack of consideration of other essential properties such as defect behavior and electron/hole mobilities. Each of these factors will be explored in future work; some of our ongoing work includes additional benchmarking of functionals for HaPs and the use of crystal graph-based neural networks for property prediction based on entire crystal structures as input. We anticipate that our datasets will serve many data mining and ML efforts, both within our group and from the community in general, and significant improvements will further be made in terms of data and ML approaches.
## Conflicts of interest
There are no conflicts to declare.
## Data availability
Tabulated data is included as spreadsheets in the supporting documents. All DFT data and ML models, including python scripts in Jupyter notebooks, can be found on Github: [https://github.com/mannodairun/perovs_mfml_ga](https://github.com/mannodairun/perovs_mfml_ga).
## Acknowledgements
This work was performed at Purdue University, under startup account F.10023800.05.002 from the Materials Engineering department. This research used resources of the National Energy Research Scientific Computing Center (NERSC), the Laboratory Computing Resource Center (LCRC) at Argonne National Laboratory, and the Rosen Center for Advanced Computing (RCAC)
Figure 11: Frequencies of occurrence of the 14 chemical species within the list of 1703 promising compounds obtained from GA, for subsets of the chemical space: (a) D1, (b) D2, (c) D3, and (d) D4.
clusters at Purdue.
|
2301.05070 | Wildfire Smoke Detection with Computer Vision | Wildfires are becoming more frequent and their effects more devastating every
day. Climate change has directly and indirectly affected the occurrence of
these, while social phenomena have increased the vulnerability of people.
Consequently, and given the inevitable occurrence of these, it is important to
have early warning systems that allow a timely and effective response.
Artificial intelligence, machine learning and Computer Vision offer an
effective and achievable alternative for opportune detection of wildfires and
thus reduce the risk of disasters. YOLOv7 offers a simple, fast, and efficient
algorithm for training object detection models which can be used in early
detection of smoke columns in the initial stage wildfires. The developed model
showed promising results, achieving a score of 0.74 in the F1 curve when the
confidence level is 0.298; that is, a higher score was obtained at lower confidence levels, i.e., when conditions are favorable for false positives. The metrics demonstrate the resilience and effectiveness of the model in
detecting smoke columns. | Eldan R. Daniel | 2023-01-12T15:12:56Z | http://arxiv.org/abs/2301.05070v1 | # Wildfire Smoke Detection by Computer Vision
###### Abstract
Wildfires are becoming more frequent and their effects more devastating every day. Climate change has directly and indirectly affected the occurrence of these, while social phenomena have increased the vulnerability of people. Consequently, and given the inevitable occurrence of these, it is important to have early warning systems that allow a timely and effective response.
Artificial intelligence, machine learning and Computer Vision offer an effective and achievable alternative for opportune detection of wildfires and thus reduce the risk of disasters. YOLOv7 offers a simple, fast, and efficient algorithm for training object detection models which can be used in the early detection of smoke columns in initial-stage wildfires.
The developed model showed promising results, achieving a score of 0.74 in the F1 curve when the confidence level is 0.298; that is, a higher score was obtained at lower confidence levels, i.e., when conditions are favorable for false positives. The metrics demonstrate the resilience and effectiveness of the model in detecting smoke columns.
Early Warning, Object Detection, Artificial Intelligence, Computer Vision, YOLO.
## I Introduction
A wildfire is a fire that, whatever its origin and with danger or damage to people, property, or the environment, spreads uncontrolled in rural areas, through woody, bushy or herbaceous vegetation, alive or dead. In other words, it is an unjustified and uncontrolled fire in which the fuels are plants and which, in its propagation, can destroy everything in its path ("Wildfires in Chile - CONAF").
In the last 10 years there have been 67,567 Wildfires, affecting an area of 1,246,922 hectares of grassland, scrubland, forest plantations, native forest, agricultural land, among others.
Climate change has increased the risk of Wildfires both directly and indirectly (Borunda, A.). Although the causality of fires is 99.7% human, the conditions for the generation of these fires are higher than they would be without climate change.
Given this scenario, it is significant to have early warning systems that, in the event of an inevitable occurrence of a forest fire, make it possible to activate and deploy the necessary resources for its rapid control and extinction, thus preserving the lives of people, their property and the environment.
## II Forest Fire Detection Systems
Wildfires are incidents with a high destructive potential and a sudden growth, even more so when weather conditions allow it. Therefore, it is very important to apply a rapid firefighting strategy that prevents fires from growing in extent and severity.
The early detection of fires is essential to initiate procedures that culminate in firefighting. Among them is the notification of the start of the fire to the Regional Coordination Center of CONAF (CENCOR) who, in turn, with the respective technical background, analyze the situation and generate the dispatch of relevant land and/or air resources.
### Mobile Terrestrial Detection
The task consists of moving surveillance people to a given area, either by vehicle or on foot. This practice is quite common in Chile in forestry companies, where it is used to supervise work activities.
### Fixed Terrestrial Detection
This is the most widely used form of detection in Chile. It consists of having a person observing from metal or wooden towers that are between 15 and 30 meters high, or from lower booths known as detection posts.
### Airborne Detection
This detection method uses aircraft, usually single-engine high-wing aircraft, to detect fires from the air. The pilot is accompanied by an observer, who oversees doing the observation. This technique makes possible to observe a large amount of area in an abbreviated time and provides accurate and detailed information about the detected fire and the area over which it is flown. However, its operating cost is high.
### Detection with television systems
This method uses television cameras to transmit their signal via microwaves to screens at a command post, such as in a vehicle in the field or at a coordination center. There, specialists analyze the situation based on what they see on the screen.
### Satellite Systems
In some parts of the world, due to the lack of forest fire protection organizations or detection systems, the only way to know what is happening is to use low orbit satellite images, such as those provided by the Aqua and Terra satellites.
## III Object Detection by Computer Vision
Computer vision, also known as artificial vision or technical vision, is a scientific discipline that involves techniques for acquiring, processing, analyzing and understanding images of the real world to produce numerical or symbolic information that can be processed by computers (J. Morris, 1995). Just as humans use our eyes and brains to make sense of the world around us, computer vision seeks to create the same effect by allowing a computer to perceive and understand an image or sequence of images and act accordingly given the situation. This understanding is achieved through fields as diverse as geometry, statistics, physics and other disciplines. Data collection is achieved in a variety of ways, such as image sequences viewed from multiple cameras or multidimensional data from medical scanners.
Real-time object detection is a particularly important topic in computer vision, as it is often a necessary component in computer vision systems. Some of its current applications are object tracking, public safety and active surveillance, autonomous vehicle driving, robotics, medical image analysis, among others.
Computing devices that run real-time object detection processes usually use CPUs or GPUs for their tasks, however, nowadays the computational capacity has improved exponentially with the Neural Processing Units (NPU) developed by different manufacturers.
These devices focus on accelerating operations through several types of algorithms, one of the most widely used being the multilayer perceptron (MLP), an artificial neural network formed by multiple layers in such a way that it can solve problems that are not linearly separable.
## IV Yolo
The object detection algorithm used in the present work is YOLO (You only look once), developed by Wang, Chien-Yao et al., whose latest version was recently released in July 2022.
YOLO is an algorithm that uses neural networks to provide real-time object detection. It is known for its speed and accuracy and is currently used in a variety of applications such as traffic signal detection, people counting, detection of available spaces in private parking lots, and remote animal surveillance, among others.
### _Operation of YOLO_
The YOLO algorithm works by using three techniques:
* Intersection over Union (IOU).
* Regression of the bounding box.
* Residual blocks.
### _Residual blocks_
The analyzed image, which can be a frame of a video sequence, is divided into a grid of SxS cells.
The following image shows an example of such a grid.
Each cell will detect the objects that appear inside it. For example, if an object appears inside a given cell, the cell will perform processing on its own and separately from the others.
### _Regression of the bounding box_
A bounding box is an outline that highlights an object within an image or cell. Each box has a height, a width, a class (what we are looking for: car, dog, traffic light, fire smoke) and a centroid. The following image shows an example of a bounding box.
YOLO uses a single bounding box regression to predict the items listed above.
### _Intersection over Union (IOU)_
Intersection over union is a measure used in object detection that describes how blocks overlap in an image, where a block is understood as the set of cells where the detected object is located.
YOLO uses IOU to provide an output block surrounding the detected object. Each grid cell is responsible for predicting the bounding boxes and their confidence score.
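As an illustration, the IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners can be computed as follows; this is a generic sketch, not code taken from the YOLOv7 implementation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.14
```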
Fig. 1: Example of residual block, source: guidetomlandai.com
Fig. 2: Example of bounding box, source: appsolomatascience.com
### _Output result_
YOLO combines the three techniques for accurate detection. First, the SxS grid over the analyzed image allows each section to be evaluated individually, detecting the bounding boxes and their respective confidence scores.
For each bounding box, the class of the detected object is set and finally, using IOU, the frame is adjusted to ensure that the detection frame covers the entire real object in the output image.
## V Creation of the Model
To detect objects, the YOLO algorithm requires a model trained with the class or classes of the target elements. For this it is important to establish specifically where, how and when the model will operate to detect fires, for which the following criteria are established:
### _Location of the observer_
The images analyzed by the model and used for fire detection were obtained from distant sources, with a wide view of valley areas, forests and/or mountain ranges, from an elevated position and with unpredictable atmospheric conditions.
Such conditions of observations are those that we could identify in an observation tower or fire watch. It should be considered that the resolution of these can be varied and not uniform, depending on the capture device used (webcam, HD camera).
### _Type of wildfire to be detected_
As the objective of the system is to detect fires in their initial stage, we will discard any images with fire and concentrate on smoke plumes and their development, ideally taken from cameras in different scenarios.
### _Redundancy of training images_
To generate greater variability and resilience to the model, modifications have been made to part of the image dataset to increase the amount of material for training.
In this regard, the following characteristics were applied to the dataset:
1. _Mirror effect_: The images were duplicated with a horizontal rotation. This allows to have training material for different wind conditions.
2. _Exposure_: Duplicate images were generated with changes in exposure between -15% and +15%. This allows improving the visibility of the smoke plume in images that may have been taken with different levels of ambient humidity, which at greater distances distorts the focus and sharpness of the image.
Also, the redundancy modifications and the labeling of the images in the dataset were made in the Roboflow app, a computer vision web application that provides many functions for uploading, labeling, augmenting, exporting, training and testing models.
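Equivalent redundancy operations could also be scripted outside Roboflow; a minimal sketch with Pillow is shown below, where the file names are placeholders and the brightness factors only approximate the exposure adjustment applied in Roboflow.

```python
from PIL import Image, ImageEnhance, ImageOps

def augment(path, out_prefix):
    """Create mirrored and exposure-shifted copies of one training image.

    Note: for the mirrored copy, the bounding-box labels must be flipped
    horizontally as well (handled automatically by Roboflow in this work).
    """
    img = Image.open(path)

    # mirror effect: horizontal flip, emulating the opposite wind direction
    ImageOps.mirror(img).save(f"{out_prefix}_mirror.jpg")

    # exposure: roughly -15% and +15% brightness changes
    for factor, tag in [(0.85, "darker"), (1.15, "brighter")]:
        ImageEnhance.Brightness(img).enhance(factor).save(f"{out_prefix}_{tag}.jpg")

augment("smoke_0001.jpg", "smoke_0001")
```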
## VI Sources of Information
To increase the effectiveness of the model, it is important to train it with images that are as similar as possible to the scenarios where it will be implemented. In view of the above, different sources of information were selected to obtain images with a wide range of
Fig. 4: Diagram of the YOLO algorithm, source: guidetomlandai.com
Fig. 3: Example of Intersection over Union, source: miro.medium.com
geographic environments to generate a resilient model that can be implemented in different locations.
### _High Performance Wireless Research and Education Network (HPWREN)_
The High-Performance Wireless Research and Education Network is a University of California partnership project led by the San Diego Supercomputing Center and the Institute for Geophysics and Planetary Physics at Scripps Institution of Oceanography.
HPWREN works as a collaborative cyber infrastructure connected to the Internet. The project has a vast network of cameras in the State of California, USA, which have been used for wildfire observation.
In particular, the HPWREN images were obtained from the _AI for Mankind_ project, founded by Wei Shung Chung.
### _Social Networks_
Wildfires are high-impact emergencies and are considered by society as public interest events. Therefore, a search for images of Wildfires was made on the Twitter platform using the _hashtag_ "Wildfire" in Spanish, English, Turkish, Greek, Russian and Portuguese. This allowed access to a variety of images with different types of geography and relatively recent, allowing the generation of an updated model training.
### _Images created with Artificial Intelligence_
In an innovative way, the well-known artificial intelligences _Dall-E_ from OpenAI and _Stable Diffusion_ from StabilityAI were used to generate images using the following input phrase: _"Wildfire smoke in early stage as seen from an observation tower or high and distant point"_.
### _Self-made computer Images_
To complement the dataset with smoke columns originating in different places, images were generated by superimposing layers with the Photoshop application. For this purpose, base images of cameras and observation towers without smoke were selected and new images were artificially created with different types of smoke originating from different points.
## VII Model Training
YOLOv7 is a deep learning-based object detection algorithm that uses a convolutional neural network to detect and classify objects in images and videos.
To train the algorithm, a set of labeled images containing the objects to be detected is needed. The images must be divided into two datasets: a training set and a test set. The training set is used to train the neural network and the test set is used to evaluate the performance of the model once trained.
The training process consists of showing the neural network a set of labeled images and making it learn to detect and classify the objects in them. To do this, a technique called _backpropagation_ is used, which involves adjusting the weights of the neural network based on the errors made in classifying the objects in the images. This process is repeated many times, using different training images each time, until the model reaches an acceptable level of accuracy.
Once trained, the model can be used to detect and classify objects in new images and videos. In general, the larger the training set and the better labeled the images are, the better the model performs in object detection tasks.
The model training dataset contains 1,520 baseline images of smoke plumes in different conditions and viewed from different perspectives, incorporating varied geographic settings to improve model resilience.
Applying the redundancy characteristics, the dataset was strengthened to 2,712 images, distributed as follows:
### _Training Set_
Set of 2,405 images to train the neural network of the algorithm to classify the smoke in them. All the images in the dataset contain a bounding box with the exact location of the object to be detected, in this case, the smoke plumes.
### _Validation Set_
Set of 228 images on which the model is evaluated after training. This set is of relevance for the evaluation metric, as it is the first indicator of model performance during the training.
### _Test Set_
Set of 79 images that are unknown to the neural network and were used neither for training nor for validation. It is used to assess the performance of the model against new scenarios. Its metrics are considered the most important because it establishes a performance indicator against the desired scenarios.
## VIII Training Parameters
Model training requires computational power. The higher the computational capacity, the faster the training process, which in turn allows a deeper learning process and better performance results.
Fig. 6: Forest fire smoke image created with Dall-E
The model training process was performed using a pre-trained base model provided by the YOLOv7 project, on the Google Colab platform, using an Nvidia A100-SXM4 GPU with 40 GB of memory.
### _Batch Size_
_Batch size_ is a parameter used in the training process of a machine learning model. It refers to the number of training samples to be processed before updating the model weights.
For example, if the batch size is 32, it means that the model will process 32 training samples at a time and then adjust the model weights accordingly. It will then process another batch of 32 samples and adjust the weights again, and so on until all training samples are processed.
Batch size is a parameter that can significantly affect model performance during training. Too small a batch size can make training slower as more weight updates are performed, but it can also improve model accuracy. Conversely, too large a batch size can make training faster, but can also reduce model accuracy. Therefore, it is important to choose an appropriate batch size based on the needs of the model and the data set.
The final model is the result of four training phases with different batch sizes.
### _EPOCH or training iterations_
An epoch is a complete iteration through the entire training set during the training process of a machine learning model. For example, if the training set has 1,000 samples and the batch size is 32, it will take about 32 iterations to complete one epoch, since 1,000 / 32 is approximately 31.25 (31 full batches plus one partial batch).
During each epoch, the model processes the training samples in batches and adjusts their weights accordingly. At the end of each epoch, the model's performance is evaluated using a test data set and used to assess the model's progress.
The number of epochs used during model training is another parameter that can significantly affect model performance. Too small a number of epochs can result in an under-fitted model, while too large a number can result in an over-fitted model. Therefore, it is important to choose an appropriate number of epochs based on the needs of the model and the data set, the available resources and time.
The smoke detection model was trained in four sessions of 300 epochs and a final session of 500 epochs, with a total duration of 32.15 hours.
## IX Evaluation Metrics
### _Mean average precision (mAP)_
mAP@.5 is a performance measure commonly used in object detection tasks that refers to the average detection accuracy mAP (_mean Average Precision_) for different values of the _Intersection over Union (IoU)_ threshold.
The mAP detection accuracy refers to the average accuracy of an object detection model in correctly detecting and classifying objects in a set of test images. It is calculated by comparing the model predictions with the truth labels of the objects in the test images and measuring the average accuracy across all images.
The IoU threshold refers to the ratio of overlap between the model prediction and the truth label of an object in an image. For example, if the IoU threshold is 0.5, it means that the model prediction is considered correct only if the overlap between the prediction and the truth label is 50% or more.
### _F1 Curve_
The F1 curve is a tool commonly used in classification tasks to evaluate the performance of a model. It is used to evaluate the accuracy and recall of a model at different classification thresholds.
Accuracy (precision) refers to the proportion of correct model predictions out of the total predictions made. Recall refers to the proportion of correct model predictions over the total number of positive cases in the data set.
The F1 curve is calculated using the formula:
\[F1=2*\frac{(Accuracy*Recall)}{(Accuracy+Recall)}\]
This formula combines accuracy and recall in a single measure and is useful when it is important to balance both metrics.
To draw the F1 curve, the classification threshold is varied, and the accuracy and recall are calculated for each threshold. The accuracy and recall values are then plotted on a graph and connected by a line. The result is a curve showing how accuracy and recall vary as the classification threshold changes. The F1 curve is useful for evaluating model performance at different thresholds and for choosing the optimal threshold for the model.
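The F1 curve can be traced over confidence thresholds as in the sketch below; the list of detections and the way true positives are flagged are simplified placeholders for the IoU-based matching used in the actual YOLO evaluation, and precision plays the role of the "accuracy" defined above.

```python
def f1_curve(detections, n_ground_truth, thresholds):
    """detections: list of (confidence, is_true_positive) for all predicted boxes.
    Returns (threshold, F1) pairs as the confidence threshold is varied."""
    curve = []
    for t in thresholds:
        kept = [tp for conf, tp in detections if conf >= t]
        tp = sum(kept)
        fp = len(kept) - tp
        fn = n_ground_truth - tp
        precision = tp / (tp + fp) if (tp + fp) else 0.0   # "accuracy" above
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        curve.append((t, f1))
    return curve

# toy example: 6 detections against 5 ground-truth smoke columns
dets = [(0.9, True), (0.8, True), (0.6, False), (0.5, True), (0.3, True), (0.2, False)]
for t, f1 in f1_curve(dets, n_ground_truth=5, thresholds=[0.25, 0.5, 0.75]):
    print(f"confidence >= {t}: F1 = {f1:.2f}")
```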
## XI Evaluation of the Model
### _Model N\({}^{\text{o}}\) 1_
The first trained model shows a mean average precision (mAP) of 0.379, that is, 37.9% correct on the test set.
Regarding the F1 curve, the model obtained a score of 0.44 when the confidence value is set at 0.215.
Fig. 7: PR Curve Model No. 1 - Own elaboration
The above results are considered deficient, since their best performance does not exceed 50% effectiveness, and occurs when the confidence value of the model is low, therefore, it has a high tendency to generate false positives.
The confidence level is always a relevant factor in model training, because the lower the confidence level is maintained with good results, it is a sign of resilient learning and resistance to false positives.
### _Model No. 2_
The second trained model obtained a mean average precision (mAP) of 0.684, that is, 68.4% correct on the test set. The result implies a significant improvement over the first model and is mainly because the weights of the previously trained neural network were used for the new model, carrying over the previous learning.
Regarding the F1 curve, the model obtained a score of 0.69 when the confidence value is set at 0.313.
This result is much better than the previous one, in that it obtains 69% accuracy even when the confidence value is low, that is when the model is more susceptible to false positives.
### _Model No. 3_
For the training of Model No. 3, a cleaning of the dataset was performed, eliminating images that were considered ambiguous to the human eye or were far from the objective of what the model is required to learn to detect. This change improved the training time; however, there were no significant changes in the results, which kept the same values as model N\({}^{\circ}\) 2.
### _Model No. 4_
Model No. 4 was trained with different parameters than those used previously. For the previous cases, batch sizes of 64 and 32 with 300 epochs were used.
For this case a batch size of 16 was used and 500 epochs were performed. This increased the training time considerably and while it improved the results, it was not a significant increase in the first instance.
In relation to the mAP, a score of 0.698 was obtained, only slightly higher than the previous result.
Fig. 11: Curve F1 Model No. 4 - Own elaboration
Fig. 8: Curve F1 Model No. 1 - Own elaboration
Fig. 10: PR Curve Model No. 4 - Own elaboration
Fig. 9: PR Curve Model No. 2 - Own elaboration
However, in relation to the F1 curve, the model showed significantly better results, reaching a score of 0.74 at a confidence level of 0.298: a higher score than before, obtained at a lower confidence level, where conditions favor false positives. This demonstrates the resilience and effectiveness of the model in detecting smoke plumes.
On the other hand, this model proved to make predictions with greater confidence than the previous ones, mainly because it considers the learning from the previous models.
## XII System Installation and Implementation
To perform inference, the trained model must first be loaded into an inference framework, such as TensorFlow or PyTorch. This requires providing the path to the model file and loading it into memory.
Then, if the input image differs from the parameters expected by the model it is necessary to preprocess the input image. This may include resizing the image to the dimension expected by the model, normalizing the pixel values, and converting the image to a tensor.
Once the input image is ready, you can run the model using the model inference method and provide the input image as input. This will return the model predictions in the form of a tensor.
Model predictions are often in tensor form and can be difficult to interpret directly. Therefore, it is necessary to process the predictions to obtain useful information, such as the coordinates of the bounding boxes of the detected objects and the corresponding object classes.
Once the predictions have been processed, it is possible to visualize them by overlaying the object labels on the input image or by displaying the predictions in tabular form. This can help to evaluate the performance of the model and to understand how it works.
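As a minimal sketch of these steps, the snippet below assumes the trained detector was exported as a TorchScript file and expects 640x640 RGB input normalized to [0, 1]; the file names and input size are placeholders and should be adapted to the actual export format of the model.

```
import torch
import numpy as np
from PIL import Image

model = torch.jit.load("model.torchscript.pt")   # 1. load the trained model into memory
model.eval()

image = Image.open("camera_frame.jpg").convert("RGB")
image = image.resize((640, 640))                 # 2. resize to the dimension expected by the model
x = torch.from_numpy(np.asarray(image)).float() / 255.0
x = x.permute(2, 0, 1).unsqueeze(0)              #    normalize and convert to an NCHW tensor

with torch.no_grad():                            # 3. run inference
    predictions = model(x)

# 4. `predictions` is a raw tensor; extracting bounding boxes and classes
#    (confidence filtering, non-maximum suppression, rescaling coordinates)
#    depends on the detector head and is done in a post-processing step.
print(type(predictions))
```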
A tensor is a mathematical object used in the field of artificial intelligence and object detection to represent and manipulate multidimensional data. Tensors are fundamental elements in data processing and are widely used in machine learning and data analysis.
A tensor can be viewed as a generalization of a matrix, which is a two-dimensional data structure used to represent and manipulate data sets. Like a matrix, a tensor can have more than one dimension, and each dimension is known as an axis. Tensors can be used to represent data in many different forms, such as images, video, audio and text.
In the area of artificial intelligence and object detection, tensors are used to process and analyze large amounts of input data, such as images or videos, and to produce output results, such as class labels or predictions. Tensors are also used in natural language processing and machine translation, among other applications.
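For instance, a single camera frame handled by the detector is a three-axis tensor, and a batch of frames adds a fourth axis:

```
import torch

frame = torch.zeros(3, 640, 640)   # one RGB image: channels x height x width
batch = frame.unsqueeze(0)         # a batch of one frame: batch x channels x height x width
print(batch.shape)                 # torch.Size([1, 3, 640, 640])
```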
To use the model in video cameras, either in real time or by obtaining images from them, the capture device must be connected to a processing device. This can be a computer or a Raspberry Pi.
It is important to point out that the model does not need to run on the same device that captures the images from the camera, since the architecture designed to meet the objectives of the model is built using the client-server model, where the clients correspond to one or several sources of information while the server corresponds to the node where the model is executed and the inferences are made.
Fig. 12: Test lot Model No. 1 - Own elaboration
Fig. 13: Test lot Model No. 4 - Own elaboration
Fig. 14: Model No. 4 applied to smoke image with 91% success rate
For the test model, a home computer with an Nvidia RTX 3060 graphics processing card with 16GB of memory was used, using Windows 11 operating system with the Anaconda data analytics environment installed.
The PyTorch library was installed on the computer and, through a front end built with Flask in Python, a web site was generated that scrapes freely accessible images from Chilean airfield cameras in order to run tests.
## XIII Model Improvement
Although the trained model presents an acceptable result, the latest tests indicated that to improve it, it is necessary to make a series of changes, which are detailed below:
1. Use a larger and better labeled training set: often, the larger the training set and the better labeled the images are, the better the performance of the model.
2. Adjust model hyperparameters: there are several hyperparameters that can affect model performance, such as the batch size and the number of epochs used during training. Adjusting these hyperparameters can improve model performance.
3. Use a more complex neural network architecture: using a neural network with more layers or with more units in each layer can improve model performance, but it can also increase training time and the need for more training data.
4. Use regularization techniques: Regularization is a technique used to avoid overfitting the model and improve its generalization. Some common regularization techniques include L1 and L2 regularization, _dropout_ and _early stopping_.
5. Use advanced optimization techniques: There are several advanced optimization techniques that can improve model performance, such as stochastic gradient descent (SGD), Adam and _Adagrad_. Using these techniques can improve training speed and accuracy; a brief sketch covering items 2, 4 and 5 follows this list.
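The following sketch illustrates items 2, 4 and 5 for a generic PyTorch training loop: a smaller batch size, L2 regularization via weight decay, the Adam optimizer and a simple early-stopping rule. The names `model`, `train_loader` and `validate` stand in for the project's own training code and are not part of any specific library.

```
import torch

# assumptions: `model` is the detector being trained, `train_loader` is a
# DataLoader built with batch_size=16, and `validate(model)` returns a
# validation loss; all three are placeholders for the real training code.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(500):                       # more epochs than the earlier runs
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = model(images, targets)          # detector returns its training loss
        loss.backward()
        optimizer.step()

    val_loss = validate(model)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:             # early stopping
            break
```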
## XIV Conclusions
Undoubtedly, the phenomenon of Wildfires will increase: on the one hand due to climate change and, on the other, due to social phenomena such as migration, the displacement of families from the city to the countryside, and intentionality, among others, all of which will significantly increase vulnerability to this type of anthropogenic event, both in terms of occurrence and severity.
Given this scenario, it is important that authorities, civil society, and people in general become aware of the seriousness of this situation and adopt preventive behaviors that contribute to mitigating the effects of fires through self-care practices such as preventive forestry.
On the other hand, in the face of the inevitable occurrence of forest emergencies, having early warning systems in place will help reduce response times and thus ensure that forest emergencies can be controlled by first responders in less time, thereby reducing their effects on people, their property and the environment.
The present model offers an alternative that complements early warning systems, both at the state and private levels, through science and technology, using the tools that Artificial Intelligence offers and that can be implemented in a simple way and with minimal knowledge of computer science and programming.
Although the current model has an acceptable performance, improving it requires a larger and better labeled training set that allows the neural network to learn more and better scenarios of forest fire occurrence in the initial stage. Likewise, it is necessary to carry over the learning of the previous models into the training process, adjusting the parameters so that efficiency is maximized in each learning cycle.
The resources required by the system are fully attainable for organizations at a low cost relative to the benefit. It does not require a large number of people to operate, since it works mainly in an automated way, and the investment in infrastructure (cameras, internet, towers, masts, etc.) is quickly amortized when compared against the maintenance cost of conventional systems (observation towers with their respective personnel).
Although this technology is not intended to replace the role of human beings in the detection of Wildfires, it does seek to position itself as an important support element in the efforts to prevent and mitigate the adverse effects that may be generated.
## XV Acknowledgments
The present work would not have been possible without the support of Dwyer, B. and Nelson, J. from Roboflow Computer Vision, who trusted in this project and sponsored it, granting platform features that made it possible to build a larger, better-quality dataset with preprocessing tasks and data augmentation applied. Thank you very much.
|
2306.07818 | A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement
Learning | Offline constrained reinforcement learning (RL) aims to learn a policy that
maximizes the expected cumulative reward subject to constraints on expected
cumulative cost using an existing dataset. In this paper, we propose
Primal-Dual-Critic Algorithm (PDCA), a novel algorithm for offline constrained
RL with general function approximation. PDCA runs a primal-dual algorithm on
the Lagrangian function estimated by critics. The primal player employs a
no-regret policy optimization oracle to maximize the Lagrangian estimate and
the dual player acts greedily to minimize the Lagrangian estimate. We show that
PDCA can successfully find a near saddle point of the Lagrangian, which is
nearly optimal for the constrained RL problem. Unlike previous work that
requires concentrability and a strong Bellman completeness assumption, PDCA
only requires concentrability and realizability assumptions for
sample-efficient learning. | Kihyuk Hong, Yuhang Li, Ambuj Tewari | 2023-06-13T14:50:03Z | http://arxiv.org/abs/2306.07818v2 | # A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning
###### Abstract
Offline constrained reinforcement learning (RL) aims to learn a policy that maximizes the expected cumulative reward subject to constraints on expected value of cost functions using an existing dataset. In this paper, we propose Primal-Dual-Critic Algorithm (PDCA), a novel algorithm for offline constrained RL with general function approximation. PDCA runs a primal-dual algorithm on the Lagrangian function estimated by critics. The primal player employs a no-regret policy optimization oracle to maximize the Lagrangian estimate given any choices of the critics and the dual player. The dual player employs a no-regret online linear optimization oracle to minimize the Lagrangian estimate given any choices of the critics and the primal player. We show that PDCA can successfully find a near saddle point of the Lagrangian, which is nearly optimal for the constrained RL problem. Unlike previous work that requires concentrability and strong Bellman completeness assumptions, PDCA only requires concentrability and value function/marginalized importance weight realizability assumptions.
## 1 Introduction
Offline constrained reinforcement learning (RL) aims to learn a decision making policy that performs well while satisfying safety constraints given a dataset of trajectories collected from historical experiments. It enjoys the benefits of offline RL [25]: not requiring interaction with the environment enables real-world applications where collecting interaction data is expensive (e.g., robotics [21, 26]) or dangerous (e.g., healthcare [31]). It also enjoys the benefits of constrained RL [1]: being able to specify constraints to the behavior of the agent enables real-world applications with safety concerns (e.g., smart grid [32], robotics [16]).
Offline constrained RL with function approximation (e.g., deep RL) is of particular interest because function approximation can encode inductive biases to allow sample-efficient learning in large state spaces. Offline _constrained_ RL with function approximation naturally requires the kinds of assumptions that are required for sample-efficient offline _unconstrained_ RL with function approximation:
**Representational assumption.** For sample-efficient learning in large state spaces, RL with value function approximation requires the learner's function class to have the representation power to model the value functions of policies. For example, all-policy value function realizability assumption requires the value functions of candidate policies to be contained in the value function class. Another example of a representational assumption is Bellman completeness assumption, which requires the function class to be closed under the Bellman operator.
**Data coverage assumption.** A major challenge in offline RL is distribution shift. Distribution shift refers to the mismatch of the state-action distributions induced by candidate policies from the distribution in the offline dataset, which makes the assessment of the candidate policies difficult. To address distribution shift, offline RL requires the offline dataset to have good coverage over the state-action distributions induced by a certain set of policies. The most commonly used notion of data coverage is concentrability [28, 29], which is the norm of the ratio of state-action distribution induced by a policy to the state-action distribution induced by the behavior policy that generated the offline
2302.07337 | Graph Attention Multi-Agent Fleet Autonomy for Advanced Air Mobility | Autonomous mobility is emerging as a new disruptive mode of urban
transportation for moving cargo and passengers. However, designing scalable
autonomous fleet coordination schemes to accommodate fast-growing mobility
systems is challenging primarily due to the increasing heterogeneity of the
fleets, time-varying demand patterns, service area expansions, and
communication limitations. We introduce the concept of partially observable
advanced air mobility games to coordinate a fleet of aerial vehicles by
accounting for the heterogeneity of the interacting agents and the
self-interested nature inherent to commercial mobility fleets. To model the
complex interactions among the agents and the observation uncertainty in the
mobility networks, we propose a novel heterogeneous graph attention
encoder-decoder (HetGAT Enc-Dec) neural network-based stochastic policy. We
train the policy by leveraging deep multi-agent reinforcement learning,
allowing decentralized decision-making for the agents using their local
observations. Through extensive experimentation, we show that the learned
policy generalizes to various fleet compositions, demand patterns, and
observation topologies. Further, fleets operating under the HetGAT Enc-Dec
policy outperform other state-of-the-art graph neural network policies by
achieving the highest fleet reward and fulfillment ratios in on-demand mobility
networks. | Malintha Fernando, Ransalu Senanayake, Heeyoul Choi, Martin Swany | 2023-02-14T20:48:00Z | http://arxiv.org/abs/2302.07337v3 | # Graph Attention Multi-Agent Fleet Autonomy for Advanced Air Mobility
###### Abstract
Autonomous mobility is emerging as a new mode of urban transportation for moving cargo and passengers. However, such fleet coordination schemes face significant challenges in scaling to accommodate fast-growing fleet sizes that vary in their operational range, capacity, and communication capabilities. We introduce the concept of partially observable advanced air mobility games to coordinate a fleet of aerial vehicle agents accounting for their heterogeneity and self-interest inherent to commercial mobility fleets. We propose a novel heterogeneous graph attention-based encoder-decoder (HetGAT Enc-Dec) neural network to construct a generalizable stochastic policy stemming from the inter- and intra-agent relations within the mobility system. We train our policy by leveraging deep multi-agent reinforcement learning, allowing decentralized decision-making for the agents using their local observations. Through extensive experimentation, we show that the fleets operating under the HetGAT Enc-Dec policy outperform other state-of-the-art graph neural network-based policies by achieving the highest fleet reward and fulfillment ratios in an on-demand mobility network.
## I Introduction
The latest advancements in aerial robotics and electrification are paving the way for a new disruptive direction of transportation: _Advanced Air Mobility (AAM)_. Compared to commercial airliners, AAM focuses on transporting cargo and passengers using electric-powered Unmanned Aerial Vehicles (UAV) that operate at low altitudes over short distances [1]. With an appealing node-to-node navigation structure that overreaches the already exhausted and poorly maintained path-based ground transportation networks, AAM is currently emerging as a sustainable and efficient alternative to solve the _last-mile delivery problem_ in retail and logistics sectors, e.g., Amazon, Zipline [2].
Thanks to their vast operational space, superior maneuverability, relative affordability, efficiency, and autonomous collision-avoiding capabilities, the AAM fleets face lesser risks than ground-based counterparts in scaling up, with the potential to spawn up an array of novel commercial applications. Further, AAM gives rise to numerous inherent research opportunities; decision-making under _elastic_ fleet sizes, stochastic communication, maximizing returns in high _owner-to-vehicle_ affinity, and heterogeneous fleets, to name a few. Due to the projected rapid growth in the UAV sector, and the central Air Traffic Control (ATC) systems' inability to keep up with the demand, there is a tremendous appeal for decentralized traffic control to reduce the reliance on centralized coordination [1, 3]. By coupling these high-level desiderata with the appeal toward scalable autonomy in large-scale commercial mobility fleets, we propose a decentralized, multi-agent game-theoretic framework for coordinating AAM fleets under _partial-observations_. The game-theoretic paradigm allows us to incorporate the _self-interest_ of the agents to favor one's revenue in high-affinity commercial fleets.
We build on a novel _Heterogeneous Graph Attention encoder-decoder_ (HetGAT Enc-Dec) policy architecture by subsuming the inter- and intra-agent relations within the mobility network. We show that the proposed approach is highly generalizable to varying fleets, environments, demand patterns, and observational topologies, thus rendering it suitable for coordinating AAM fleets. Additionally, the partial-observation characteristic of our work eliminates the requirement to aggregate the global system state, which is often infeasible for fleets operating over large geographic regions with communication limitations [4]. We additionally introduce an _intrinsic_ fleet rebalancing mask based on a vehicle's local observations that improves the policy's performance under varying demand patterns.
Fig. 1: The System overview. The top-left images show an on-demand AAM network with multiple service-providing depots, UAV agents, and clients. Each vehicle node interacts with its neighbors in the observable range. The red, green, and black lines represent different interactions between pairs of nodes. We use a Heterogeneous Interaction Graph (HIG) constructed using the agents' local observations to train a generalizable stochastic policy built on an encoder-decoder heterogeneous graph attention neural network architecture. The yellow, green, and blue colors indicate the different meta-type nodes in the mobility network, i.e., depots, vehicles, and clients. This framework leverages centralized training and decentralized execution of multi-agent reinforcement learning. Drones flying digital arts: DALL-E, OpenAI ©.
We train the stochastic policy for the agents using centralized training-decentralized execution (CTDE) multi-agent reinforcement learning (MARL) to improve its ability to relate to different vehicle and depot types in a heterogeneous mobility network (Fig. 1). The main contributions of this work can be identified as,
* formulating AAM as a _partially observable stochastic game_ (POSG) to incorporate the agent interactions within a heterogeneous mobility network with _hierarchical timescales_ (Section IV),
* proposing a novel, generalizable encoder-decoder HetGAT architecture for multi-agent mobility and _on-demand_ fleet rebalancing (Section V),
* evaluating the performances of the AAM game against different policy architectures in a mixed-mobility environment under varying demand patterns.
To the best of the authors' knowledge, HetGAT-based MARL has not yet been studied in the on-demand mobility context.
## II Related Work
### _Autonomous Mobility Fleet Coordination_
Current autonomous mobility fleet coordination spans multiple research areas: autonomous mobility on-demand (AMoD) [5], multi-robot dynamic task allocation [6], drone-assisted delivery [7] and robot pickup and delivery systems [8]. Many AMoD solutions consider a centralized policy that coordinates the vehicles for catering individual [9, 10] or shared rides [11, 12], a notion that is distant to AAM due to safety and dedicated unique infrastructural constraints. Gammelli et al. [10] presented a Graph Neural Network (GNN)-based centralized policy for AMoD where the authors show the learned policy generalizes to different service areas and supports area expansion. Autonomous mobility fleet redistribution under congestion has also been studied with Q-learning [9] and optimization-based [13] approaches. In particular, Gueriau et al. [9] propose simultaneous pickup, delivery and rebalancing achieved through RL agents that comprise elastic fleets. However, the agents' action space has been largely simplified to enforce agents to select the closest ride requests consistently. Additionally, model predictive control has also been leveraged to solve AMoD, with a composite, weighted utility function to maximize the fleets' and the riders' rewards [14], and single-occupancy vehicles with explicit system delay modelling [5]. In contrast to most of the AMoD literature, our work differs in its ability to optimize the fulfillment rate by maximizing the heterogeneous agents' rewards.
Multi-agent pickup and delivery [15] has recently received the spotlight as a viable direction for coordinating warehouse and mobility fleets. In [16, 17] authors propose a hybrid approach for the multi-agent pickup and delivery problem which simultaneously addresses the path planning. The latter work further combines the drone package delivery with public transit systems for conserving energy. Choi et al. [18] proposes a drone multi-package delivery with focus on battery and payload constraints, albeit overlooking the on-demand prospective. In [19] authors propose a drone swarm redistribution approach using a centralized policy.
The dynamic task allocation (DTA) introduces temporal constraints to otherwise spatially-constrained conventional task allocation algorithms. In [6, 20] authors propose multi-robot dynamic task allocation approaches, with the former considering a drone package delivery task under temporal uncertainty. However, the oversight of robots' movements makes them better suited for in-place task completion, contrary to mobility applications.
### _Graph Attention Neural Networks_
Graph attention neural networks (GAT) [21] lies at the conjunction of graph neural networks [22] - which learns shareable convolution operators for graph structured data, and the _attention_ mechanism in neural networks. Briefly, the attention mechanism computes a compatibility score to weigh the input features according to their prominence for the learning task. In [23], authors achieved state-of-the-art results in machine translation, a sequential decision-making task by only leveraging attention mechanisms. In a similar line of work, GAT has shown success in computing sequential routing plans [24] proving their robustness in combinatorial optimization. The Heterogeneous Graph Attention (HetGAT) further extends the expressiveness of GAT to integrate more complicated graph structures, where the nodes may contain varying size feature spaces. Following the success of GAT, HetGAT is also emerging as a powerful approach in parallel research directions; multi-robot task allocation [20], sequential traffic speed prediction [25]. Further, in [26] authors discuss a HetGAT-based multi-agent approach for training an electric vehicle charging pricing policy.
## III Background
### _Partially Observable Stochastic Games_
Stochastic games extend the Markov decision processes (MDP) to the multiple agents setting [27], where the agents' interactions induce stochasticity from the simultaneous action selection. The partially observable stochastic games (POSG) consider that the agents only receive the local observation about the environment instead of the full-environment state. Primarily it differs from the seemingly similar decentralized-Partially Observable Markov Decision Processes (Dec-POMDP) by allowing the agents to act in their self-interest; whereas the agents in the latter share identical reward functions [28].
**Definition 1**.: We define a POSG as an eight-tuple \(\langle N\), \(S\), \(\mathbb{T}\), \(\{R_{i\in{1,\ldots,N}}\}\), \(\{A_{i\in{1,\ldots,N}}\}\), \(\gamma\), \(\{O_{i\in{1,\ldots,N}}\}\), \(\mathbb{O}\rangle\). Here \(N\) denotes the number of agents in the game, \(S\) is the full state space, and \(A_{i}\) is the action space of agent \(i\). For a given action profile \(\mathbf{A}=a_{1}\times\cdots\times a_{N}\), \(\forall a_{i}\in A_{i}\), the state \(S\) changes according to the state transition function \(\mathbb{T}\) such that \(\mathbb{T}:S\times\mathbf{A}\to S^{\prime}\). \(R_{i}\) is the reward function for agent \(i\), \(\gamma\in[0,1]\) is a
discount factor, \(O_{i}\) is the local observation available to agent \(i\), and \(\mathbb{O}\) is an observation function.
The observation function maps the full state space to the agents' local observations given the agents' action profile. More specifically, \(\mathbb{O}:S\times\mathbf{A}\rightarrow\Delta O\), where \(O\) is the joint observation space of the agents. The objective of a POSG is to find an optimal policy \(\pi_{i}\) which maximizes agent \(i\)'s _expected cumulative discounted reward_, \(J(\pi_{i})=\mathbb{E}_{a_{i}\sim\pi_{i}}[\sum_{t=0}^{T}\gamma^{t}R_{i}(s^{t},a_{i}^{t})]\), using its local observations. In this work we consider _general-sum_ rewards that allow us to synthesize reward functions that are arbitrarily related to the game, especially to preserve the mixed competitive-cooperative nature of the game.
### _Policy Gradient Deep Reinforcement Learning_
Deep reinforcement learning (DRL) focuses on solving stochastic games and MDPs by finding an optimal policy \(\pi_{\theta_{i}}(a_{i}^{t}|o_{i}^{t})\), characterized by a set of parameters \(\theta_{i}\), using the agents' experiences acquired during a training process. Policy gradient (PG) methods have shown success in DRL in generalizing to larger state and action spaces compared to Q-function approximation [29, 30]. PG methods essentially sample actions from a stochastic policy instead, and optimize its parameters in the direction of the vanilla policy gradient \(\nabla_{\theta_{i}}J(\pi_{\theta_{i}})\), where
\[\nabla_{\theta_{i}}J\big{(}\pi_{\theta_{i}}\big{)}=\hat{\mathbb{E}}_{a_{i}\sim\pi_{\theta_{i}}}[\nabla_{\theta_{i}}\log\pi_{\theta_{i}}(a_{i}|o_{i})Q^{\pi_{i}}(S,a_{i})]. \tag{1}\]
Here, \(\hat{\mathbb{E}}_{a_{i}\sim\pi_{\theta_{i}}}(\cdot)\) and \(Q^{\pi_{i}}(S,a_{i})\) denote the empirical expectation and the action-value function for agent \(i\), respectively. In this work we use a class of PG methods known as proximal policy optimization (PPO) that introduces a _clipped surrogate objective_ and an advantage estimator in place of \(Q^{\pi_{i}}(S,a_{i})\) in Eq. 1 [31]. This has been shown to stabilize the learning process in many DRL tasks, including multi-agent settings [32]. We use actor-critic DRL for tuning the policy parameters, where the _critic_ network approximates the _value_ function, providing estimates to the _actor_, which outputs the stochastic policy.
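For reference, the clipped surrogate objective introduced by PPO [31] takes the standard form
\[L^{\mathrm{CLIP}}(\theta_{i})=\hat{\mathbb{E}}_{t}\Big{[}\min\big{(}r_{t}(\theta_{i})\hat{A}_{t},\ \mathrm{clip}(r_{t}(\theta_{i}),1-\epsilon,1+\epsilon)\hat{A}_{t}\big{)}\Big{]},\qquad r_{t}(\theta_{i})=\frac{\pi_{\theta_{i}}(a_{i}^{t}|o_{i}^{t})}{\pi_{\theta_{i}^{\mathrm{old}}}(a_{i}^{t}|o_{i}^{t})},\]
where \(\hat{A}_{t}\) is the estimated advantage and \(\epsilon\) is the clipping parameter; clipping the probability ratio \(r_{t}(\theta_{i})\) keeps each policy update close to the data-collecting policy.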
## IV Partially Observable AAM Game
### _Hierarchical Timescales_
The agents in a mobility network distinguish themselves from most of the multi-agent continuous control tasks as they require a) hierarchical action execution and b) asynchronous action selection. For instance, two vehicles might not complete their journeys at the same time, thus leading one robot to select an action while the other is completing a delivery trajectory. This stems from the hierarchical timescales inherent to most robotic systems, including UAVs [33] where the execution of a high-level action relies on low-level control and trajectory planning. Thus, we advocate that the multi-agent-based robot decision-making and training frameworks must respect such constraints in collecting observations and action execution for harnessing the maximum effect from the training algorithms and for computational efficiency. In this work, we follow a hierarchical timescale to accommodate the vehicles' movements and the decision-making realistically by introducing the notion of _active timesteps_.
The mobility network in this work evolves in small, discrete timesteps \(\Delta t\). Consider an indicator function for a vehicle \(v\)'s availability at time \(t\), such that \(\mathds{1}_{avail}(v^{t})=1\) when \(v\) is available to undertake payloads and \(\mathds{1}_{avail}(v^{t})=0\) when it is committed to delivering a payload and thus unavailable. We only consider a timestep \(t\) as an _active timestep_ if it results in a change of the vehicle's availability, that is, a timestep \(t\) is active iff \(\mathds{1}_{avail}(v^{t})\neq\mathds{1}_{avail}(v^{t-\Delta t})\). Throughout this work, we consider that the vehicles' action selections and observations occur only at active timesteps, leaving the local trajectory execution and UAV control to take place in between.
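As a small illustration of this rule, the snippet below flags the active timesteps in a toy availability trace for a single vehicle; the trace values are invented for the example.

```
def is_active(avail_now: int, avail_prev: int) -> bool:
    # a timestep is active only when the availability flag changes
    return avail_now != avail_prev

trace = [1, 1, 0, 0, 0, 1]        # 1 = available, 0 = committed to a delivery
active = [t for t in range(1, len(trace)) if is_active(trace[t], trace[t - 1])]
print(active)                     # [2, 5]: payload pickup commitment and drop-off
```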
### _Partially Observable Stochastic AAM_
Let \(\mathcal{D}\), \(\mathcal{C}\) and \(\mathcal{V}\) denote a set of stationary _depots_, _clients_ and a fleet of heterogeneous UAVs. The depots may resemble warehouses or designated pickup locations for some payloads that need to deliver to the client locations. Let \(x_{t}^{t},x_{d},x_{c}\in\mathbb{R}^{2}\) be the locations of a vehicle \(v\in\mathcal{V}\), depot \(d\in\mathcal{D}\) and a client \(c\in\mathcal{C}\). Let \(p^{lc}\in\mathcal{P}_{l}\) denote a payload request which requires that some payload needs to be delivered to \(c\in\mathcal{C}\) from depot \(d\in\mathcal{D}\), and \(\mathcal{P}_{l}^{t}\) is the state of the payload queue at \(d\) at time \(t\). At any given time, the system may contain an arbitrary number of payloads in the queues that must deliver to the clients. We enforce the partial observability constraint on the UAV by limiting its communication to its neighboring UAVs and the depots in the environment to observe the current states (Fig. 2(a)). Following the partial observability, we define the time-varying neighborhood of a vehicle as \(\mathcal{N}_{v_{i}}^{t}\) comprising its observable set of vehicles \(\mathcal{V}_{i}^{t}\) and the depots \(\mathcal{D}_{i}^{t}\) at the active timestep \(t\), thus \(\mathcal{N}_{v_{i}}^{t}\in\mathcal{D}\cup\mathcal{V}\). Complementing the GNN literature, we define the observations of any vehicle \(i\) in this work as a tuple \(O_{i}=\langle\mathcal{G}_{i}^{t},\mathbf{h}_{i}{}^{t}\rangle\), where \(\mathcal{G}_{i}^{t}\) is a time-varying heterogeneous interaction graph (HIG) construed by the neighborhood \(\mathcal{N}_{v_{i}}^{t}\). Specifically, the nodes of \(\mathcal{G}_{i}^{t}\) are the elements of \(\mathcal{N}_{v_{i}}^{t}\), and the edges define the interactions among them. Further, \(\mathbf{h}_{i}^{t}\) is the features associated with the vehicle and the depot nodes in the time-varying neighborhood.
First, a vehicle \(v_{i}\in\mathcal{V}\) where \(\mathds{1}_{avail}(v_{i}^{t})=1\) chooses a depot \(d_{l}\in\mathcal{D}\) given its local observations (Fig. 2(b)), and communicates the selection to \(d_{l}\). Let \(\mathrm{Cap}(v_{i})\) be the maximum capacity of vehicle \(i\). We categorize the payloads by size, such that a vehicle with capacity \(\mathrm{Cap}(v_{i})\) can only fulfill payload requests of size \(\mathrm{Cap}(p)\), where \(\mathrm{Cap}(p)\leq\mathrm{Cap}(v_{i})\). The depot assigns the agent a payload \(p^{lc}\) from the payload queue \(\mathcal{P}_{l}^{t}\) using an assignment function by taking the robot's maximum capacity into account, such that \(\Psi:\mathrm{Cap}(v_{i})\times\mathcal{P}_{l}^{t}\to p^{lc}\) for \(p^{lc}\in\mathcal{P}_{l}^{t}\), where \(c\in\mathcal{C}\). Then the depot removes the payload request from the queue \(\mathcal{P}_{l}^{t+\Delta t}=\mathcal{P}_{l}^{t}\setminus p^{lc}\) (Fig. 2(c)).
Upon the payload assignment, \(v\) switches itself as unavailable \(\mathds{1}(v_{i}^{t+\Delta t})=0\), visits the chosen depot \(d_{l}\) to pickup the payload, and travels to the client location \(c\in\mathcal{C}\) to drop off the payload. After dropping the payload at the client the vehicle
marks itself as \(\mathds{1}(v_{i}^{t+\tau+\Delta t})=1\) (Fig. 2(d)). Let \(\tau_{1}\) and \(\tau_{2}\) denote the travel times it takes for \(i\) to reach the origin depot from its initial location and to reach the client location from the origin depot, respectively, with \(\tau=\tau_{1}+\tau_{2}\). As \(v_{i}\) completes a journey at time \(t+\tau\), it is awarded a unique _net_ reward computed from the payload request and the robot's initial state. In case the vehicle chooses a depot that does not carry any suitable payloads, we terminate its trajectory at \(d_{l}\), assign a negative net reward, and set \(\mathds{1}_{avail}(v_{i}^{t+\tau_{1}+\Delta t})=1\).
We formally define each vehicle \(v_{i}\)'s objective as,
\[\text{Maximize}\quad\mathbb{E}_{a_{i}\sim\pi_{i}}\Big{[}\sum_{t=0}^{T}\mathds{1}_{avail}(v_{i}^{t})\gamma^{t}R_{i}(s^{t},a_{i}^{t})\Big{]}, \tag{2a}\] \[\text{Subject to}\quad a_{i}^{t}\sim\pi_{i}(A_{i}|s_{i}^{t}),\ A_{i}=\mathcal{D},\] (2b) \[O_{i}^{t}=\langle\mathcal{G}_{i}^{t},\mathbf{h}_{i}^{t}\rangle,\] (2c) \[O_{i}^{t+\tau}=\langle\mathcal{G}_{i}^{t+\tau},\mathbf{h}_{i}^{t+\tau}\rangle,\] (2d) \[p^{lc}\leftarrow\Psi(v_{i},\mathcal{P}_{l}^{t}),\] (2e) \[\text{Cap}(p^{lc})\leq\text{Cap}(v_{i}),\] (2f) \[r_{i}\gets R_{i}:x_{v_{i}}^{t}\times h_{p^{lc}},\] (2g) \[\mathds{1}_{avail}(v_{i}^{t})=\mathds{1}_{avail}(v_{i}^{t+\tau_{1}+\tau_{2}})=1, \tag{2h}\]
where \(t+\tau\leq T\), \(T\) is the planning horizon, and \(h_{p^{lc}}\) denotes the features of the assigned payload. In this work, we seek a stochastic policy generalizable for all the vehicle types in the fleet. Additionally, the policy must scale to a varying number of depots or vehicles in the fleet to accommodate the dynamic addition or removal of entities from the system, making it applicable to dynamically changing fleet sizes and environments resembling real-world mobility applications.
## V Graph Attention MARL For Solving AAM
We start by constructing the HIG subsuming the interactions among different type entities (meta-types) in the mobility network. Each edge in the HIG represents a specific relation between two meta-type nodes and belongs to a set of semantic relations we consider in this work. From a GNN perspective, we are interested in learning _asymmetric_ relational operators that project the features of each interacting node to a high dimensional space considering their pairwise neighbors' features for a richer representation. In this work, we use an encoder to compute such representations for the meta-type nodes. Those representations are further processed through a decoder considering more low-level interactions for the decision-making task; i.e.; interactions among node-wise feature representations with a value function output node. Consequently, we introduce an additional low-level interaction graph for this purpose, namely the heterogeneous decoder graph (HDG). We compute the probabilities associated with choosing each depot in the stochastic policy using the HDG outputs.
### _The Heterogeneous Mobility Network_
We first introduce three main _meta-types_ present in the mobility network and their distinguishing feature spaces.
#### V-A1 Depots
The mobility network consists of \(L\) depots \(\mathcal{D}=\{d_{1},\ldots,d_{L}\}\) which populate themselves with the payload requests coming from clients at arbitrary time intervals. Following the mobility literature [5, 10], we define the arrival of payloads as Poisson point processes associated with the depots, parameterized by their expected arrival rates \(\lambda_{l}\), \(\forall d_{l}\in\mathcal{D}\) per unit interval. Additionally, let \(\bar{\alpha}_{l}\in[\alpha_{min},\alpha_{max}]\) denote the expected size of a payload requested at depot \(d_{l}\). Thus, we represent the feature space of a depot \(h_{d}\) using the fixed feature vector \(h_{d_{l}}^{t}=[\mathrm{Location}(d_{l}),\lambda_{l},\bar{\alpha}_{l}]\in \mathbb{R}^{4}\), where \(\mathrm{Location}(d_{l})\in\mathbb{R}^{2}\) is the location of the depot in the environment. The features \(\bar{\alpha}_{l}\), \(\lambda_{l}\) can be considered as the vehicle agents' prior knowledge on the depots in the mobility network, resembling that of human taxi drivers, which helps during the decision-making process even when they are not fully observable.
#### V-A2 Payloads
A payload characterizes a single deliverable that a vehicle has to undertake at any given depot. Essentially, a depot \(d_{l}\) may contain multiple payloads that must be delivered to the clients in a queue at any given time. We denote the collection of payloads currently available at a depot \(d_{l}\) as \(\mathcal{P}_{l}^{t}=\{p_{i}^{lc}|i=1,\ldots,p\_max\}\), where \(c\in\mathcal{C}\) is any
Fig. 2: Different stages of payload fulfillment by a single UAV agent. **(a)** The local observation space of agent \(i\) at time \(t\). The color images and solid black lines show agents’ observable neighbors, and their communication links. The blue, magenta and green color bars denote each type of payload at the depots. **(b)** Agent \(i\) selects a depot using its policy \(\pi\) using the observations and communicates its selection. **(c)** The depot assigns a payload to the agent from its currently active payload requests set, \(\mathcal{P}_{l}\). Note the amount of green color payloads is reducing. **(d)** Agent fulfills the payload request by traveling to the chosen depot and next to the assigned client \(c\). Here \(\tau=\tau_{1}+\tau_{2}\) denotes the total travel time.
destination client and \(p\_max\in\mathbb{N}^{+}\) is the maximum number of payload requests handled by a depot. We maintain \(p\_max\) a constant throughout this work. Therefore considering all the depots, \(\mathcal{P}^{t}=\cup_{l=1..L}\mathcal{P}^{t}_{l}\) and \(0\leq|\mathcal{P}^{t}|\leq L.p\_max\). As the vehicles deliver the payloads to the clients upon assignments, the payloads are removed from the payload queue. The new payload requests coming from the clients are inserted into the corresponding payload queue \(\mathcal{P}_{l}\) in the order of their arrival. As mentioned earlier, the payloads may be added to the queue with an expected rate \(\lambda_{l}\). Each payload request \(p^{lc}_{i}\in\mathcal{P}^{t}_{l}\) has a precomputed payoff \(\mathrm{Payoff}(p^{lc}_{i})\). We represent the feature space of a payload as a \(4\times 1\) vector by concatenating its specified payoff, client destination and the required vehicle capacity. Thus, \(h_{p}=[\mathrm{Payoff}(p^{lc}),x_{c},\mathrm{Cap}(p^{lc}_{i})]\in\mathbb{R}^{4}\). The payload assignment function of a depot \(\Psi(v_{i},\mathcal{P}^{t}_{l})\) simply returns the next suitable payload for the vehicle \(i\) from the payload queue. This allows us to minimize the time a payload may spend in the waiting queue depending on the vehicles' availability.
#### Iii-B3 Vehicles
Let \(\dot{x}^{t}_{v_{i}}\) be the next stop of a vehicle such that \(\dot{x}^{t}_{v_{i}}\in\mathcal{D}\cup\mathcal{C}\setminus x^{t}_{v_{i}}\), where \(x^{t}_{v_{i}}\) is the current location of the robot at the active timestep which consequentially is a client or a depot location. We define the feature space of a vehicle \(v\) as the vector \(h^{t}_{v}=[x^{t}_{v_{i}},\dot{x}^{t}_{v_{i}},\mathrm{Cap}(v_{i})]\).
### _Time-Varying Heterogeneous Interaction Graph_
We introduce a set of five relations that subsumes the interactions among the meta-type objects: \(\Phi=\{\mathrm{has,\ visits,\ depends,\ assigned\_to,\ communicates}\}\). Further, we define a vehicle \(v_{i}\)'s observable neighborhood \(\mathcal{N}^{t}_{v_{i}}=\{\mathcal{V}^{t}_{i},\mathcal{D}^{t}_{i}\}\) as the set of its closest vehicles: \(\{v_{j}|\forall j\)\(\mathrm{Distance}(v_{i},v_{j})\leq\mathrm{Distance}(v_{i},v_{k\_v})\}\), and the depots \(\mathcal{D}^{t}_{i}=\{d_{l}|\forall\mathrm{Distance}(v_{i},d_{l})\leq \mathrm{Distance}(v_{i},d_{k\_d})\}\). Here \(v_{k\_v}\), \(d_{k\_d}\) denotes the \(k\)th closest vehicle and the depot respectively. Considering the payloads in each \(d_{l}\in\bar{\mathcal{D}}^{t}_{i}\), we define the set of all observable payloads as \(\mathcal{P}^{t}_{i}=\{\mathcal{P}^{t}_{l}|\forall d_{l}\in\mathcal{D}^{t}_{i}\}\). By using these definitions, we construct the HIG of a vehicle \(v_{i}\), \(\mathcal{G}^{t}_{i}\) for timestep \(t\) following the steps listed in Algorithm 1. Fig. 3 depicts the interactions among each meta-type in the HIG.
The communicates edge represents the interaction between any two vehicles in the \(\mathcal{V}^{t}_{i}\) allowing the vehicles to incorporate each others features into the decisions. Through lines 8-15 in Algorithm 1, we connect each vehicle node to the depots that contain matching payloads for the vehicles in the neighborhood (including \(i\)) using visits type edges. The intuition behind connecting each observable vehicle narrows down to a simple notion: each robot in the neighborhood may observe what \(i\) observes. Although the neighboring vehicles may not share the same observations exactly due to the localized partial observability, this gives \(i\) the best estimation to its neighbors vantage points enabling it to consider them in the competitive action-selection. Similarly, we connect payload type objects to their associated depots through the relation has. These incoming edges allow aggregating the features from other meta-type objects resulting in richer depot node representations in deeper layers in the graph neural network passing down until the final depot selection stage. Note that any meta-type that does not have an incoming edge is not passed through the convolution layers in graph neural networks. Thus, we add self-edge connection \(\mathrm{depends}\), to project the payloads' current feature space to the required representation that aligns with that of the other meta-types.
```
1  Inputs: \(\mathcal{N}^{t}_{v_{i}}=\{\mathcal{V}^{t}_{i},\mathcal{D}^{t}_{i}\}\), \(\mathcal{P}^{t}_{i}\), \(\mathcal{D}\)
2  Output: \(\mathcal{G}^{t}_{i}\)
3  for \(v_{i}\in\mathcal{V}^{t}_{i}\) do
4      for \(d_{l}\in\mathcal{D}\) do
5          Add Edge (\(v_{i}\), visits, \(d_{l}\))
6      end for
7      for \(v_{j}\in\mathcal{V}^{t}_{i}\) do
8          Add Edge (\(v_{i}\), communicates, \(v_{j}\))
9      end for
10 end for
11 for \(d_{l}\in\mathcal{D}^{t}_{i}\) do
12     for \(p^{lc}_{i}\in\mathcal{P}^{t}_{l}\) do
13         Add Edge (\(p^{lc}_{i}\), has, \(d_{l}\))
14         for \(v_{i}\in\mathcal{V}^{t}_{i}\) do
15             if \(\mathrm{Cap}(p^{lc}_{i})\leq\mathrm{Cap}(v_{i})\) then
16                 Add Edge (\(p^{lc}_{i}\), assigned_to, \(v_{i}\))
17             end if
18         end for
19         Add Edge (\(p^{lc}_{i}\), depends, \(p^{lc}_{i}\))
20     end for
21 end for
22 Create graph \(\mathcal{G}^{t}_{i}\) with edges.
```
**Algorithm 1** Constructing the HIG: \(\mathcal{G}^{t}_{i}\)
### _Representation Learning with HetGAT_
As mentioned before, in GNN we seek a high-dimensional embedding for the node features in an input graph. Similarly, a HetGAT layer intakes the initial features \(h_{i}\) of some node \(i\) in the HIG \(\mathcal{G}\) and projects them to a desired space \(h^{\prime}_{i}\) by applying node-wise _message passing_, _aggregation_ and _attention_ operations. Since we are operating on the HIG and a feature set observed by a vehicle at a given timestep, we drop the timestep \(t\) and agent indices for brevity. In the message passing, each node propagates its feature vector to the neighboring nodes \(\mathcal{N}_{i}\) following the directionality ascribed in the relation, preserving the asymmetry. The features are then multiplied with relation-specific weight matrices to project them into the required high-dimensional space. As the weight matrices are relation-specific, they are generalizable to different input graph sizes, in contrast to fully-connected networks that depend on the input size. Note that \(\mathcal{N}_{i}\) is the first-order neighborhood of some node \(i\) (including itself) in \(\mathcal{G}\), which is different from the observational neighborhood of a vehicle \(\mathcal{N}_{v}\).
Fig. 3: The meta-graph representing the abstract interactions among vehicle, depot and payload meta-type objects. The vehicle and payload types have self-edges that connect the objects of these types to themselves.
Let \(\mathrm{Type}(i,j)\) denote the type of edge between \(i,j\) where \(\mathrm{Type}(i,j)=\phi\in\Phi\), and \(j\in\mathcal{N}_{i}\). To allow projecting feature spaces of different sizes into \(h^{\prime}_{i}\) the weight matrices are shared in an edge specific manner. For example, \(W_{\phi}\) is a projection weight matrix shared among the nodes participating in relation \(\phi\). For any \(\mathrm{Type}(i,j)=\phi\), where \(j\in\mathcal{N}_{i}\), we define \(W_{\phi}\)'s dimensions as \(|h^{\prime}_{i}|\times|h_{j}|\). The node-wise message passing in a single HetGAT layer can be summarized as,
\[\bar{h}_{i}^{\phi}=\sigma\Big{[}\sum_{\begin{subarray}{c}j\in\mathcal{N}_{i} \\ \mathrm{Type}(i,j)=\phi\end{subarray}}\beta_{ij}W_{\phi}h_{j}\Big{]}, \tag{3}\]
where \(\beta_{ij}\) is a node-wise attention coefficient, and \(\sigma\) is a non-linear activation function. A node \(i\) may have incoming messages over different edges; i.e., a depot type node receives messages over \(\mathrm{has}\), and \(\mathrm{visits}\) type edges. In such cases we aggregate each feature message using a rotational invariant operation \(\mathrm{Agg}\). Thus, we denote the outgoing feature space \(h^{\prime}_{i}\) as
\[h^{\prime}_{i}=\mathrm{Agg}\Big{(}\bar{h}_{i}^{\phi_{1}}\ldots\bar{h}_{i}^{ \phi_{n}}\Big{)}, \tag{4}\]
where \(n\) is the distinct incoming edge types for node \(i\). In this work we use Leaky ReLU activation for \(\sigma\), and mean aggregation for \(\mathrm{Agg}\). The node-wise attention weights \(\beta_{ij}\) emphasizes the importance of the neighbor \(j\)'s features to \(i\) for decision-making. Briefly, HetGAT learns an attention _coefficient_\(e_{ij}\) via a fully-connected layer \(\mathrm{fc}\) parameterized by an edge-specific weight matrix, and LeakyReLU activation, \(\mathrm{fc}:\mathbb{R}^{2|h^{\prime}_{i}|}{\rightarrow}\mathbb{R}\). Thus, for a given relational edge type \(\phi\)
\[e_{ij}=\mathrm{fc}\big{(}W_{\phi}h_{i},W_{\phi}h_{j}\big{)}. \tag{5}\]
Finally, the attention coefficients are normalized over the neighborhood \(\mathcal{N}_{i}\) using the softmax function as
\[\beta_{ij}=\frac{\exp(e_{ij})}{\sum_{k\in\mathcal{N}_{i}}\exp(e_{ik})}. \tag{6}\]
We stack multiple HetGAT layers to learn higher-order node representations for the nodes in the input HIG. We argue that through convolution and attention, HetGAT can highlight the most prominent features for choosing depots that can maximize its expected reward, using Eq. 2 when coupled with policy gradient reinforcement learning.
With the added ability to digest heterogeneous nodes, we believe that HetGAT-based approaches are vastly capable of learning to solve many heterogeneous robot fleet coordination tasks.
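A minimal sketch of one such layer is given below, assuming DGL's HeteroGraphConv and GATConv modules on a toy HIG; the feature dimensions, the toy graph and the type-specific input projections are illustrative only, and edge directions follow the convention that the destination node of a relation aggregates the incoming messages (e.g., depots aggregate over has and visits).

```
import torch
import torch.nn as nn
import dgl
import dgl.nn as dglnn

# toy HIG: 2 vehicles, 2 depots, 1 payload
graph = dgl.heterograph({
    ("vehicle", "visits", "depot"): (torch.tensor([0, 1]), torch.tensor([0, 1])),
    ("vehicle", "communicates", "vehicle"): (torch.tensor([0, 1]), torch.tensor([1, 0])),
    ("payload", "has", "depot"): (torch.tensor([0]), torch.tensor([0])),
    ("payload", "assigned_to", "vehicle"): (torch.tensor([0]), torch.tensor([0])),
    ("payload", "depends", "payload"): (torch.tensor([0]), torch.tensor([0])),
})

hidden, heads = 16, 4
# type-specific input projections (a simplified stand-in for the relation-specific W_phi)
proj = nn.ModuleDict({"vehicle": nn.Linear(5, hidden),
                      "depot": nn.Linear(4, hidden),
                      "payload": nn.Linear(4, hidden)})
# one attention module per relation; per-relation outputs are mean-aggregated (Eq. 4)
layer = dglnn.HeteroGraphConv(
    {rel: dglnn.GATConv(hidden, hidden, num_heads=heads, allow_zero_in_degree=True)
     for rel in graph.etypes},
    aggregate="mean")

feats = {"vehicle": torch.randn(2, 5), "depot": torch.randn(2, 4), "payload": torch.randn(1, 4)}
feats = {ntype: proj[ntype](h) for ntype, h in feats.items()}
out = layer(graph, feats)                        # attention-weighted message passing
out = {k: v.flatten(1) for k, v in out.items()}  # concatenate the attention heads
print({k: tuple(v.shape) for k, v in out.items()})
```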
### _Graph Attention Policy Architecture_
We construct a _generalizable_ stochastic policy that allows the vehicle agents to make decisions in choosing depots underpinned by a heterogeneous graph attention network. We consider two criteria to comprise the generalizability of a policy in mobility: a) deployability on mobility networks that contain different numbers of vehicles and depots to one that it is trained on, and b) its ability to make decisions that can maximize rewards when deployed on robots with different properties, i.e., capacity. The former facilitates transferring the learned policy to different cities or expanding the service area with minimal reconfiguration.
Primarily, a HetGAT's generalizability attributes to its sharing of graph convolution and attention operators, which allows extending the cardinality of the input HIGs to account for a number of scenarios; i.e., a) time-varying observability and b) addition or removal of new vehicles to cater dynamic demand patterns. We introduce a novel encoder-decoder HetGAT architecture for learning the stochastic policy.
#### Iii-D1 Encoder
The encoder intakes the HIG \(\mathcal{G}_{i}^{t}\) we constructed previously and the features associated with its nodes. The graph is then passed through 2 multi-head attention (MHA) and a single head attention (SHA) output layers. As presented in [21], a MHA layer computes \(k_{\beta}\) independent attention weights and concatenates the aggregated features in the outgoing feature space \(h^{\prime}_{i}\) resulting an output dimensionality \(k_{\beta}|h^{\prime}_{i}|\) compared to single-head attention (SHA) discussed in Eq. 4. Fig. 4 shows the proposed encoder architecture. For each meta-type node in the output representation \(\tilde{h}_{d}\), \(\tilde{h}_{v}\) and \(\tilde{h}_{p}\) we use \(\mathbb{R}^{64}\) vectors. In addition to meta-type node embeddings, the encoder outputs a _graph embedding_ node shown in grey color **g** by averaging each meta-type node and concatenating them together, where \(h_{\textbf{g}}\in\mathbb{R}^{|\tilde{h}_{u}|+|\tilde{h}_{d}|+|\tilde{h}_{p}|}\).
Fig. 4: The HetGAT encoder architecture. Following Fig. 3 the blue, yellow and green colors represent vehicle, depot and payload meta-type objects in the input HIG. Each relation type is represented in corresponding colors. The HIG is first sent through multiple multi-head attention (MHA) layers and finally a single-head attention (SHA) layer. The graph node embedding is represented in grey color by stacking the _mean nodes_ of each meta-type outputs embeddings.
#### V-B2 Decoder
Let \(\mathbf{g}\), \(\mathbf{val}\) be the graph embedding node and a newly introduced value node. We summarize the steps of constructing the heterogeneous decoder graph (HDG) in Algorithm 2. The decoder accepts the HDG along with the graph embedding \(\vec{h}_{\mathbf{g}}\), depot embeddings \(\vec{h}_{d_{l}}\), and \(\mathbf{val}\) a zero vector for \(\mathbf{val}\) initialization. The decoder processes the HDG with two HetGAT layers where the first layer has MHA and an output layer with SHA. We provide the details of chosen output feature dimensions of each HetGAT layer in Appendix A. The value node embeddings \(\vec{h}_{val}\) are further processed through a fully connected layer \(\mathrm{fc\_val}\) to obtain the value function output. Importantly, we do not follow the feature aggregation for output graph embedding and depot embedding steps with a nonlinear activation at the final layer. Instead, each depot embedding is dot multiplied with the graph node embedding to compute the output query values \(q_{l}=\sigma(\vec{q}_{d}^{T}\vec{q}_{\mathbf{g}})\) for all \(d_{l}\in\mathcal{D}\). Finally, we calculate probabilities associated with each depot in the stochastic policy by using the softmax function over all \(q_{l}\).
```
1  Inputs: \(\mathbf{g}\), \(\mathcal{D}\), \(\mathbf{val}\)
2  Output: \(\mathcal{G}_{dec}\)
3  Add Edge (\(\mathbf{g}\), \(\text{g\_contributes\_val}\), \(\mathbf{val}\))
4  for \(d_{l}\in\mathcal{D}\) do
5      Add Edge (\(d_{l}\), \(\text{d\_contributes\_g}\), \(\mathbf{g}\))
6      Add Edge (\(d_{l}\), \(\text{d\_contributes\_val}\), \(\mathbf{val}\))
7      for \(d_{m}\in\mathcal{D}\) do
8          Add Edge (\(d_{l}\), \(\text{d\_near\_d}\), \(d_{m}\))
9      end for
10 end for
11 Create graph \(\mathcal{G}_{dec}\) with edges.
```
**Algorithm 2** Constructing the HDG: \(\mathcal{G}_{dec}\)
#### V-B3 Fleet Rebalancing Mask
In the absence of suitable payloads nearby, one must favor farther away depots to avoid certain penalization and maximize the rewards. Following this notion, we resemble low-demand mobility environments to a stochastic variant of _reward-collecting travelling salesman_ (RC-TSP); where one's reward depends on a decaying set of rewards scattered in the environment with the added difficulty of accounting for the others' actions. In [24], authors show that masking is beneficial in solving RC-TSP to prevent visiting an already visited node. We, therefore, introduce a fleet rebalancing mask computed using local observations to 1) explore farther away depots in low-demand environments and 2) prevent one from choosing depots in the observable range that does not carry suitable payloads.
Formally, we mask the query values of each depot that is in the observation range but does not contain a suitable payload, setting \(q_{l}=-\infty\) for all \(d_{l}\in\mathcal{D}_{i}^{t}\setminus\{d_{l}\mid\exists p\in\mathcal{P}_{l}^{t}:\mathrm{Cap}(p)\leq\mathrm{Cap}(v_{i})\}\). From a mobility perspective, we believe that this is akin to an intrinsic _fleet rebalancing_ mechanism.
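A sketch of this selection head, combining the decoder's dot-product query values with the rebalancing mask, is shown below with toy tensors:

```
import torch

depot_emb = torch.randn(4, 64)        # decoder output embedding per depot
graph_emb = torch.randn(64)           # decoder output of the graph embedding node g

q = depot_emb @ graph_emb             # query value q_l for each depot
# True for observable depots holding no payload that matches the vehicle's capacity
mask = torch.tensor([False, True, False, False])
q = q.masked_fill(mask, float("-inf"))

probs = torch.softmax(q, dim=0)       # stochastic policy over depots
action = torch.multinomial(probs, 1)  # depot sampled by the vehicle
print(probs, action)
```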
## VI Experiments and Results
### _Simulation Environment_
We implemented the AAM environment on PettingZoo framework [34] and the MARL on Ray RLlib reinforcement learning library [35] to scale the learning process. For implementing the graph neural networks we used the Deep Graph Library (DGL) with a PyTorch back-end. We trained our system on an NVIDIA A100 GPU and an AMD EPYC 7713 processor for 10 hours. 1
Footnote 1: It is also possible to train the HetGAT Enc-Dec policy on a desktop computer with an NVIDIA RTX 3090 GPU and an Intel 12700K CPU in a reasonable time.
For evaluating the proposed approach, we consider a custom AAM environment with mixed depot-depot delivery and depot-client fulfillment requests with a destination client set that is inclusive of the depots \(\mathcal{C}^{\prime}\) = \(\mathcal{D}\cup\mathcal{C}\). We categorize the payloads and vehicles into three different sizes thus \(\mathrm{Cap}(v)\), \(\mathrm{Cap}(p^{lc})\), \(\in\{1,2,3\}\), where \(\mathrm{Cap}(v)=3\) denotes the largest of the UAVs that can carry any payload, and \(\mathrm{Cap}(v)=1\) indicates the smallest that can only carry payloads of size 1. In the experiments, we do not consider the scenario of a UAV carrying multiple payloads at once. During the experiments, we used \(\text{p\_max}=5\) as the maximum payload queue length of a depot. The vehicles use a constant velocity trajectory to navigate to their destinations. We consider a simulation episode of 400 \(\Delta t\) timesteps, and a horizon length \(\mathbf{T}=50\) active timesteps where the agents made decisions for the training. During the training, we skip non-active timesteps to improve efficiency and prevent the training algorithm from collecting unrelated observations which can yield undesirable results. The environment consists of a 24 \(\times\) 24 area where the chosen number of depot and client nodes are positioned in each quadrant in approximately equal numbers.
#### Vi-A1 Populating Payloads
We uniformly choose \(\lambda_{l}\), the expected payload request arrival rate of a depot \(d_{l}\) from three rate parameters \(\in\{0.01,0.05,0.025\}\). We sample from a Poisson distribution using the corresponding expected arrival rate and
Fig. 5: The HetGAT decoder architecture. The critic value function shares layers with the actor network, yet the value branch is only used by the critic network. The final graph and the depot embeddings are multiplied together to output the action-values of choosing a depot \(q_{d_{l}}\).
a fixed interval of 50 timesteps to accumulate the incoming requests at a depot. For every incoming request \(p^{lc}\), we assign a capacity \(\mathrm{Cap}(p^{lc})\in\{1,2,3\}\) by sampling from the normal distribution \(\mathrm{Normal}(\bar{\alpha}_{l},0.1)\), where \(\bar{\alpha}_{l}\) is the expected size of a payload request at \(l\). We assume that the payload requests arriving at a depot are to be delivered to other depots and clients that are closer to it with a higher probability than to those that are much farther away, resembling a realistic fulfillment scenario. Thus, we draw the destination \(c\in\mathcal{C}^{\prime}\setminus d_{l}\) for a payload request \(p^{lc}\) using a normal distribution.
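As an illustration, the snippet below populates one depot's queue for a single 50-timestep interval under these choices; the clipping of sampled sizes to {1, 2, 3} and the uniform toy destination draw are simplifications of the scheme described above.

```
import numpy as np

rng = np.random.default_rng(0)
lam = rng.choice([0.01, 0.05, 0.025])        # expected arrival rate per unit timestep
alpha_bar = rng.uniform(1, 3)                # expected payload size for this depot

n_new = rng.poisson(lam * 50)                # arrivals accumulated over a 50-step interval
for _ in range(n_new):
    size = int(np.clip(round(rng.normal(alpha_bar, 0.1)), 1, 3))   # Cap(p) in {1, 2, 3}
    dest = rng.integers(0, 8)                # toy destination; the paper biases this draw
                                             # toward destinations closer to the depot
    print(f"new payload: size={size}, destination index {dest}")
```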
#### Iii-A2 Reward Function
We calculate the payoff of a payload \(\mathrm{Payoff}(p^{lc})\) as a nonlinear function building on the taxi fare computation scheme proposed by Yang et al. [36]. This discourages the vehicles from always selecting depots with relatively longer rides in search of higher returns, which can cause some actions to dominate in the POSG. Thus,
\[\mathrm{Payoff}(p^{lc})=\mathbf{q_{1}}||x_{l}-x_{c}||^{2}+\mathbf{q_{2}}||x_{l }-x_{c}||+\mathbf{q_{3}}\mathrm{Cap}(p^{lc}), \tag{7}\]
for \(d_{l}\in\mathcal{D}\), \(c\in\mathcal{C}^{\prime}\), \(\mathbf{q_{1}}<0\), and \(\mathbf{q_{2}},\mathbf{q_{3}}>0\). Further, we observed that this nonlinear fare calculation stabilizes the training process significantly. In other words, we incentivize a vehicle on the delivery distance in a concave fashion, thus selecting farther depots is not always preferred, and using the payload size as a _flag fall cost_. The reward of \(v_{i}\) choosing a depot is the difference between the payoff specified in the payload assigned to it by the depot and the vehicle's travel cost to reach the depot (Eq. 8).
\[r_{i}^{t}=\begin{cases}\mathrm{Payoff}(p^{lc})-\mathbf{q_{4}}||x_{v_{i}}-x_{d _{l}}||,&\text{if $d_{l}$ is valid,}\\ 0&\text{if $d_{l}$ is invalid and $x_{d_{l}}=x_{v_{i}}$.}\\ -5&\text{otherwise.}\end{cases} \tag{8}\]
Considering a maximum trip distance of 30, we set \(\mathbf{q_{1}}=-0.0167\), \(\mathbf{q_{2}}=1\), \(\mathbf{q_{3}}=2\), and \(\mathbf{q_{4}}=0.2\).
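The payoff and reward above translate directly into code; the coefficients are those quoted in the text, while the function and argument names are illustrative only.

```python
import numpy as np

Q1, Q2, Q3, Q4 = -0.0167, 1.0, 2.0, 0.2   # coefficients for a maximum trip distance of 30

def payoff(x_l, x_c, cap_plc):
    """Concave distance incentive plus a size-based flag-fall term (Eq. 7)."""
    d = np.linalg.norm(np.asarray(x_l) - np.asarray(x_c))
    return Q1 * d**2 + Q2 * d + Q3 * cap_plc

def reward(x_v, x_dl, x_c, cap_plc, depot_valid):
    """Per-vehicle reward for choosing depot d_l (Eq. 8)."""
    if depot_valid:
        return payoff(x_dl, x_c, cap_plc) - Q4 * np.linalg.norm(np.asarray(x_v) - np.asarray(x_dl))
    if np.allclose(x_v, x_dl):
        return 0.0   # invalid depot, but the vehicle is already co-located with it
    return -5.0
```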
### _One-Shot Training_
In contrast to AMoD, which learns a central coordination policy [10], we learn a generalizable stochastic policy that the agents execute in a decentralized manner. However, we observed that the lack of central coordination can skew the training during on-demand decision-making, where the agents tend to focus on the depots that are populated at higher rates, causing imbalance and increased fulfillment delays. Therefore, we follow a stepwise approach to mitigate this behavior: we first train the vehicle agents on a limited number of payloads that are populated for a single interval of the Poisson process, which we refer to as the _one-shot_ population, for 100 timesteps. This encourages the agents to maximize their rewards by fulfilling as many payload requests as possible, requiring the robots to take more exploratory actions to minimize the effect of partial observability. Next, we deploy the learned policy in the on-demand environment with payload repopulation over a 400-timestep simulation.
For the training, we used a fixed-size heterogeneous vehicle fleet comprising 6 vehicles, 2 of each capacity, and a fixed observation range of \(k_{r}\) = 5 and \(k_{d}\) = 5. We randomized the expected arrival rates and the expected payload sizes associated with the depots during training to prevent the model from overfitting and to help it generalize better to different environments. We compare our approach to two
Fig. 6: The simulation environment with 2 vehicles (green circles), 8 clients (grey circles denoted \(C_{0}\) through \(C_{7}\)), and 6 depots (triangles denoted \(A\) through \(F\)). The top vehicle has the smaller capacity of 1 and the lower vehicle the larger capacity of 3. The text under each depot specifies the types of payloads available there. The depot colors represent the expected arrival rate of incoming requests, with greener indicating a lower rate. The vehicles change color from green to red when they carry a payload. **(a)** Both vehicles are moving toward their chosen origin depots. **(b)** An active timestep at which the larger vehicle has reached the origin \(C\) and picked up a payload (switching its color to red). **(c)** The next active timestep: the smaller vehicle reaches \(A\), which does not contain a suitable payload, and accumulates a reward of \(-5\) according to Eq. 8. **(d)** The next active timestep: the larger vehicle reaches the destination \(F\) and collects a positive reward while the smaller vehicle is still on the move. **(e)** Both vehicles are empty at \(E\) and \(F\). The accumulated rewards are displayed in the next frame.
Fig. 7: Training performance of the HetGAT Enc-Dec architecture compared to HetGAT and HetGCN after 250000 active timesteps. A heterogeneous fleet of 6 vehicles, 2 from each category, is used for the training.
other approaches suitable for defining generalizable deep policy architectures: pure HetGAT and Heterogeneous Graph Convolutional Networks (HetGCN). Fig. 7 shows the training performance of each graph convolution policy architecture measured by the total fleet reward, the sum of all agents' rewards. We trained the system under the CTDE MARL paradigm, where all the vehicles learn a single shared policy during training, requiring it to generalize to different capacities. The HetGAT Enc-Dec policy achieved the highest fleet rewards, followed by HetGAT and HetGCN. For a thorough discussion of CTDE MARL, we refer our readers to [37]. For the HetGAT policy, we only used a module resembling the encoder architecture, but with scalar outputs for the depot representations \(\vec{h}_{d}\) that are used as the action-value outputs. Except for \(\vec{h}_{d}\), we chose a similar number of layers and attention heads for the encoder module of the HetGAT Enc-Dec and for HetGAT, to better highlight the added performance of the former. We also experimented with a policy built on Long Short-Term Memory (LSTM) networks, despite its limited ability to generalize to different mobility networks. However, its training performance was significantly worse than the other methods, so we exclude it from the experiments. A detailed description of all the neural network architectures and training parameters we used is given in Appendix A. For the experiments, we simulated the system for 20 episodes with an episode length of 400 timesteps.
Fig. 8 shows the performance of the different policies measured by the total fleet reward (Fig. 8(a)) and the payload fulfillment percentage (Fig. 8(b)) in a one-shot payload population environment. Here, for comparison, the fleet comprises the same combination we used for training. We observed that the proposed HetGAT Enc-Dec outperforms the HetGAT and HetGCN architectures in the fleet's total reward and fulfills the most payload deliveries. We attribute this to the policy's ability to choose the closest depots with payloads suitable for the corresponding vehicle. The policy with the fleet rebalancing mask gains an additional edge.
### _Transference to On-Demand Mobility_
To evaluate the transferability of the stochastic policy learned with one-shot training to on-demand mobility environments, we change the simulation environment by repopulating the payloads as described in the previous section. To assess the policy's ability to handle spatial and temporal asymmetries in on-demand mobility environments, we simulate high- and low-yielding environments, where the latter has previously unseen, halved payload arrival rates. In Fig. 9, we report the percentages of payloads completed and the rewards received by the vehicle agents for the two environments. The agent fleets operating on the HetGAT Enc-Dec policies recorded the highest rewards in both scenarios, with the masked policy recording roughly equal or higher fleet rewards. In contrast, the HetGAT- and HetGCN-based policies sought to fulfill a higher number of payloads at the expense of individual reward gain (see Fig. 9(b), 9(d)).
This confirms that our HetGAT Enc-Dec policy learns to maximize the agent's and the fleet's collective rewards, successfully reflecting the self-interest of the agents, while maintaining a high fulfillment rate compared to the other generalizable policy architectures. We believe that this behavior can greatly benefit high-affinity, commercial AAM and AMoD fleets where the vehicles must maximize the owners' revenue, while operating under partial observations.
### _Generalizability to Varying Fleets and Environments_
We experimented with different fleet combinations, depots, and clients to evaluate the trained policy's generalizability to varying mobility networks, considering arrival rate imbalance, fleet combination, service area and a vehicle's observational range. In Table I we report the results obtained by changing the fleet size, its vehicle combinations and the service area. For these experiments we simulated the system in on-demand mode with the same rate parameters for 400 timesteps. The "Fleet" and "Rew. \(V\)" columns give the number of vehicles of each capacity in the fleet as a tuple, and the average reward of a vehicle of each type. For the final experiment, we doubled the arrival rates to simulate the effect of _increased demand from the clients_.
Fig. 8: Total fleet reward (a) and fulfillment rates (b) in a one-shot population scenario against different generalizable policy models.
The results show that when increasing the number of depots in the environment while keeping the fleet size constant, all the vehicles receive higher rewards, thus increasing the fleet's collective utility; this is mainly because a robot does not need to travel as far to find suitable payloads, thanks to the abundance of resources. Additionally, larger vehicles obtain higher rewards compared to smaller ones due to their ability to attend to more payload types. Therefore, as one might expect, replacing smaller vehicles with larger ones increases the fleet's reward (Table I, Row 3). Additionally, by adding more vehicles to the fleet, we can cater to the heightened demand caused by newly added depots. We observe a slight drop in the fleet's reward when introducing more client nodes that do not contribute payload requests but only act as destinations (recall that this is a mixed-mobility network, where deliveries can happen between any two depots or between a depot and a client node). We attribute this reduction to the inability of the vehicles to pick up new payloads at their delivery destinations, as opposed to depot-depot deliveries where the destination may contain suitable payloads for the vehicles. However, when the newly introduced nodes cause a surge in the number of payloads, the fleet's collected rewards were observed to increase. This observation conforms with the real-world notion that a service area with low-demand, scattered destinations can yield low utility for the drivers.
### _Policy Generalizability Comparisons_
We evaluated the HetGAT Enc-Dec policy's performance against fleet size, service area, payload arrival imbalance and observability. Throughout these experiments we maintained a 1:1:1 ratio of vehicle types in the fleet, \(k_{r}=5\) vehicle observability, and 50% depot observation.
Fig. 10(a)-10(b) show the fleet's collective reward and the fulfillment rate when changing the fleet size in a service area with 10 depots. As the fleet size increases we observe a generally downward trend in the collective reward, which can be explained by the increased competition within the fleet: when the environment is saturated with vehicles, a vehicle's nearby payloads are fulfilled sooner by others, causing it to travel farther in search of suitable payloads. Fig. 10(b) shows that the addition of new vehicles increases the fulfillment ratio due to this competition. Fig. 10(c)-10(d) show that adding more depots causes the vehicles to obtain higher rewards, and the environment saturates much later. The masked HetGAT Enc-Dec achieved the highest collective reward and fulfillment rate in both environments compared to the other generalizable policy architectures, while requiring a smaller number of vehicles to saturate the environment. _In real-world fleets, this characteristic of our approach directly translates to lower operational costs and subsequently higher revenue margins_.
We compared the agents' performance in low-yielding environments for the same fleet combinations. Fig. 11(a) shows that introducing more agents into resource-limited environments further degrades the fleet reward in all four generalizable policy models. The rewards of the agents executing our masked HetGAT Enc-Dec approach degrade more gracefully than the others while achieving the highest fulfillment ratios (Fig. 11). Increasing the vehicles' observation range to 80% of the closest available depots improved the masked HetGAT Enc-Dec policy's performance significantly (Fig. 11(c)-11(d)), highlighting its ability to include the information
Fig. 10: Fleet reward and fulfillment rate for different fleet sizes. **Top:** An environment with 10 depots and 12 clients with a closest \(k_{d}\)=5 observation topology. **Bottom:** An environment with a larger service area of 15 depots and 12 clients with a closest \(k_{d}\)=8 observation topology. Only 50% of the error in rewards is shown for the HetGAT Enc-Dec and HetGCN policies for clarity.
Fig. 9: Total fleet reward and the percentage of total payloads fulfilled in two on-demand mobility scenarios. **Top:** normal environment rate parameters (0.01, 0.05, 0.025). **Bottom:** lower-yielding environment with rate parameters (0.005, 0.025, 0.0125). The two environments received an average of 103.5 and 69.7 payload request arrivals, respectively. Of the total payload request arrivals, 49.76%, 35.62% and 14.6% correspond to category 1, 2 and 3 payloads. The fleet size is kept fixed.
from a larger number of previously unseen depots into the decision-making. Interestingly, we also noticed that despite not using the attention mechanism, HetGCN outperforms HetGAT in the scalability experiments (Fig. 10, 11).
### _Adaptability to Varying Observation Topologies_
We evaluate the fleet's reward and the fulfillment rate against different observation topologies for the proposed HetGAT Enc-Dec. Importantly, we kept the number of observed vehicles fixed while increasing the visibility of the depots in the mobility network: a realistic consideration, as it is often desirable to achieve higher rewards with as little disclosure of other vehicle locations as possible due to privacy concerns. Fig. 12 shows that our masked HetGAT Enc-Dec policy increases the fleet's reward exponentially as the observability reaches 100%, in contrast to the other policies, which were reluctant to incorporate additional information beyond the range enforced at training time. This showcases our approach's ability to handle time-varying observational topologies, which often arise in AAM due to the stochasticity of wireless networks. Briefly, to maximize the agents' rewards in low-yielding environments, we advocate 1) operating the vehicle agents under the masked HetGAT Enc-Dec policy and 2) revealing more depot information to the agents.
## VII Discussion and Conclusion
We present a novel, generalizable, multi-agent fleet autonomy for coordinating heterogeneous mobility fleets in a decentralized manner under partial observations, building on HetGAT and encoder-decoder neural networks. Extensive experiments conducted under different fleet combinations, service areas, observational topologies, and fulfillment request arrival rates showed that agent fleets operating under HetGAT Enc-Dec policies outperform the other generalizable policy architectures. The novel fleet rebalancing mask further improved the ability of our method to perform in low-yielding on-demand mobility networks and, in particular, to incorporate observational topologies beyond those used at training time into the decision-making. The two policy architectures we proposed achieved the highest fleet reward using the minimum number of vehicles while maximizing the fulfillment ratios: a highly sought-after characteristic for commercial mobility fleets.
|
2301.06579 | Angular adaptivity in P0 space and reduced tolerance solves for
Boltzmann transport | Previously we developed an adaptive method in angle, based on solving in Haar
wavelet space with a matrix-free multigrid for Boltzmann transport problems.
This method scalably mapped to the underlying P$^0$ space during every
matrix-free matrix-vector product, however the multigrid method itself was not
scalable in the streaming limit.
To tackle this we recently built an iterative method based on using an ideal
restriction multigrid with frozen GMRES polynomials (AIRG) for Boltzmann
transport that showed scalable work with uniform P$^0$ angle in the streaming
and scattering limits. This paper details the practical requirements of using
this new iterative method with angular adaptivity. Hence we modify our angular
adaptivity to occur directly in P$^0$ space, rather than the Haar space. We
then develop a modified stabilisation term for our FEM method that results in
scalable growth in the number of non-zeros in the streaming operator with P$^0$
adaptivity. We can therefore combine the use of this iterative method with
P$^0$ angular adaptivity to solve problems in both the scattering and streaming
limits, with close to fixed work and memory use.
We also present a CF splitting for multigrid methods based on element
agglomeration combined with angular adaptivity, that can produce a
semi-coarsening in the streaming limit without access to the matrix entries.
The equivalence between our adapted P$^0$ and Haar wavelet spaces also allows
us to introduce a robust convergence test for our iterative method when using
regular adaptivity. This allows the early termination of the solve in each
adapt step, reducing the cost of producing an adapted angular discretisation. | S. Dargaville, R. P. Smedley-Stevenson, P. N. Smith, C. C. Pain | 2023-01-16T19:35:46Z | http://arxiv.org/abs/2301.06579v1 | # Angular adaptivity in P\({}^{0}\) space and reduced tolerance solves for Boltzmann transport+
###### Abstract
Previously we developed an adaptive method in angle, based on solving in Haar wavelet space with a matrix-free multigrid for Boltzmann transport problems. This method scalably mapped to the underlying P\({}^{0}\) space during every matrix-free matrix-vector product, however the multigrid method itself was not scalable in the streaming limit.
To tackle this we recently built an iterative method based on using an ideal restriction multigrid with frozen GMRES polynomials (AIRG) for Boltzmann transport that showed scalable work with uniform P\({}^{0}\) angle in the streaming and scattering limits. This paper details the practical requirements of using this new iterative method with angular adaptivity. Hence we modify our angular adaptivity to occur directly in P\({}^{0}\) space, rather than the Haar space. We then develop a modified stabilisation term for our FEM method that results in scalable growth in the number of non-zeros in the streaming operator with P\({}^{0}\) adaptivity. We can therefore combine the use of this iterative method with P\({}^{0}\) angular adaptivity to solve problems in both the scattering and streaming limits, with close to fixed work and memory use.
We also present a CF splitting for multigrid methods based on element agglomeration combined with angular adaptivity, that can produce a semi-coarsening in the streaming limit without access to the matrix entries. The equivalence between our adapted P\({}^{0}\) and Haar wavelet spaces also allows us to introduce a robust convergence test for our iterative method when using regular adaptivity. This allows the early termination of the solve in each adapt step, reducing the cost of producing an adapted angular discretisation.
keywords: Radiation transport, Boltzmann, Angular adaptivity, Haar wavelets, AIRG
## 1 Introduction
In this work we consider the mono-energetic steady-state form of the Boltzmann Transport Equation (BTE) written as
\[\mathbf{\Omega}\cdot\nabla_{\mathbf{r}}\mathbf{\psi}(\mathbf{r},\mathbf{\Omega})+\sigma_{\rm t} \mathbf{\psi}(\mathbf{r},\mathbf{\Omega})-\int_{\mathbf{\Omega}^{\prime}}\sigma_{\rm s}(\mathbf{r },\mathbf{\Omega}^{\prime}\to\mathbf{\Omega})\mathbf{\psi}(\mathbf{r},\mathbf{\Omega}^{\prime}) \mathrm{d}\mathbf{\Omega}^{\prime}=S_{\rm e}(\mathbf{r},\mathbf{\Omega}), \tag{1}\]
where the number of particles moving in direction \(\mathbf{\Omega}\), at spatial position \(\mathbf{r}\) is given by the angular flux, \(\mathbf{\psi}(\mathbf{r},\mathbf{\Omega})\). The macroscopic total and scattering cross section are given by \(\sigma_{\rm t}\) and \(\sigma_{\rm s}\), respectively, with an external source of \(S_{\rm e}\).
Solving (1) can be difficult given the dimensionality of the problem; previously we presented methods for building adapted angular discretisations for the BTE [1; 2; 3]. This allowed angular resolution to be focused where important in space/energy. [1] used anisotropic adaptivity on the sphere in a Haar wavelet space, which was built on top of an underlying nested P\({}^{0}\) space. The angular matrices in Haar wavelet space cannot be formed scalably, as the number of non-zeros (nnzs) grows non-linearly with angular refinement. As such, we built a matrix-free multigrid method to solve the adapted problems, which mapped the solution into the underlying P\({}^{0}\) space in \(\mathcal{O}(n)\) and performed a matrix-vector product with the P\({}^{0}\) angular matrices, which is scalable, before mapping back to Haar space.
This method resulted in a reduction in time to solve and memory use when compared to uniform discretisations. We showed that our adaptive process was scalable, enabling the use of up to 15 levels of hierarchical refinement in angle; unfortunately in the streaming limit the matrix-free multigrid could not solve the linear systems with fixed work. This is due to the BTE becoming hyperbolic in the limit of no scattering and we deliberately chose not to use Gauss-Seidel/sweep based smoothers, in an effort to build an iterative method that could scale well on unstructured grids in parallel.
Recently we showed [4] that by solving in P\({}^{0}\) space directly, we could build an iterative method without sweeps with excellent performance in the streaming limit of the BTE. The work in this paper is therefore based on building an adapted angular discretisations in P\({}^{0}\) space and combining this with our new iterative method, enabling the efficient use of adapted angular discretisations in both streaming and scattering problems.
Here we present four contributions: the first is a sparsity-controlled adaptive P\({}^{0}\) discretisation for the BTE; the second is the use of an adapted P\({}^{0}\) discretisation with our AIRG-based iterative method; the third is a method for selecting coarse (C) and fine (F) points for a multigrid hierarchy that can be combined with angular adaptivity to provide semi-coarsenings without requiring matrix entries; and finally a convergence test for iterative methods that decreases the cost of forming an adaptive angular discretisation with regular adaptivity.
## 2 Discretisations
The spatial and angular discretisations used in this work are based on those we used previously and we describe them below [5; 6; 7; 8; 1; 4].
### Angular discretisation
We use a P\({}^{0}\) DG FEM in angle, with the lowest resolution given by one P\({}^{0}\) element per octant (similar to an S\({}_{2}\) discretisation). Subsequent levels of refinement come from dividing each element in four, at the halfway points of the azimuth angle and cosine of the polar angle. All the elements at a given level of refinement therefore have constant area and we normalise our constant basis functions so the uniform (and hence diagonal) angular mass matrix is the identity. This P\({}^{0}\) discretisation is similar to an S\({}_{n}\) product quadrature and features elements that cluster around the poles with uniform refinement. This is not a desirable feature for a uniform discretisation of the BTE, but the nested nature of the refinement means it is simple to build an adapted angular discretisation and hence refinement around the poles only occurs if it is required.
We can build a Haar wavelet discretisation on top of this P\({}^{0}\) space, with the hierarchy in the nested elements replaced with a hierarchy in the wavelet basis functions. These two discretisations are exactly equivalent. As in [4] we solve in P\({}^{0}\) space as we can form an assembled copy of the streaming/removal matrix that has fixed sparsity with angular refinement (see Section 2.2). The equivalence between the P\({}^{0}\) and Haar spaces is still very useful as it allows us to easily tag which angular elements require refinement/de-refinement and hence it enables the reduced tolerance solves discussed below.
Figure 1: Angular domains showing angular adaptivity focused around a single “important” direction, namely \(\mu\in[0,1]\) and \(\omega\in[1.47976,1.661832]\) in a 2D simulation. The P\({}^{0}\) angular discretisation is on the \(r=1\) sphere, but has been projected onto faceted polyhedra for ease of visualisation. The camera is pointed in the \(-z\) direction.
We noted in [1] that solving in \(\mathrm{P}^{0}\) space can easily be used as part of the adaptive process we described previously. Fig. 1 shows an example of an adapted \(\mathrm{P}^{0}\) discretisation at a single node point with several levels of refinement, defined _a priori_ in the \(+y\) direction. In this work we only show regular adaptivity (based on achieving uniform global accuracy in the solution) given our iterative methods are agnostic to how the adapted discretisations are formed; regular adaptivity is not an optimal method for many problems (particularly in the streaming limit), please see [1; 3] for examples using _a priori_ and goal based adaptivity. There have also been a number of other authors who have used adaptivity in angle in Boltzmann applications; these include sparse grid methods [9], \(\mathrm{S}_{n}\) methods [10; 11; 12; 13; 14; 15], FEM methods [16; 17; 18; 19; 20; 21], and specifically finite element/wavelet methods [22; 23; 24; 25; 26].
Our adaptive process starts with a uniform angular discretisation (typically level 1) and performs a solve in \(\mathrm{P}^{0}\) space. The solution is then mapped into Haar wavelet space in \(\mathcal{O}(n)\) and an error metric is formed. Regular adaptivity results in a discretisation that minimises the global error in the problem and is simple to perform. In wavelet space, the size of the wavelet coefficients is influenced by both the size of the flux and the smoothness of the underlying function; thresholding the wavelet coefficients with a fixed value is therefore sufficient to drive regular adaptivity. As such, we take the angular flux in Haar space, and scale it by a thresholding coefficient. This coefficient is input by the user and drives refinement as it is decreased (towards a uniform discretisation). We also scale by the maximum of the angular flux, in an attempt to make the thresholding coefficients somewhat problem agnostic. If each resulting wavelet coefficient is greater than 1.0, we tag it for refinement. If it is less than 0.01, we tag it for de-refinement. We therefore know which angular elements in \(\mathrm{P}^{0}\) space to refine/de-refine, as they are given by the elements which are in the support of each wavelet; see [1]. This is performed across each spatial point separately. The initial condition for the \(\mathrm{P}^{0}\) solve on the adapted discretisation can then be taken from the previous step. The hierarchical nature of the wavelets makes this simple; it is equivalent to interpolating the \(\mathrm{P}^{0}\) solution onto the newly adapted \(\mathrm{P}^{0}\) discretisation. We then continue this adapted process up to a maximum level of refinement, or number of adapt steps.
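A minimal sketch of this thresholding at a single spatial point is shown below; the exact normalisation (dividing the Haar coefficients by the user threshold and the maximum of the flux) is our reading of the description above, and mapping the tagged wavelets back onto the P\({}^{0}\) elements in their support is omitted.

```python
import numpy as np

REFINE_TOL = 1.0     # scaled coefficients above this are tagged for refinement
DEREFINE_TOL = 0.01  # scaled coefficients below this are tagged for de-refinement

def tag_wavelets(psi_haar, threshold_coeff):
    """Regular-adaptivity error metric at one spatial point: scale the Haar
    coefficients, then tag each wavelet for refinement or de-refinement.
    Decreasing threshold_coeff increases the scaled values and hence drives
    more refinement (towards a uniform discretisation)."""
    scaled = np.abs(psi_haar) / (threshold_coeff * np.abs(psi_haar).max())
    return scaled > REFINE_TOL, scaled < DEREFINE_TOL
```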
The only difference between this adaptivity process compared with [1] is that we enforce that if a single wavelet is added, all the wavelets on the same level of refinement that share the same support (a maximum of two other wavelets) on the sphere are also added. Similarly, the removal requires that all wavelets with the same support are below the threshold coefficient and all are removed at once. This ensures that when an angular element is refined (or de-refined), we always have 4 \(\mathrm{P}^{0}\) elements present, which makes the \(\mathrm{P}^{0}\) implementation simpler. In comparison to the matrix-free method in [1], we must also introduce further controls on the sparsity of our discretisation given we solve in \(\mathrm{P}^{0}\) space; this is detailed below.
### Spatial discretisation
Our spatial discretisation is a sub-grid scale FEM, which represents the angular flux as \(\boldsymbol{\psi}=\boldsymbol{\phi}+\boldsymbol{\theta}\), where \(\boldsymbol{\phi}\) is the solution on a "coarse" scale and \(\boldsymbol{\theta}\) is the solution on a "fine" scale. We perform separate finite element expansions on the fine and coarse scales, with continuous linear basis functions on the coarse scale and discontinuous linear basis functions on the fine scale. We then use (constant) basis functions in angle and enforce that the coarse and fine scales have the same angular expansion on co-located nodes. Our discretised form of (1) can then be written as
\[\begin{bmatrix}\mathbf{A}&\mathbf{B}\\ \mathbf{C}&\mathbf{D}\end{bmatrix}\begin{bmatrix}\boldsymbol{\Phi}\\ \boldsymbol{\Theta}\end{bmatrix}=\begin{bmatrix}\mathbf{S}_{\boldsymbol{\Phi} }\\ \mathbf{S}_{\boldsymbol{\Theta}}\end{bmatrix}, \tag{2}\]
The number of unknowns in the coarse scale discretised vector, \(\boldsymbol{\Phi}\), is NCDOFs and the number of unknowns in the fine scale discretised vector, \(\boldsymbol{\Theta}\), is NDDOFs. \(\mathbf{S}_{\boldsymbol{\Phi}}\) and \(\mathbf{S}_{\boldsymbol{\Theta}}\) are the discretised source terms for both scales. (2) is built using standard FEM theory and as such \(\mathbf{A}\) and \(\mathbf{D}\) are the standard continuous Galerkin (CG) and discontinuous Galerkin (DG) FEM matrices that result from discretising (1).
A Schur complement of block \(\mathbf{D}\) allows us to solve for the coarse scale variable, \(\boldsymbol{\Phi}\), given by
\[(\mathbf{A}-\mathbf{B}\mathbf{D}^{-1}\mathbf{C})\boldsymbol{\Phi}=\mathbf{S} _{\boldsymbol{\Phi}}-\mathbf{B}\mathbf{D}^{-1}\mathbf{S}_{\boldsymbol{\Theta}}, \tag{3}\]
with the fine scale solution then computed with
\[\boldsymbol{\Theta}=\mathbf{D}^{-1}(\mathbf{S}_{\boldsymbol{\Theta}}-\mathbf{C }\boldsymbol{\Phi}). \tag{4}\]
The addition of the two solutions \(\boldsymbol{\Psi}=\boldsymbol{\Phi}+\boldsymbol{\Theta}\) (where the coarse solution \(\boldsymbol{\Phi}\) has been projected onto the fine space) then gives us our discrete solution. Solving the sub-grid scale equations as posed would be more expensive than solving
with a DG FEM discretisation, so we sparsify \(\mathbf{D}\) (see also [8; 27; 28; 25; 29; 30; 1; 2]). We replace \(\mathbf{D}^{-1}\) in (3) and (4) with \(\mathbf{\hat{D}}^{-1}\), where \(\mathbf{\hat{D}}\) is the streaming operator with removal and self-scatter, with vacuum conditions applied on each DG element. This removes the jump terms, resulting in element blocks in \(\mathbf{\hat{D}}\), making the computation of \(\mathbf{\hat{D}}^{-1}\) tractable. With a uniform \(\mathrm{P}^{0}\) angular discretisation (as in [4]), this is sufficient to ensure fixed sparsity with either spatial or angular refinement, as \(\mathbf{\hat{D}}\) (and hence \(\mathbf{\hat{D}}^{-1}\)) has diagonal blocks. Unfortunately this is not sufficient when we have adapted our \(\mathrm{P}^{0}\) angular discretisation with differing angular resolution at each spatial point. If we denote the sparsity of the streaming component \(\mathbf{D}_{\Omega}\) as \(S_{\mathrm{D}}\subset\{(i,j)\,|\,(\mathbf{D}_{\Omega})_{i,j}\neq 0\}\), then we enforce \((\mathbf{\hat{D}}^{-1})_{i,j}\neq 0\) only for \((i,j)\in S_{\mathrm{D}}\). This is equivalent to using an ILU(0) to invert our sparsified approximation.
The ILU(0) is only necessary to ensure a fixed sparsity with our adapted \(\mathrm{P}^{0}\) space; for example if we have spatial nodes with differently adapted \(\mathrm{P}^{0}\) discretisations, the construction of our angular discretisation means that the nnzs in each of the blocks of our element matrices depend on how each adapted angle overlaps the others (i.e., the angular mass matrix is no longer diagonal). Fig. 2a shows this on an example element, where the element streaming operator is no longer made up of diagonal blocks. If we inverted this without the sparsity control of an incomplete LU factorisation, we would significantly increase the nnzs in an adapted \(\mathrm{P}^{0}\) space, as shown in Fig. 2b; this is in contrast to a uniform \(\mathrm{P}^{0}\) space where the nnzs in the inverse are the same as in the streaming operator.
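As a concrete illustration of that equivalence, the sketch below performs a plain ILU(0) on a small, dense-stored element block, restricting all fill to the existing sparsity pattern; this is the textbook algorithm, not the production kernel.

```python
import numpy as np

def ilu0(A):
    """ILU(0) (IKJ ordering): the combined unit-lower L and upper U factors
    inherit exactly the sparsity pattern of A, so applying the resulting
    approximate inverse never introduces entries outside S_D."""
    LU = A.copy()
    pattern = A != 0.0
    n = A.shape[0]
    for i in range(1, n):
        for k in range(i):
            if not pattern[i, k]:
                continue
            LU[i, k] /= LU[k, k]
            for j in range(k + 1, n):
                if pattern[i, j]:
                    LU[i, j] -= LU[i, k] * LU[k, j]
    return LU   # L strictly below the diagonal (unit diagonal implied), U on and above
```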
For our adaptivity to be useful, we want the cost of our matvec to scale with the number of _adapted_ unknowns. We can imagine a pathological case where, in one dimension for example, the angular resolution varies across the domain such that each (linear) spatial element has one node at the lowest resolution possible and one at the highest. The streaming operator would have the same nnzs as a uniform \(\mathrm{P}^{0}\) discretisation at the highest resolution, given the effect of overlapping angular elements in the angular matrices, described above. Thankfully in our experience this pathology does not occur in practice; we show this in Section 5.
## 3 Iterative method
The iterative method we use in this work comes from [4] and is briefly detailed here. We solve a right preconditioned version of (2) given by
\[(\mathbf{A}-\mathbf{B}\mathbf{\hat{D}}^{-1}\mathbf{C})\mathbf{M}^{-1}\mathbf{ u}=\mathbf{S}_{\Phi}-\mathbf{B}\mathbf{D}^{-1}\mathbf{S}_{\Theta},\quad\mathbf{u}= \mathbf{M}\boldsymbol{\psi}. \tag{5}\]
We use GMRES(30) to solve this equation and a matrix-free matvec to compute the action of \((\mathbf{A}-\mathbf{B}\mathbf{\hat{D}}^{-1}\mathbf{C})\). The preconditioner, \(\mathbf{M}^{-1}\), uses the additive combination of a CG diffusion-synthetic-acceleration (DSA) method and a
Figure 2: Sparsity of an element streaming matrix from \(\mathbf{D}\) given adapted P0 angle in 2D. The three DG nodes have 10, 7 and 10 angles present, respectively; all three nodes start with three octants at level 1 and one octant at level 2, and the nodes with 10 angles have one of the level 2 patches further refined to level 3.
sparsified form of our streaming/removal operator, which we denote as
\[\mathbf{M}^{-1}=\mathbf{M}_{\text{angle}}^{-1}+\mathbf{M}_{\Omega}^{-1}. \tag{6}\]
The CG DSA preconditioner is based on a CG FEM discretisation of a diffusion equation, \(\mathbf{D}_{\text{diff}}\), with \(\mathbf{R}_{\text{angle}}\) and \(\mathbf{P}_{\text{angle}}\) the restriction/prolongation of the angular flux to the constant moment and hence
\[\mathbf{M}_{\text{angle}}^{-1}=\mathbf{R}_{\text{angle}}\mathbf{D}_{\text{ diff}}^{-1}\mathbf{P}_{\text{angle}}. \tag{7}\]
We can rewrite (3) as the contribution from a streaming/removal component (denoted with a subscript \(\Omega\)) and a scattering component (denoted with a subscript S), or
\[\left(\mathbf{A}_{\Omega}-\mathbf{B}_{\Omega}\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega}\right)\mathbf{\Phi}+\left(\mathbf{A}_{\text{S}}-\left(\mathbf{B}_{\text{S}}(y+\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega})+\mathbf{B}_{\Omega}y\right)\right)\mathbf{\Phi}=\mathbf{S}_{\mathbf{\Phi}}-(\mathbf{B}_{\Omega}+\mathbf{B}_{\text{S}})\hat{\mathbf{D}}^{-1}\mathbf{S}_{\mathbf{\Theta}}. \tag{8}\]
where \(y=\hat{\mathbf{D}}^{-1}\mathbf{C}_{\text{S}}\) and our fine component is \(\mathbf{\Theta}=\hat{\mathbf{D}}^{-1}(\mathbf{S}_{\mathbf{\Theta}}-(\mathbf{C}_{\Omega}+\mathbf{C}_{\text{S}})\mathbf{\Phi})\). In [4] we used \(\mathbf{M}_{\Omega}^{-1}=(\mathbf{A}_{\Omega}-\mathbf{B}_{\Omega}\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega})^{-1}\), but here our adapted \(\mathrm{P}^{0}\) space in angle requires further sparsification. If we denote the sparsity pattern of \(\mathbf{B}_{\Omega}\) as \(S_{\text{B}}\subset\{(k,l)\,|\,(\mathbf{B}_{\Omega})_{k,l}\neq 0\}\), then a sparsified streaming operator is given by
\[\mathbf{M}_{\Omega}^{-1}=\left(\mathbf{A}_{\Omega}-(\mathbf{B}_{\Omega}(\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega})_{i,j})_{k,l}\right)^{-1},\quad(i,j)\in S_{\text{D}},\;(k,l)\in S_{\text{B}}. \tag{9}\]
With uniform \(\mathrm{P}^{0}\) angle, (9) is exactly \(\mathbf{A}_{\Omega}-\mathbf{B}_{\Omega}\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega}\); with adapted \(\mathrm{P}^{0}\) angle it is equivalent to computing the matrix product \(\hat{\mathbf{D}}^{-1}\mathbf{C}\) with no fill-in, and then using this result to compute \(\mathbf{B}_{\Omega}\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega}\), again with no fill-in. This is necessary as again the overlap of adapted angular elements means the product of two (element) matrices with sparsity shown in Fig. 2a results in the sparsity given in Fig. 2b, which is unacceptable. When computing the action of the matrix triple-product one-by-one as part of a matrix-free matvec for the outer GMRES, the extra non-zeros are not a concern, it is only when we wish to form an assembled version of \(\mathbf{A}_{\Omega}-\mathbf{B}_{\Omega}\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega}\) to precondition with that we must take care to ensure no extra fill-in occurs. Practically, we store an assembled copy of (9) to apply the preconditioner so we use this as a replacement for the non-sparsified \(\mathbf{A}_{\Omega}-\mathbf{B}_{\Omega}\hat{\mathbf{D}}^{-1}\mathbf{C}_{\Omega}\) in (8). The difference between these two approaches is simply a modified stabilisation term when we have adapted.
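The two pattern-restricted products used to assemble (9) can be sketched with a helper that simply discards any fill-in outside a prescribed sparsity pattern; for brevity the full product is formed and then masked, whereas a production code would restrict the fill during the sparse multiply itself.

```python
import scipy.sparse as sp

def matmul_no_fill(A, B, pattern):
    """Compute A @ B (scipy sparse matrices) but keep only entries whose
    positions lie inside the non-zero pattern of `pattern`."""
    C = (A @ B).tocsr()
    mask = pattern.tocsr(copy=True)
    mask.data[:] = 1.0                   # keep positions, ignore values
    C = C.multiply(mask).tocsr()
    C.eliminate_zeros()
    return C

# Tiny demo: the product of two tridiagonal matrices is pentadiagonal, but
# masking with a tridiagonal pattern discards that fill-in.
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(6, 6), format="csr")
print(matmul_no_fill(T, T, pattern=T).nnz, (T @ T).nnz)   # 16 vs 24
```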
We now require a method to apply the inverses in our preconditioner. In [4] we developed a multigrid method known as AIRG, based on a reduction-style multigrid [31; 32]. We used a single V-cycle of AIRG per application of the preconditioner to apply \(\mathbf{M}_{\Omega}^{-1}\). The diffusion operator was applied with a single V-cycle of _boomerAMG_ from _hypre_. We use the same approach here. If we consider a general linear system \(\mathbf{A}\mathbf{x}=\mathbf{b}\) we can form a block-system due to a coarse/fine (CF) splitting of the unknowns as
\[\begin{bmatrix}\mathbf{A}_{\text{ff}}&\mathbf{A}_{\text{fc}}\\ \mathbf{A}_{\text{cf}}&\mathbf{A}_{\text{cc}}\end{bmatrix}\begin{bmatrix}\boldsymbol{x}_{\text{f}}\\ \boldsymbol{x}_{\text{c}}\end{bmatrix}=\begin{bmatrix}\boldsymbol{b}_{\text{f}}\\ \boldsymbol{b}_{\text{c}}\end{bmatrix}. \tag{10}\]
We discuss the CF splitting further in Section 4. Ideal prolongation and restriction operators are given by
\[\mathbf{P}=\begin{bmatrix}-\hat{\mathbf{A}}_{\text{ff}}^{-1}\mathbf{A}_{ \text{fc}}\\ \mathbf{I}\end{bmatrix},\quad\mathbf{R}=\begin{bmatrix}-\mathbf{A}_{\text{cf}} \hat{\mathbf{A}}_{\text{ff}}^{-1}&\mathbf{I}\end{bmatrix}, \tag{11}\]
with the coarse-grid matrix computed with \(\mathbf{A}_{\text{coarse}}=\mathbf{R}\mathbf{A}\mathbf{P}\). Repeating this process on the coarse-grid then builds a multigrid hierarchy. AIRG forms approximations to \(\mathbf{A}_{\text{ff}}^{-1}\), namely \(\hat{\mathbf{A}}_{\text{ff}}^{-1}\), by using fixed-order GMRES polynomials. This results in a stationary multigrid method that can be applied with just matrix-vector products (i.e., no dot products). One key feature of AIRG is that it doesn't rely on any block or lower-triangular structure in our matrix. This is essential given that our spatial discretisation results in a CG-stencil in (3) and hence our linear system does not feature lower-triangular blocks in the streaming limit, unlike a traditional DG FEM. If we use angular adaptivity in P\({}^{0}\) space, different angles in an octant can be coupled across space in the streaming limit, and hence we no longer have independent angle blocks in our matrix. Instead our matrix has at most 4 (in 2D, or 8 in 3D) angular blocks given by the octant coupling. We found in [4] that with uniform angle, in both the streaming and scattering limit, our iterative method with AIRG
used close to fixed work in both the setup and solve with constant memory consumption. We would hope that the block-independent nature of AIRG gives the same performance when we have adapted in angle; we examine this in Section 5.
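For reference, a dense two-level sketch of this reduction cycle with the ideal operators of (10)-(11) is given below; for clarity it inverts \(\mathbf{A}_{\text{ff}}\) exactly, whereas AIRG replaces \(\mathbf{A}_{\text{ff}}^{-1}\) with a fixed-order GMRES polynomial so the whole cycle needs only matrix-vector products.

```python
import numpy as np

def two_level_reduction(A, b, x, fpts, cpts, nu=1):
    """One two-grid cycle with ideal restriction/prolongation.  A is a dense
    numpy array, fpts/cpts are integer index arrays from the CF splitting."""
    Aff, Afc = A[np.ix_(fpts, fpts)], A[np.ix_(fpts, cpts)]
    Acf, Acc = A[np.ix_(cpts, fpts)], A[np.ix_(cpts, cpts)]
    Aff_inv = np.linalg.inv(Aff)          # AIRG: low-order GMRES polynomial instead

    W = -Aff_inv @ Afc                    # P = [W; I]
    Z = -Acf @ Aff_inv                    # R = [Z  I]
    A_coarse = Acc + Z @ Afc              # equals R A P for the ideal R, P

    for _ in range(nu):                   # F-point relaxation
        x[fpts] += Aff_inv @ (b[fpts] - A[fpts] @ x)
    r = b - A @ x
    xc = np.linalg.solve(A_coarse, Z @ r[fpts] + r[cpts])
    x[cpts] += xc                         # coarse correction, prolonged by P
    x[fpts] += W @ xc
    for _ in range(nu):                   # post F-point relaxation
        x[fpts] += Aff_inv @ (b[fpts] - A[fpts] @ x)
    return x
```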
To measure the amount of work required to solve (3), (4) and form \(\boldsymbol{\Psi}\), we use the metrics defined in [4]. This is given in terms of the number of "Work Units", which is a FLOP count scaled by the number of FLOPs required to compute a matrix-free matvec of (8). In order to make comparisons with traditional DG FEM/source iteration methods easier, we also present the FLOP count scaled by the number of FLOPs required to compute a matrix-free matvec with a DG FEM discretisation. Please see [4] for a more detailed definition of these quantities.
## 4 CF splitting
In [4] we used the Falgout-CLJP coarsening algorithm implemented in _hypre_ to determine the CF splitting required by our AIRG multigrid method (see [33] for some general strategies related to CF splittings). In an effort to build cheaper CF splitting algorithms, in this work we also use the element agglomeration algorithms from [1; 34]. These methods were vital to our previous work, as we needed to build and apply spatial tables on coarse elements in order to compute matrix-free matvecs on lower spatial grids. In this work we don't require the coarse elements these methods provide, but we can still use the spatial CF points they provide to pick which of our space/angle DOFs are fine and coarse. Given these algorithms only depend on the spatial grid, they can be cheaper to run than algorithms which require access to the matrix entries, though because of this we would expect them to perform less well, given the same spatial CF points would be applied to each angle, and hence ignore the directionality. For a reduction-based multigrid, the selection of "good" CF points results in a well-conditioned \(\mathbf{A}_{\text{ff}}\), though as we demonstrate in Section 5 this can be ameliorated through the use of strong approximations to \(\mathbf{A}_{\text{ff}}^{-1}\).
In this paper when using uniform angle (i.e., for the first solve in our adapt process), our element agglomeration CF splitting has no directionality. As mentioned in Section 2.1 however, the P\({}^{0}\) angular discretisation used in this work can adapt, refining in "important" directions. The adapt process solves a linear system with a uniformly refined angular discretisation in the first step, followed by angular refinement at each spatial point. From there each subsequent adapt step continues to build anisotropically adapted angular discretisations. For the linear solves after the first step, we can use the directional information contained within the adapted angular discretisations to determine a spatially dependent "coarsening direction" and hence a set of CF points with directionality. There are many ways we could define a "coarsening direction" at a given spatial point; we could pick the direction of the most refined angular element, compute an average direction on the sphere across all the angular elements, compare the distribution of angles or level of refinement across the sphere, etc. For simplicity in this work, we set the coarsening direction at each spatial node point to be the direction given by the centre of the most refined angular element at that point, or if _a priori_ angular refinement is used, the direction of that refinement is used.
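A sketch of this choice at one spatial node is given below; the tuple layout used for an angular element (refinement level, element-centre \(\mu\), element-centre \(\omega\)) is illustrative rather than the code's actual data structure.

```python
import numpy as np

def coarsening_direction(angular_elements):
    """Coarsening direction at one spatial node: the centre of the most
    refined angular element present there, returned as a unit vector.
    Each entry of angular_elements is (level, centre_mu, centre_omega)."""
    level, mu, omega = max(angular_elements, key=lambda e: e[0])
    sin_theta = np.sqrt(max(0.0, 1.0 - mu * mu))
    return np.array([sin_theta * np.cos(omega), sin_theta * np.sin(omega), mu])
```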
We then have a set of coarsening directions at each spatial node point and this is then used to guide our element agglomeration. The spatial coarse points are the vertices of the coarse agglomerates. If a spatial node is designated coarse, all the angular DOFs on that spatial node are also tagged as coarse. The key to this process is that it must result in a CF splitting that reflects the underlying streaming or scattering limits. One of the benefits of using element agglomeration in this fashion is that after several adapt steps, we could consider the "coarsening direction" to be (mostly) converged and hence freeze the element agglomeration. The spatial CF points are then frozen and any additional angular DOFs added in subsequent adapt steps can be cheaply tagged based on the frozen spatial CF points. This could make it cheaper to form the CF splitting with many adapt steps, as we would only need to perform the element agglomeration a small number of times.
For many problems in the streaming limit, the angular adaptivity will pick out a limited number of key direction(s) and hence the coarsening needs to occur in those directions. We should note that the coarsest level of refinement on our angular domain is given by a single basis function per octant. If we have a single important direction, for example in Fig. 1 and our coarsening direction has been computed as say, \(+y\), there will always be some angles (the unrefined elements) that are not well represented by the coarsening direction. In the limit of angular refinement however, given the AMR-style nested refinement of our angular adaptivity, the majority of angles at that node point will be well represented by the coarsening direction.
In scattering regions however, the angular adaptivity tends to uniform refinement and hence the coarsening needs to form well-shaped circular/spherical agglomerates (as there is no "important" direction), with agglomerates that
could have a different number of fine elements compared to the streaming case, given possible differences in optimal coarsening rates. The combination of spatially-dependent coarsening directions formed through angular adaptivity and scattering cross-sections should therefore provide us with the information required to compute a directional element agglomeration algorithm that provides "good" CF splitting for the majority of unknowns in our angularly adapted problems.
There are a few related problems with this scheme. For example, both the linear system in the first adapt step and problems where uniform refinement is triggered far from the scattering limit won't have the required directional information. The first of these is not a major concern as the number of unknowns in the first adapt step is small compared to the number of unknowns at all the other adapt steps. In both cases we must return to using the directionless agglomeration algorithms described in [34]. We can also have the case where we have adapted with the angular DOFs concentrated in small regions of the spatial mesh, but the element agglomeration proceeds with a constant agglomerate size. This can result in many DOFs in those refined regions not being tagged as "coarse"; the non-uniformity of the distribution of angular DOFs across the spatial mesh therefore persists on the coarse grid. This means our multigrid may have very different operator complexity when compared to uniform angle. These problems may harm the performance of our multigrid methods; we examine this further below but note that the traditional coarsening algorithms that rely on matrix entries can always be used.
### Directional element agglomeration method
Previously we presented 7 different element agglomeration methods from the literature and compared their performance on scattering/absorbing problems [34] with the matrix-free multigrid from [1]. Here we present one of those algorithms that has been modified to use directional information; all of the algorithms mentioned in [34] can be modified (and perform similarly).
Our element agglomeration has been modified with very simple heuristics; if the average scattering cross-section is greater than 1 (in this work we use the actual cross-section value on each element; ideally this would just be defined as the mean free path in an element, or an equivalent dimensionless quantity), or the number of angles on a spatial node is "close" to that of a uniform discretisation at that level of refinement (we define "close" as having \(>63\%\) of the uniform angles; this is equivalent to a level 2 discretisation having more than 2 octants refined), then agglomeration proceeds without directional information. If we do proceed with directional coarsening, the tangent vector to the face on the spatial mesh is compared with the average coarsening direction computed on that face (i.e., an average of the coarsening direction across all the spatial nodes on that face). If those two vectors are close to parallel then agglomeration across that face is discouraged; we define close to parallel as the smallest angle between the vectors being less than \(\pi/4\). The cleanup routine described in [34] has also been modified to use directional information. Any elements which require cleanup (e.g., unused elements, elements within completely closed agglomerates, etc) are added to the agglomerates closest to the coarsening direction with the smallest number of elements.
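The heuristics above can be summarised as follows; how the per-face decision is then turned into METIS edge weights is omitted, and the function names are our own.

```python
import numpy as np

def use_directional_coarsening(mean_sigma_s, n_angles, n_uniform_angles):
    """Directional agglomeration is only used away from the scattering limit
    and when the node is not close to uniformly refined in angle."""
    return mean_sigma_s <= 1.0 and n_angles <= 0.63 * n_uniform_angles

def discourage_agglomeration_across(face_tangent, face_coarsening_dir):
    """Discourage agglomeration across a face whose tangent is close to
    parallel (smallest angle < pi/4) with the averaged coarsening direction
    on that face, so agglomerates grow along the streaming direction."""
    t = face_tangent / np.linalg.norm(face_tangent)
    d = face_coarsening_dir / np.linalg.norm(face_coarsening_dir)
    smallest_angle = np.arccos(np.clip(abs(np.dot(t, d)), 0.0, 1.0))
    return smallest_angle < np.pi / 4
```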
Algorithm 1 uses METIS and requires setting a desired agglomerate size to control the coarsening rate. As mentioned above, typically we would consider the ideal coarsening rate to be dependent on the cross-section. Previously [34] we examined the ideal agglomerate size with our matrix-free multigrid applied to scattering/absorbing problems. The simple grid transfer operators we used in that work meant that the multigrid method could be considered a geometric multigrid and as such we found very large number of elements (20-200 in two and three dimensions, respectively) in an agglomerate were required to keep the grid complexity low and hence maintain good performance. The same is not the case in this work, where our multigrid methods are much closer to AMG/AMGe style methods and we require high quality interpolation/restriction for good performance across all parameter regimes. This requires much smaller numbers of elements in an agglomerate and hence higher grid complexities, although we do use the same coarsening rate in streaming and scattering problems in this work.
As in [34], once top grid agglomeration has occurred on an unstructured grid, the lower grids have fundamentally different spatial connectivity and hence come to resemble structured grids, where the agglomerate size can be easily set to a constant across levels. As such we set the agglomerate size to 6 in 2D and 12 in 3D on the top grid, whereas on lower grids we set the agglomerate size to 2 in 2D and 4 in 3D (i.e., we perform aggressive coarsening on the top grid).
## 5 Results
Outlined below are two examples problems taken from [4], in the streaming and scattering limit that we use to test our P\({}^{0}\) angular adaptivity. We solve our linear systems with GMRES(30) to a relative tolerance of 1\(\times\) 10\({}^{\text{-}10}\), with an absolute tolerance of 1\(\times\) 10\({}^{\text{-}50}\) and use an initial guess of zero unless otherwise stated. We use the additive preconditioners defined in [4] and all the same parameters as that work. This is the case even when we have used angular adaptivity and hence may have non-zero initial guesses from previous adapt steps. We do this to ensure fair comparisons across different material regimes at different levels of angular refinement. This is in order to show that the convergence of our method is not dependent on a "good" initial condition in some material regimes. In the scattering limit the non-zero guesses from coarser angular discretisations are helpful, but in the streaming limit ray-effects mean that coarse angular discretisations often do not provide good approximations to refined angular discretisations. If using the element agglomeration CF splitting, we rerun this at each adapt step, rather than freeze it after a set number of steps. All tabulations of memory used are scaled by the total NDOFs in \(\boldsymbol{\psi}\) in (2), i.e., NDOFs=NCDOFs + NDDOFs.
### Angular adaptivity
Section 2.1 described how our P\({}^{0}\) angular discretisation can adapt anisotropically on the sphere, allowing different angular refinement at different spatial points. As such we examine the use of AIRG and our additive iterative method in these adapted systems, along with the directional element agglomeration method described in Section 4.1. To see the performance on the same problems with uniform angle, please see [4]. In particular, [4] showed that with uniform angular refinement, in both the streaming and scattering limit we can solve our problem with close to fixed work. The first step of our adapt process solves at a (low) uniform level of angular refinement and then this information (through some error metric) is used to trigger one level of refinement where required, followed by subsequent solves/refinements. We therefore expect our subsequent solves to be cheaper than a uniform equivalent, given they should have fewer DOFs, and given the sparsity control in Section 2.2 there should be fewer nnzs in our matrices.
#### 5.1.1 Pure streaming problem
Table 1 shows the results from using a regular angle adapt with fixed spatial resolution (this is the third refined mesh from [4]) in the streaming limit, with directional element agglomeration to compute the CF splitting. Fig. 4 shows the scalar flux and the distribution of our angular resolution throughout space. Our adapt process is focusing resolution in the directions away from the source in order to resolve the rays-effects in this problem. Using a regular
adapt process in this problem is not necessary, given the optimal angular resolution can be easily determined _a priori_ (refining in directions looking away from the source), but we wished to show the effect of solving our adapted discretisations at different adapt steps. Fig. 4 also shows that after the first adapt step, the directional element agglomeration is gluing together elements in the "important" direction at each spatial node, given by the refined angular flux in Fig. 5. We see this results in an iteration count and overall work that is close to flat, with approximately 15% growth after 5 adapt steps. Interestingly, if we force the element agglomeration to be directionless at every adapt step, we see very similar cycle and operator complexities, with 15, 16, 17, 17 and 18 iterations. This implies that the directional element agglomeration is unnecessary. Further investigation revealed that it is the high GMRES polynomial order in AIRG (\(m=4\), i.e., a third order polynomial) that is compensating for the directionless CF splitting; reducing the polynomial order to 1 with the directional CF splitting gives 17, 31, 44, 53, 65 iterations compared with directionless at 17, 34, 40, 56 and 81 iterations. Our strong approximations to the ideal operators and F-point smoothers are compensating for the poorer CF splitting.
With higher spatial resolution (not pictured), the adapt can also remove the angular resolution, added in the initial adapt steps, in the regions between the rays; the coarse spatial resolution used in Fig. 4 means that numerical diffusion keeps some of the angular resolution above the removal threshold. Even with this excess resolution, we have 2.8\(\times\) and 4.8\(\times\) fewer NDOFs for the adapted solves in steps 2 and 3 than in a uniform level 2 and 3 discretisation in this problem. Similarly, there are 2.75\(\times\) and 4.6\(\times\) fewer nnzs in the adapted matrices compared to the uniform. This shows that the pathology described in Section 2.2, where the nnzs grow considerably with the adapt, is not seen in this problem and that the sparsity control in Section 3 is effective; the NDOFs grow 4.74\(\times\) from adapt step 1 to 5, with the nnzs growing 4.88\(\times\), or an increase of around 3% above the NDOFs.
Table 2 shows the results from using the Falgout-CLJP CF splitting instead of our directional element agglomeration algorithm, and we see a much lower iteration count with similarly flat work at each adapt step, and fixed memory use of 10-11 copies of the angular flux. This shows that accounting for the directionality of all the angles (not just the "important" angles) can improve the convergence in the streaming limit. Both the directional element agglomeration and Falgout-CLJP CF splitting however result in close to fixed work in this streaming problem when adapting. This shows the power of combining our iterative method and P\({}^{0}\) angular adaptivity and is in contrast to the results in [1], where we saw that after 3 angular adapt steps in a streaming problem, solving in Haar space with a matrix-free multigrid resulted in an iteration count that grew from 25 to 91.
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline CG nodes & Adapt step. & NDOFs & \(n_{\text{its}}\) & CC & Op Complx & WUs\({}^{\text{full}}\) & WUs\({}^{\text{DG}}\) & Memory \\ \hline
2313 & 1 & 6.3\(\times\) 10\({}^{4}\) & 17 & 5.1 & 2.7 & 115 & 27.3 & 10.9 \\
2313 & 2 & 9.2\(\times\) 10\({}^{4}\) & 16 & 5.3 & 2.8 & 112 & 26.9 & 11.3 \\
2313 & 3 & 2.2\(\times\) 10\({}^{5}\) & 17 & 5.8 & 3.0 & 128 & 30.5 & 11.9 \\
2313 & 4 & 2.8\(\times\) 10\({}^{5}\) & 16 & 5.2 & 2.8 & 111 & 26.5 & 11.2 \\
2313 & 5 & 3.1\(\times\) 10\({}^{5}\) & 19 & 5.3 & 2.8 & 130 & 31.0 & 11.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results from using AIRG on a pure streaming problem in 2D with CF splitting by directional element agglomeration, drop tolerance on **A** of 0.0075 and **R** of 0.025, with regular angular adaptivity with a refinement tolerance of 0.001, a maximum of 3 levels of angular refinement and 5 adapt steps. The WUs listed are scaled by the nnzs in the _adapted_ solve at each step.
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline CG nodes & Adapt step. & NDOFs & \(n_{\text{its}}\) & CC & Op Complx & WUs\({}^{\text{full}}\) & WUs\({}^{\text{DG}}\) & Memory \\ \hline
2313 & 1 & 6.3\(\times\) 10\({}^{4}\) & 9 & 4.4 & 2.87 & 60 & 14.1 & 10.2 \\
2313 & 2 & 9.2\(\times\) 10\({}^{4}\) & 11 & 4.7 & 3.2 & 74 & 17.6 & 10.7 \\
2313 & 3 & 2.2\(\times\) 10\({}^{5}\) & 12 & 5.0 & 3.6 & 85 & 20.2 & 11.3 \\
2313 & 4 & 2.8\(\times\) 10\({}^{5}\) & 11 & 4.6 & 3.3 & 73 & 17.5 & 10.7 \\
2313 & 5 & 3.1\(\times\) 10\({}^{5}\) & 11 & 4.5 & 3.2 & 72 & 17.2 & 10.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results from using AIRG on a pure streaming problem in 2D with CF splitting by the _hypre_ implementation of Falgout-CLJP with a strong threshold of 0.2, drop tolerance on **A** of 0.0075 and **R** of 0.025, with regular angular adaptivity with a refinement tolerance of 0.001, a maximum of 3 levels of angular refinement and 5 adapt steps. The WUs listed are scaled by the nnzs in the _adapted_ solve at each step.
In general with our adapted matrices, we might expect similar spectra when compared to the uniform matrices (i.e., when comparing a uniform discretisation to that of an adapt with the same maximum level of refinement) and therefore require a similar (or smaller) number of iterations to converge. Typically this is the case. Table 2 however shows that with Falgout-CLJP CF splitting, the iteration count starts at 9 given the adapt process starts with a uniform level 1 solve; the second and third steps however take 11 and 12 iterations. The results in [4] show that with uniform level 2 and 3 discretisations this problem took 10 and 11 iterations to solve, respectively. The uniform level 3 matrix on this mesh has a condition number of approx. 560 compared with around 1780 for the adapted discretisation in step 3 pictured in Fig. 4g, resulting in the increase in iteration count. Fig. 3 shows that the field of values for the adapted discretisation is closer to the origin when compared to the uniform, and the convergence of our GMRES polynomials is affected by this [36]. As noted the adapted discretisation has fewer DOFs (and nnzs) so it is still much cheaper to solve than the uniform discretisation, but it is worth noting that solving the matrices generated by our adapt steps is not always equivalent to solving a uniform discretisation.
#### 5.1.2 Scattering problem
We now examine the performance of our additively preconditioned iterative method with adaptivity in the scattering limit. We only require three adapt steps to resolve the required resolution in this problem; Fig. 6 shows this adapt process and we can see the majority of the angular resolution in the problem is focused around the source. Fig. 6 also shows that given the high scattering cross-section, the element agglomeration has proceeded uniformly, with no directional information after the first adapt step. Table 3 shows that with the CF splitting by element agglomeration, we see a plateau in the iteration count and work. In contrast to the pure streaming problem in Section 5.1.1, performing the CF splitting with Falgout-CLJP results in an identical iteration count (see Table 4), with slightly higher work due to the higher cycle complexity. This confirms that although the element agglomeration results in poorer CF splittings, the streaming/removal operator is easier to invert and in problems with a large removal term, a simple CF splitting algorithm without access to the matrix entries can be sufficient to ensure scalability.
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline CG nodes & Adapt step. & NDOFs & \(n_{\text{its}}\) & CC & Op Complx & WUs\({}^{\text{mf}}\) & WUs\({}^{\text{DG}}\) & Memory \\ \hline
2313 & 1 & 6.3\(\times\) 10\({}^{4}\) & 25 & 3.9 & 2.0 & 38 & 69 & 16.9 \\
2313 & 2 & 2.4\(\times\) 10\({}^{5}\) & 27 & 3.8 & 2.0 & 39 & 72 & 16.5 \\
2313 & 3 & 3.1\(\times\) 10\({}^{5}\) & 27 & 3.8 & 2.0 & 38 & 76 & 16.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 10.0 in 2D with regular angular adaptivity with a refinement tolerance of 0.001, a maximum of 3 levels of angular refinement and 3 adapt steps. The WUs listed are scaled by the nnzs in the _adapted_ solve at each step. The cycle and operator complexity listed are for AIRG on \(\mathbf{M}_{\Omega}\) with CF splitting by element agglomeration.
Figure 3: Field of values of the streaming operators on the third spatial grid. Blue is uniform level 3 angular refinement, red is an adapted angular discretisation with a maximum level of refinement of 3.
For both methods, the iteration count for each of the adapt steps is lower than or equal to that of the uniform angular refinement shown in [4], with the work and memory use constant at around 38 WUs and 17 copies of the angular flux, respectively. The NDOFs grow 4.96\(\times\) from adapt step 1 to 3, with the growth in the nnzs of the streaming/removal operator at 5.07\(\times\), giving an increase of around 2%.
#### 5.1.3 Reduced tolerance adapts
In the previous sections we have shown that AIRG and our additively preconditioned iterative method can solve adapted P\({}^{0}\) problems in both the streaming and scattering limit with close to fixed work with a zero initial condition used at each adapt step. In practice however we would like to use the solution from the previous adapt step as an initial condition to reduce the number of iterations; in streaming problems where the ray-effects between different adapt steps do not align we find this doesn't change the convergence. In scattering problems however it can help reduce the iteration count.
Furthermore, our previous work has used reduced tolerance solves [1; 2; 3] to decrease the cost of our adaptive process, where the linear system at each adapt step, except the last, is solved to a looser tolerance, typically 1\(\times\) 10\({}^{-3}\) in the 2-norm; this is not very robust and requires problem-specific tuning. We noted in [1] however that we only need to converge sufficiently such that the error in each of our wavelets (this is also true for any hierarchical angular discretisation, such as P\({}_{n}\) or FP\({}_{n}\)) has converged to within a relative tolerance of 0.1, so that we can determine whether the error is greater than the refinement threshold of 1.0 defined in Section 2.1 (and equivalently for de-refinement with a threshold of 0.01). The convergence criterion for the iterative method during each adapt step, except the last, can therefore be set as the infinity norm of the relative error being less than 0.1.
With goal-based adaptivity this requires that the error metric used has a very good effectivity index and we are forced to compute the goal-based error metric at each iteration which may be expensive. For regular adaptivity however this is very simple, as the error is given by a scaled version of our P\({}^{0}\) solution mapped to wavelet space. As such, with our P\({}^{0}\) discretisation we need to map to the equivalent wavelet space (at the cost of an \(\mathcal{O}(n)\) mapping) and compute the relative change of each wavelet coefficient; if all the wavelet coefficients have converged to a relative tolerance of 0.1 our iterative method has converged sufficiently for this adapt step and we can exactly determine the refinement/de-refinement required by the next adapt step.
This is made more difficult by our sub-grid scale discretisation, as the error calculation and thresholding after each adapt step uses \(\boldsymbol{\Psi}\), and hence requires computing \(\boldsymbol{\Theta}\) with (4) at every iteration which is costly given the matrix multiplications required. Instead, we map the "coarse" variable \(\boldsymbol{\Phi}\) to wavelet space at each iteration and check the convergence of the wavelet coefficients on the continuous mesh. This doesn't necessarily produce the exact adapted discretisation that would otherwise have been built, but it is very close; there are typically only a handful of wavelets that would otherwise be refined/de-refined at any single adapt step and those can be picked up by subsequent steps. Fig. 8 shows the results from using this reduced tolerance in a streaming problem and we can see that after 5 adapt steps, this results in a scalar flux and adapted discretisation almost identical to that without the reduced tolerance solve, shown in Figures 4m and 4k. We can see in Fig. 8 that there is a small region at around \(x=1.5,y=0\) where refinement has not been triggered in the same way, but overall we find this is a very robust way to reduce the cost of our adaptive process.
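As an illustration, this convergence test reduces to a few vector operations per iteration. The fragment below is a purely illustrative R sketch (it is not the production implementation, and the argument names are hypothetical), assuming the wavelet coefficients from two successive iterations are available:

```r
# Illustrative sketch only: test whether every wavelet coefficient has converged
# to a relative tolerance of 0.1 between two successive iterations.
# coeffs_old / coeffs_new are the coefficients obtained by mapping the coarse
# solution Phi to Haar space (an O(n) mapping) at each iteration.
wavelets_converged <- function(coeffs_old, coeffs_new, rel_tol = 0.1, eps = 1e-12) {
  # relative change of each coefficient; eps guards against division by zero
  rel_change <- abs(coeffs_new - coeffs_old) / pmax(abs(coeffs_old), eps)
  # infinity-norm criterion: every coefficient must have settled to within rel_tol
  max(rel_change) < rel_tol
}
```

When this returns TRUE, the refinement/de-refinement flags for the next adapt step can be computed without solving the current step to the full tolerance.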
Table 5 shows the results from using both a non-zero initial guess (which has very little effect) and the reduced tolerance solves in a streaming problem. We should note we have included the extra cost that comes from mapping to wavelet space and computing the relative change in each wavelet coefficient at every iteration in the WUs. Compared to Table 2 we can see the cost of the first four adapt steps has been reduced considerably. If we compare the total
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline CG nodes & Adapt step. & NDOFs & \(n_{\text{its}}\) & CC & Op Complx & WUs\({}^{\text{mf}}\) & WUs\({}^{\text{DG}}\) & Memory \\ \hline
2313 & 1 & 6.3\(\times\) 10\({}^{4}\) & 25 & 4.1 & 1.7 & 38 & 69 & 17.2 \\
2313 & 2 & 2.4\(\times\) 10\({}^{5}\) & 27 & 4.0 & 1.4 & 39 & 73 & 16.5 \\
2313 & 3 & 3.1\(\times\) 10\({}^{5}\) & 27 & 4.1 & 1.4 & 38 & 77 & 16.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 10.0 in 2D with regular angular adaptivity with a refinement tolerance of 0.001, a maximum of 3 levels of angular refinement and 3 adapt steps. The WUs listed are scaled by the nnzs in the _adapted_ solve at each step. The cycle and operator complexity listed are for AIRG on M\({}_{\Omega}\) with CF splitting by Falgout-CLJP.
cost of all five adapt steps and scale by the nnzs in a uniform level 3 discretisation, we find the total work has reduced from 72 WUs to 54. [4] showed that it costs 70 WUs to solve a uniform level 3 discretisation, so our adapt process with a reduced tolerance solve beats the uniform equivalent. This cost saving only increases as the number of adapt steps is increased; previously in [1; 2; 3] we typically needed a higher number of adapt steps than five in order to beat a uniform discretisation.
We see similar results with scattering, with Table 6 showing a substantial reduction in the number of iterations for the first two adapt steps and Fig. 9 showing that the resulting adapted discretisation is almost identical to that in Fig. 6h. If we compare the total cost of all three adapt steps and scale by the FLOPs required to compute the matrix-free matvec for the uniform level 3 angular discretisation, the total work is reduced from 26 WUs to 17 with the reduced tolerance solves. In both cases this is less than that required to solve the uniform level 3 angular discretisation as [4] showed this costs 41 WUs.
## 6 Conclusions
In this work we presented an adaptive angular discretisation, with nested hierarchical P\({}^{0}\) angular elements that can be mapped to the Haar wavelet space we presented previously [1]. Once adapted, the angular matrices required between two nodes with different angular discretisations are no longer block diagonal and hence we introduced a modified stabilisation term for use with our low-memory sub-grid scale FEM discretisation that ensured the nnzs in our adapted matrices didn't grow considerably with adaptivity. We found in both pure streaming and scattering problems that the number of nnzs grew 2-3% above that expected from the number of adapted DOFs. This meant we could form the streaming/removal operator scalably with angular P\({}^{0}\) adaptivity and hence use AIRG and the additively preconditioned iterative method we developed [4]. The results from this showed that we can solve our adapted problems in both streaming and scattering problems with very close to fixed work and memory. Our methods don't rely on Gauss-Seidel methods/sweeps, block-diagonal, lower triangular structure or diagonal/block scaling of our matrices and can be applied to many different discretisations.
Given our P\({}^{0}\) discretisation is equivalent to a Haar wavelet discretisation, with adaptivity we mapped our solution to Haar space (in \(\mathcal{O}(n)\)) and hence tagged the angular elements that require angular refinement/de-refinement. As such we introduced the ability to robustly build up our adapted discretisation with regular adaptivity with reduced cost. We achieve this by mapping the coarse solution, \(\mathbf{\Phi}\), to Haar space and testing the relative convergence of each wavelet
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline CG nodes & Adapt step. & NDOFs & \(n_{\text{its}}\) & CC & Op Complx & WUs\({}^{\text{mf}}\) & WUs\({}^{\text{DG}}\) & Memory \\ \hline
2313 & 1 & 6.3\(\times\) 10\({}^{4}\) & 2 & 4.1 & 1.7 & 4 & 7.6 & 17.2 \\
2313 & 2 & 2.4\(\times\) 10\({}^{5}\) & 10 & 4.0 & 1.4 & 16 & 28.8 & 16.2 \\
2313 & 3 & 3.1\(\times\) 10\({}^{5}\) & 25 & 4.1 & 1.4 & 36 & 72 & 16.3 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 10.0 in 2D with regular angular adaptivity with a refinement tolerance of 0.001, a maximum of 3 levels of angular refinement and 3 adapt steps. The WUs listed are scaled by the nnzs in the _adapted_ solve at each step. The cycle and operator complexity listed are for AIRG on M\({}_{\Omega}\) with CF splitting by Falgout-CLJP.
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline CG nodes & Adapt step. & NDOFs & \(n_{\text{its}}\) & CC & Op Complx & WUs\({}^{\text{full}}\) & WUs\({}^{\text{DG}}\) & Memory \\ \hline
2313 & 1 & 6.3\(\times\) 10\({}^{4}\) & 2 & 4.4 & 2.87 & 22 & 5.3 & 10.2 \\
2313 & 2 & 9.2\(\times\) 10\({}^{4}\) & 6 & 4.7 & 3.3 & 47 & 11.3 & 10.6 \\
2313 & 3 & 2.2\(\times\) 10\({}^{5}\) & 7 & 5.0 & 3.6 & 56 & 13.4 & 11.1 \\
2313 & 4 & 2.8\(\times\) 10\({}^{5}\) & 7 & 4.6 & 3.3 & 53 & 12.6 & 10.6 \\
2313 & 5 & 3.1\(\times\) 10\({}^{5}\) & 10 & 4.5 & 3.2 & 66 & 15.9 & 10.4 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results from using AIRG on a pure streaming problem in 2D with CF splitting by the _hypre_ implementation of Falgout-CLJP with a strong threshold of 0.2, drop tolerance on \(\mathbf{A}\) of 0.0075 and \(\mathbf{R}\) of 0.025, with regular angular adaptivity with a refinement tolerance of 0.001, a maximum of 3 levels of angular refinement, 5 adapt steps and with adapt steps prior to the final step solved with wavelet-based reduced tolerance to determine convergence. The WUs listed are scaled by the nnzs in the _adapted_ solve at each step.
coefficient at each iteration of our iterative methods. This resulted in an adapted discretisation very similar to that which would normally be produced and a large reduction in the iteration count of every adapt step prior to solving the final adapted discretisation. In a simple box test problem this reduced the cross-over point of when our adaptive process beats a uniform discretisation, down to only 5 steps with streaming or 3 with scattering. This shows the benefit of forming a P\({}^{0}\) discretisation hierarchically that can be mapped to an equivalent wavelet space, even when the solve occurs in P\({}^{0}\) space; our refinement/de-refinement is simple and we can use this to robustly reduce the cost of our adapt process with regular adaptivity.
We also presented a CF splitting algorithm based on element agglomeration that could use the adapted angular discretisation at each spatial point to determine "important" directions and hence produce a semi-coarsening in the streaming limit without needing the matrix entries. We found this performed worse than typical CF splitting algorithms like Falgout-CLJP, when used to invert the streaming operator, but when used with AIRG to invert the streaming/removal operator with a large total cross-section this method scaled similarly to Falgout-CLJP, but with approximately twice the work in the solve. Given it is simple to freeze the element agglomeration after a few adapt steps, it may work out cheaper overall to use an element-agglomeration CF splitting in scattering problems with many adapt steps, but we leave examining this for future work.
Overall the combination of Falgout-CLJP CF splitting, AIRG and our additively preconditioned iterative method resulted in close to scalable work in the solve in both the streaming and scattering limit with angular adaptivity for the Boltzmann transport problem. In previous work [1] we found that in the streaming limit the iteration count of a matrix-free multigrid method solving in Haar space tripled after only 3 adapt steps in angle; here we found the iteration count of our new method only went from 9 to 12 after 3 adapt steps. We believe this makes our iterative method an attractive choice for solving adapted discretisations of the Boltzmann transport equation. Future work will involve building an optimised version of this method in order to compare the cross-over point where our P\({}^{0}\) adaptivity results in a lower runtime than a uniform discretisation, re-using components built in prior adapt steps in order to reduce setup times and examining the performance in parallel with load-balancing.
## Acknowledgments
The authors would like to acknowledge the support of the EPSRC through the funding of the EPSRC grants EP/R029423/1 and EP/T000414/1.
|
2304.05876 | Markov chains applied to Parrondo's paradox: The coin tossing problem | Parrondo's paradox was introduced by Juan Parrondo in 1996. In game theory,
this paradox is described as: A combination of losing strategies becomes a
winning strategy. At first glance, this paradox is quite surprising, but we can
easily explain it by using simulations and mathematical arguments. Indeed, we
first consider some examples with the Parrondo's paradox and, using the
software R, we simulate one of them, the coin tossing. Actually, we see that
specific combinations of losing games become a winning game. Moreover, even a
random combination of these two losing games leads to a winning game. Later, we
introduce the major definitions and theorems over Markov chains to study our
Parrondo's paradox applied to the coin tossing problem. In particular, we
represent our Parrondo's game as a Markov chain and we find its stationary
distribution. In that way, we exhibit that our combination of two losing games
is truly a winning combination. We also deliberate possible applications of the
paradox in some fields such as ecology, biology, finance or reliability theory. | Xavier Molinero, Camille Mègnien | 2023-04-12T14:16:05Z | http://arxiv.org/abs/2304.05876v1 | # Markov Chains Applied to Parrondo's Paradox: The Coin Tossing Problem
###### Abstract
Parrondo's paradox was introduced by Juan Parrondo in 1996. In game theory, this paradox is described as: A combination of losing strategies becomes a winning strategy. At first glance, this paradox is quite surprising, but we can easily explain it by using simulations and mathematical arguments. Indeed, we first consider some examples with the Parrondo's paradox and, using the software R, we simulate one of them, the coin tossing. Actually, we see that specific combinations of losing games become a winning game. Moreover, even a random combination of these two losing games leads to a winning game. Later, we introduce the major definitions and theorems over Markov chains to study our Parrondo's paradox applied to the coin tossing problem. In particular, we represent our Parrondo's game as a Markov chain and we find its stationary distribution. In that way, we exhibit that our combination of two losing games is truly a winning combination. We also deliberate possible applications of the paradox in some fields such as ecology, biology, finance or reliability theory.
Parrondo's paradox, Markov chain, Engineering Decision Making, Maintenance and Evolution
## 1 Introduction
Since we start playing, we start asking how to win. Game theory is a quite recent discipline that emerged around the 19th century. Game theory is essentially the study of mathematical models of strategic interaction among rational decision-makers. In other words, game theory helps us to find a winning strategy based on _mathematical thinking_. Game theory does not stop at gambling; there is a wide spectrum of applications in social science, biology or computer science. This is why it is an important field of mathematics. In this paper we will focus on a specific paradox of game theory: Parrondo's paradox [1]. This paradox states that there exist two losing games that can be combined into a winning game. At first glance, this paradox seems counterintuitive and we can easily see how it made such noise in the game theory field. Everybody jumped on the opportunity and tried to find revolutionary applications of the paradox, and still does. However, could Parrondo's paradox really revolutionise the way we gamble or invest and so on?
Related work on the background of Parrondo's paradox includes [31, 32], among others.
We present a deep study of this paradox. First, we define Parrondo's paradox and see some examples of it to gain a good first understanding. Section 3 simulates one of those examples and tries different combinations of the considered games. Later we introduce some definitions and results about Markov chains [2] to help us with the exhaustive study of our specific example. Section 5 applies the given concepts of Markov chains to
Parrondo's paradox for coin tossing. We also go over the many possible applications of the paradox. Next, we present our discussion and conclusions in Section 7. Finally, we describe the materials and methods used.
## 2 What is Parrondo's paradox: A few examples
First, we establish the formal concept of Parrondo's paradox and then consider some known examples.
**Definition 1.** Parrondo's paradox is defined as follows: there exist two losing games such that specific combinations of these two games lead to a winning game.
Here we introduce some known examples of those games.
**Example 1: A simple coin game [3,4].** In Game A, you simply lose one euro every time you play. In Game B, you count how much money you have left. If it is an even number, you win 3 euros. Otherwise, you lose 5 euros.
Suppose you begin with 100 euros in your pocket. If you start playing Game A exclusively, you will lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.
However, consider playing the games alternately, starting with Game B, followed by A, then by B, and so on (BABABA...). Then we will win 2 euros every two games!
Thus, even though each game is a losing proposition if played alone, the sequence in which the games are played can lead to a profit.
**Example 2: Saw-tooth [5,6]**. Imagine an almost vertical rack, which is going down at constant speed. We place a ball on it. The ball is stabilized by the teeth of the rack and thus goes down with it. We say that it is a failure if the ball reaches the ground and a success if it reaches the ceiling. Hence, this rack is a losing game for the ball.
Now suppose we have a second rack. This one is globally going down but alternates between small ascending and descending movements. This game is also losing, as the ball will reach the ground.
However, if we combine those two racks in such a way that the ball will go from one rack to the other at the right time, we could bring the ball to the ceiling. Hence making a winning game from two losing games.
**Example 3: Roulette [7].** Let us consider a game played on a roulette table. The wheel has slots for 0 and the numbers 1 to 36. The zero is coloured green, half of the numbers from 1 to 36 are coloured black, and the other half red. All numbers are equally likely to be chosen. When zero turns up, the casino always wins.
The first game is the following: You bet on red or black and win one euro if that color turns up and lose one euro otherwise. You will win with probability 18/37\(<\)0.5. Hence, this first game is a losing game.
Now the second game is a bit more complicated: If your capital is a multiple of 3 and one of the numbers 1, 2 or 3 turns up, you win one euro, otherwise you lose one euro. You will win with probability 3/37. If your capital is not a multiple of 3 and the outcome is between 1 and 28, you win one euro, otherwise you lose one euro. You will win with probability 28/37. This is a bit more complicated to see, but this is also a losing game.
We can combine those two losing games into a winning one, thus exhibiting Parrondo's paradox.
**Example 4: Coin tossing [8].** This example is similar to the previous one, but we consider coin tossing instead of a roulette table. It is also similar to example 1, but a bit more complex.
We have two games, A and B. Game A consists of flipping a coin. We win one euro if it lands on heads and lose one euro otherwise. However, this is a biased coin and the probability of landing heads is 0.5-\(\alpha\), where \(\alpha>\)0. Hence, the probability of losing is 0.5+\(\alpha\). We will consider small values of alpha (\(\alpha\)<0.1). Clearly, this game is not fair.
Let us look at the second game: Game B is a bit more complex. There are two different coins, coin 1 and coin 2. The first coin is really biased and lands heads with probability 0.1-\(\alpha\) and tails with probability 0.9+\(\alpha\). The second coin lands heads with probability 0.75-\(\alpha\) and tails with probability 0.25+\(\alpha\). Since we will consider small \(\alpha\), the second coin is preferable. We will choose which coin to toss with a specific rule: if the current gain of the player is a multiple of M (for M an integer) we play coin 1, otherwise we play coin 2. Note that we have the same principle as game A: if either coin lands heads we win one euro, otherwise we lose one euro. This game is also a losing game.
Now a strange thing happens: if we combine those two games, we can get a winning game! Clearly, if we play game A when the accumulated gain is a multiple of 3 and game B otherwise, we will be winning (since we will often play coin 2). But if we combine those two games at random, we also get a winning one.
We mostly work on this last example (Example 4: Coin tossing). First, we simulate game A and game B. Then we will consider different combinations of those two games.
## 3 Simulation of Example 4: Coin tossing
First, we are going to check whether games A and B are actually losing games. We simulate those games using the software R (you can find more information on R at [9]), using a 2.3 GHz Intel Core i5 dual core processor. Each chunk of the code runs in polynomial time and we have a total running time of 37.46 seconds. You can refer to the authors to get the code implementation in detail. For the simulation we chose \(\alpha\)=0.005 and M=3. We decided to simulate 50,000 plays, as this is enough to see the results clearly. Indeed, Figure 7 shows a clear trend with that many plays.
### Game A
Remember game A consisted of tossing a coin. If the coin lands on head we win one euro, otherwise we lose one euro. The probability of landing head is 0.495 and the probability of landing tail is 0.505. Figure 1 shows the simulation of the profit for game A for 50,000 plays.
### Game B
Remember game B consisted of the following: we have two coins 1 and 2. If the current profit is a multiple of 3, i.e., M=3, we toss coin 1, otherwise we toss coin 2. Now we win one euro if the coin lands head, we lose one euro otherwise. The probability for coin 1 to land head is 0.095. The probability for coin 2 to land head is 0.745. Figure 2 presents the simulation of the profit for game B for 50,000 plays. We can clearly see that both game A and B are losing in the long run.
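A minimal sketch of the two games is given below; the authors' R implementation is not reproduced in the paper, so this fragment is our own illustrative version, assuming \(\alpha=0.005\), M=3 and a starting capital of 0 euros.

```r
alpha <- 0.005
M <- 3
n_plays <- 50000

# One play of game A: win 1 euro with probability 0.5 - alpha, lose 1 euro otherwise.
play_A <- function(gain) {
  if (runif(1) < 0.5 - alpha) 1 else -1
}

# One play of game B: toss coin 1 if the current gain is a multiple of M, coin 2 otherwise.
play_B <- function(gain) {
  p_heads <- if (gain %% M == 0) 0.1 - alpha else 0.75 - alpha
  if (runif(1) < p_heads) 1 else -1
}

# Accumulated profit over n plays of a given game, starting from 0 euros.
simulate_game <- function(play, n = n_plays) {
  gain <- numeric(n + 1)
  for (i in 1:n) gain[i + 1] <- gain[i] + play(gain[i])
  gain
}

profit_A <- simulate_game(play_A)  # downward trend, as in Figure 1
profit_B <- simulate_game(play_B)  # downward trend, as in Figure 2
```

Plotting `profit_A` and `profit_B` against the play number should reproduce the qualitative downward trends of Figures 1 and 2.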
Next, we consider some combinations of the games A and B to see if we can get a winning combination.
### Game ABABAB
Consider the combination "ABABABAB" repeatedly, i.e. we play game A, then game B, then game A and so on. The simulation gives us the results shown in Figure 3. The combination ABABAB is clearly a losing combination in the long run.
### Game AABAABAAB
We play the combination "AABAAB" repeatedly. Now a surprising thing happens (see Figure 4): We have a clear profit in the long run if we play the combination "AABAAB" repeatedly. Indeed, we can see that with that combination we will often play game B when the current gain is not a multiple of M, and thus we will use the (winning) coin 2.
### Game BBBABBBA
Simulating the combination "BBBABBBA" repeatedly gives us a losing combination (see Figure 5).
### Game ABBABB
Figure 6 shows the case of playing "ABBABB" repeatedly. This combination is a winning one.
### Game random combination
Now, let us try to randomly chose a game at each turn (we chose game A or B with equal probability). We will simulate four random combinations shown in Figure 7.
This result is very surprising; by combining at random those games, we get a profit!
Let us look at the more profitable combination, we will make choices in terms of the actual gain, this is the best possible combination, but it requires choosing a game at each step in terms of our current profit. If our profit is a multiple of M we will choose game A otherwise game B.
### Game knowing current gain
Now we are going to choose which game to play in terms of our current profit. We will make the most profitable choice, i.e., at each round we will toss the best coin. Hence, if our profit is a multiple of 3, we will choose to play game A since we will toss the coin with probability of winning of 0.495. If our profit is not a multiple of 3, we will choose to play game B, as we will toss the most profitable coin: coin 2 (probability of winning of 0.745). In that way, we will never toss the coin 1, which is the worst coin (probability of winning of 0.095). Figure 8 shows such profit over 50,000 plays.
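Reusing the hypothetical helpers from the earlier sketch, the random combination and the gain-dependent choice can be simulated in the same way (again, this is our own illustrative R code, not the authors'):

```r
# Random combination: at each turn play A or B with equal probability.
play_random <- function(gain) {
  if (runif(1) < 0.5) play_A(gain) else play_B(gain)
}

# Gain-dependent choice: play A whenever the gain is a multiple of M
# (avoiding the bad coin 1 of game B), and B otherwise.
play_best <- function(gain) {
  if (gain %% M == 0) play_A(gain) else play_B(gain)
}

profit_random <- simulate_game(play_random)  # winning on average, cf. Figure 7
profit_best   <- simulate_game(play_best)    # the most profitable combination, cf. Figure 8
```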
### Comparing results
Figure 9 plots previous simulations together, and Figure 10 plots all results without the game "knowing the current gain" to see clearly the profits.
During this simulation we considered the game where we could choose which game to play in terms of our current gain. In that way, every time our profit was a multiple of M we avoided game B, since it would lead to coin 1, which is the worst possible coin in the game. We considered this game as a reference, but we are not going to study it, since we prefer games where the strategy is known beforehand and does not change at each step.
Hence, if we look at the last graph, we can see the profit over 50,000 plays of the games A, B, a random alternation of the two games and specific combinations of the two.
First, we see that indeed, game A and B are losing games on their own. We will compute later the expectation of those games.
Then we see that not all combinations of games A and B are winning ones; in fact, the combinations "ABABAB" and "BBBABBBA" are losing combinations, and the combinations "AABAAB" and "ABBABB" are winning. We know that the best combination is the one we described above, choosing which game to play in terms of the current gain. Hence, the closer a specific combination is to this choice, the greater the profit. This is why we get such a high profit for the combination "ABBABB"; it is really close to the one with choices according to profit. Indeed, we can see that we will often play coin 2 of game B.
Now, an interesting thing is that if we choose at random between game A and B, we get a winning game! This result is the most intriguing one and we will work on the math behind it to understand how it can be.
In conclusion, we see that from the two losing games A and B we can get a winning combination. Moreover, a random combination of those two games is also winning.
## 4 Markov Chains
We have seen in detail one example of Parrondo's paradox and simulated it. We have seen that this paradox happens, but now we intend to study how it works. We are going to look at the mathematics behind it to better understand this paradox; for that we will use the theory of Markov chains. The following is based mostly on the references [2, 10, 11, 12, 13, 14]. The interested reader can also find some applications of Markov chains, related to random walks and examples similar to those we consider here, in references [15, 16].
### Introduction to Markov Chains
First, we introduce the Markov chain definition.
**Definition 2**.: Markov Chain [14]. Given a stochastic process, i.e. a collection of random variables \(\mathbf{X}=\{X_{n}:n\geq 0\}\) taking values in a state space \(S=\{s_{1},\dots,s_{r}\}\), we say that \(\mathbf{X}\) is a Markov chain if for all \(n\) and all states \(s_{i},s_{j}\in S\),
\[\mathbb{P}(X_{n+1}=s_{j}\mid X_{n}=s_{i},X_{n-1},\dots,X_{0})=\mathbb{P}(X_{n+1}=s_{j}\mid X_{n}=s_{i})=p_{ij},\]
that is, the probability of moving to the next state depends only on the present state and not on the past states. The matrix \(\mathbf{P}=(p_{ij})\) of one-step transition probabilities is called the transition matrix of the chain.
**Example 6**.: Coming back to our previous example we have \(\mathbf{S}=\{\mathbf{N},\mathbf{G},\mathbf{B}\}\).
The initial probability distribution is \(q=(0,0.5,0.5)\).
The transition matrix is \(\mathbf{P}\), where the rows represent the action of the present day and the columns the ones of tomorrow.
### Computation with Markov chain
Now we are interested in the probability that, given the chain is in state \(i\) today, it will be in state \(j\) two days from now. We denote this probability by \(p_{ij}^{(2)}\).
**Example 7**.: In the previous example, we see that if the customer did not go to the bookstore today, then the event that he will buy a book two days from now is the disjoint union of the events:
* A= {The customer does not go into the bookstore tomorrow and he buys a book in two days}
* B= {The customer goes into the store tomorrow and buys a book the day after}
* C= {The customer buys a book tomorrow and same in two days from now.}
\(\mathbb{P}(A)=\mathbb{P}\big{(}\)"_The customer does not go into the bookstore tomorrow_" \(\big{|}\)"_He doesn't go to the store today_"\(\big{)}\cdot\mathbb{P}(\)"_The customer buys a book two days from now_" \(\big{|}\)"_He doesn't go into the store tomorrow_"\()\).
Thus:
\(\mathbb{P}(A)=p_{11}p_{13}\)
\(\mathbb{P}(B)=p_{12}p_{23}\)
\(\mathbb{P}(C)=p_{13}p_{33}\)
and
\(p_{13}^{(2)}=\mathbb{P}(A)+\mathbb{P}(B)+\mathbb{P}(C)=p_{11}p_{13}+p_{12}p_{23}+p_{13}p_{33}\)
Now, we realize that this equation is expressed as the dot product between the first row and the third column of \(\mathbf{P}\). In general, if we suppose there is r states we have:
\(p_{ij}^{(2)}=\sum_{k=1}^{r}p_{ik}p_{kj}\).
**Theorem 1 [12]**.: Let \(\mathbf{P}\) be the transition matrix of a Markov chain. The ij-th entry of the matrix \(\mathbf{P}^{n}\) gives the probability that the Markov chain, starting in state \(s_{i}\), will be in state \(s_{j}\) after n steps; i.e. \(p_{ij}^{(n)}=(\mathbf{P}^{n})_{ij}\).
**Proof of Theorem 1.** Remember the matrix multiplication formula:
\((A*B)_{ij}=\sum_{k=1}^{r}(A)_{ik}*(B)_{kj}\).
By induction, we have the following.
Initial step: \(p_{ij}^{(2)}=\sum_{k=1}^{r}p_{ik}p_{kj}=\sum_{k=1}^{r}(\mathbf{P})_{ik}(\mathbf{P})_{kj}=(\mathbf{P}*\mathbf{P})_{ij}=(\mathbf{P}^{2})_{ij}\), by the previous example, which corresponds to the ij-th entry of the matrix \(\mathbf{P}^{2}\). Heredity: Suppose the theorem is true for n, let us show it for n+1.
Remark that getting from state \(s_{i}\) to state \(s_{j}\) in \(n+1\) steps is the same as going from state \(s_{i}\) to an intermediate state \(s_{k}\) in n steps and then going from state \(s_{k}\) to state \(s_{j}\) in one step (this corresponds to \(p_{ik}^{(n)}*p_{kj}\)), and this for all k. Hence by the law of total probabilities:
\(p_{ij}^{(n+1)}=\sum_{k=1}^{r}p_{ik}^{(n)}*p_{kj}\)
\(=\sum_{k=1}^{r}(\mathbf{P}^{n})_{ik}(\mathbf{P})_{kj}\) (by induction hypothesis)
\(=(\mathbf{P}^{n}*\mathbf{P})_{ij}\quad=(\mathbf{P}^{n+1})_{ij}\)
It verifies the heredity and, thus, the theorem is true for any values of n.
**Theorem 2 [12]**. Let \(\mathbf{P}\) be the transition matrix of a Markov chain, and let \(\mathbf{q}\) be the probability vector which represents the starting distribution. Then the probability that the chain is in state \(s_{i}\) after n steps is the i-th entry in the vector \(\boldsymbol{q}_{n}=\boldsymbol{q}*\boldsymbol{P}^{n}\), i.e., \(\mathbb{P}(X_{n}=s_{i})=(\boldsymbol{q}\boldsymbol{P}^{n})_{i}\)
**Proof of Theorem 2.** The proof of the theorem is done by induction: Initialization step for n=1: \(\mathbb{P}(X_{\texttt{1}}=s_{\texttt{i}})=\sum_{\texttt{k=1}}^{\tau} \mathbb{P}(X_{\texttt{0}}=s_{\texttt{k}})\mathbb{P}(X_{\texttt{1}}=s_{ \texttt{i}}|X_{\texttt{0}}=s_{\texttt{k}})\), by the law of total probabilities.
Note that \(\mathbb{P}(X_{\texttt{0}}=s_{\texttt{k}})=(\boldsymbol{q})_{\texttt{k}}\) and \(\mathbb{P}(X_{\texttt{1}}=s_{\texttt{i}}|X_{\texttt{0}}=s_{\texttt{k}})=( \boldsymbol{P})_{\texttt{k}\texttt{i}}\) by definition.
Hence, \(\mathbb{P}(X_{1}=s_{i})=\sum_{k=1}^{r}(\boldsymbol{q})_{k}(\boldsymbol{P})_{ki}=(\boldsymbol{q}\boldsymbol{P})_{i}\)
Therefore, the theorem is true for n=1.
Heredity: Suppose the theorem is true for n and let us show it for n+1:
\[\mathbb{P}(X_{\texttt{n+1}}=s_{\texttt{i}})=\sum_{\texttt{k=1}}^{\tau} \mathbb{P}(X_{\texttt{n}}=s_{\texttt{k}})\mathbb{P}(X_{\texttt{n+1}}=s_{ \texttt{i}}|X_{\texttt{n}}=s_{\texttt{k}})\]
Note again that \(\mathbb{P}(X_{\texttt{n}}=s_{\texttt{k}})=(\boldsymbol{q}\boldsymbol{P}^{ \texttt{n}})_{\texttt{k}}\) by induction hypothesis. So, \(\mathbb{P}(X_{\texttt{n+1}}=s_{\texttt{i}}|X_{\texttt{n}}=s_{\texttt{k}})=( \boldsymbol{P})_{\texttt{k}\texttt{i}}\) by Theorem 1. Hence,
\[\mathbb{P}(X_{\texttt{n+1}}=s_{\texttt{i}})=\sum_{\texttt{k=1}}^{\tau}( \boldsymbol{q}\boldsymbol{P}^{\texttt{n}})_{\texttt{k}}(\boldsymbol{P})_{ \texttt{k}\texttt{i}}=(\boldsymbol{q}\boldsymbol{P}^{\texttt{n}}*\boldsymbol{P} )_{\texttt{i}}=(\boldsymbol{q}\boldsymbol{P}^{\texttt{n+1}})_{\texttt{i}}\]
Hence, the theorem is true for any n.
**Example 8**.: Coming back to our example, we can now compute the probability of each state for the second day (n=1) for our customer:
\(\boldsymbol{q}_{1}=\boldsymbol{q}\boldsymbol{P}\).
Recall that a Markov chain is called irreducible if it is possible to get from any state to any other state. Observe that if we represent our Markov chain as a graph, the chain is irreducible if and only if the graph is strongly connected.
**Example 9**.: Our example about the bookstore (Example 5) is clearly an irreducible Markov chain. We can see it by looking at the graph: it is strongly connected. We also can go from any node to any other node.
**Definition 6**.: **Regular Markov chain [12]**. A Markov chain is call regular if some power of the transition matrix has only positive elements.
Note that in this paper the term "regular" will be used to refer to the preceding definition and never for an invertible matrix.
Intuitively, we can say that a Markov chain is regular if it is possible to go from any state to any state in exactly n steps, for some fixed n. We see clearly that every regular chain is irreducible. However, the other way around is not true; see the following example.
**Example 10**.: Let P be the transition matrix of a Markov chain:
\[\textbf{P}=\begin{bmatrix}\textbf{0}&\textbf{1}\\ \textbf{1}&\textbf{0}\end{bmatrix}\]
Clearly, the chain is irreducible. Figure 12 shows this example as a graph.
But the chain is not regular. Suppose that n is odd; then it is not possible to go from state 1 to state 1 in n steps. If n is even, it is not possible either to go from state 1 to state 2 in n steps.
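This can also be checked numerically; the short R fragment below (ours, for illustration, not from [12]) shows that the powers of \(\mathbf{P}\) simply alternate between \(\mathbf{P}\) and the identity, so no power has all entries positive:

```r
P <- matrix(c(0, 1,
              1, 0), nrow = 2, byrow = TRUE)

Pn <- P
for (n in 1:6) {
  cat("n =", n, " all entries positive:", all(Pn > 0), "\n")  # always FALSE
  Pn <- Pn %*% P  # the powers alternate between P and the identity
}
```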
**Example 11**.: Is our bookstore example (Example 5) a regular chain? Yes!
Indeed, we can see that it is possible to go from any state to any other state, in two steps. Another way to see it is to compute \(\textbf{P}^{2}\) and see that all entries are positive:
\[\textbf{P}^{2}=\begin{bmatrix}\textbf{0}.\textbf{1}\textbf{4}\textbf{5}& \textbf{0}.\textbf{4}\textbf{5}\textbf{7}\textbf{5}&\textbf{0}.\textbf{3} \textbf{9}\textbf{7}\textbf{5}\\ \textbf{0}.\textbf{1}\textbf{6}\textbf{5}&\textbf{0}.\textbf{4}\textbf{1} \textbf{5}&\textbf{0}.\textbf{4}\textbf{2}\\ \textbf{0}.\textbf{1}\textbf{9}\textbf{4}\textbf{7}&\textbf{0}.\textbf{4} \textbf{4}\textbf{2}\textbf{2}&\textbf{0}.\textbf{3}\textbf{6}\textbf{3} \textbf{1}\end{bmatrix}.\]
**Theorem 3**.: **Fundamental limit theorem for Regular chain [12]**. Let **P** be the transition matrix for a regular chain, then, as n goes to infinity, the powers \(\textbf{P}^{n}\) approach a limiting matrix **W** with all rows the same vector **w**. The vector **w** is a strictly positive probability vector (i.e. the components are all positive and they sum to one).
In order to prove this Theorem, we first need the following lemma:
**Lemma 1**. Let \(\mathbf{P}\) be a transition matrix of dimensions \(r\times r\), with no zero entries. Let d be the smallest entry of the matrix. Let \(\mathbf{y}\) be a column vector with r components, let \(M_{0}\) be the largest of those components and \(m_{0}\) the smallest. Let \(M_{1}\) and \(m_{1}\) be the largest and smallest components of the vector **Py**. Then:
\[M_{\textbf{1}}-m_{\textbf{1}}\leq(1-2d)(M_{\textbf{0}}-m_{\textbf{0}})\]
Figure 12: Representation of Example 10 as a graph
**Proof of Lemma 1.** First, let us understand what this lemma is saying; if an \(r\times r\) transition matrix has no zero entries, and \(y\) is any column vector with \(r\) entries, then the vector \(Py\) has entries which are "closer together" than the entries are in \(y\).
First, note that since each row of \(P\) is a probability vector, \(Py\) replaces \(y\) by averages of its components (with different weights).
The largest weighted average that could be obtained in the present case would occur if all but one of the entries of \(y\) have value \(M_{0}\) and the one left entry has value \(m_{0}\), and this one small entry is weighted by the smallest possible weight, namely \(d\). In this case, we will obtain the weighted average: \(dm_{0}+(1-d)M_{0}\).
The smallest weighted average would be obtained if all the entries of \(y\) except one have values \(m_{0}\), and the one left has value \(M_{0}\), and \(M_{0}\) is weighted by \(d\). We then obtain the average:
\(dM_{0}+(1-d)m_{0}\)
Thus,
\(M_{1}\leq\mathit{dm}_{0}+(1-d)M_{0}\) and \(m_{1}\geq\mathit{dM}_{0}+(1-d)m_{0}\).
\(M_{1}-m_{1}\leq\mathit{dm}_{0}+(1-d)M_{0}-(\mathit{dM}_{0}+(1-d)m_{0})=(M_{0}- m_{0})(1-2d)\).
Next, we are ready to prove Theorem 3.
**Proof of Theorem 3: The fundamental limit theorem for regular chain.** We will first review the proof of the theorem for the case where \(P\) has no zero entries. Let \(y\) be an arbitrary \(r\)-column vector, where \(r\) is the number of states of the chain. We assume that \(r>1\). Otherwise it is trivial. Once again, let \(M_{m}\) and \(m_{m}\) be the maximum and minimum components of the vector \(P^{m}y\). The vector \(P^{m}y\) is obtained from the vector \(P^{m-1}y\) by multiplying on the left by \(P\). As seen in the proof of the lemma each component of \(P^{m}y\) is an average of the components of \(P^{m-1}y\). Thus \(M_{0}\geq M_{1}\geq...\) and \(m_{0}\leq m_{1}\leq...\)
Hence, each sequence is monotone and moreover bounded: \(m_{0}\leq m_{m}\leq M_{m}\leq M_{0}\). By the monotone theorem, each of these sequences will converge.
Let \(M\) be the limit of \(M_{m}\) and \(m\) the limit of \(m_{m}\). We know that \(m\leq M\). Now, we want to show that \(M-m=0\). This will be the case if \(M_{m}-m_{m}\) tends to zero.
Let \(d\) be the smallest element of \(P\). Since all entries of \(P\) are strictly positive, we have \(d>0\). By our lemma we have:
\(M_{m}-m_{m}\leq(1-2d)(M_{m-1}-m_{m-1})\leq(1-2d)^{2}(M_{m-2}-m_{m-2})\leq\dots\leq(1-2d)^{m}(M_{0}-m_{0})\)
Since \(r\geq 2\), we must have \(d\leq 1/2\), so \(0\leq 1-2d<1\). Hence:
\(0\leq M_{m}-m_{m}\leq(1-2d)^{m}(M_{0}-m_{0})\)
\(0\leq\lim_{m\to\infty}(M_{m}-m_{m})\leq\lim_{m\to\infty}(1-2d)^{m}(M_{0}-m_{0})=0\)
By the squeeze theorem: \(\lim_{m\to\infty}(M_{m}-m_{m})=0\).
Since any component of \(P^{m}y\) lies between \(M_{m}\) and \(m_{m}\), each component must approach the same number \(l=M=m\). This shows that \(\lim_{m\to\infty}P^{m}y=L\), where \(L\) is the column vector such that \(L_{i}=l\ \forall i\in\{1,...,r\}\).
Now let \(y\) be the vector with j-th component equal to 1 and all other components equal to zero. Then \(P^{m}y\) is the j-th column of \(P^{m}\). We do this for each j, and we can see that the columns of \(P^{m}\) approach constant column vectors. That is, the rows of \(P^{m}\) approach a common row vector \(w\), i.e. \(\lim_{m\to\infty}P^{m}=W\)
We are left to show that all entries in \(W\) are strictly positive. Let \(y\) be the vector with j-th component equal to 1 and all others equal to zero. Then \(Py\) is the j-th column of \(P\), and this column has all entries strictly positive (by hypothesis). The minimum component of the vector \(Py\) was defined to be \(m_{1}\), hence \(m_{1}>0\). Since \(m_{1}\leq m\), we have \(m>0\). Note finally that this value of \(m\) is just the j-th component of \(w\), so all components of \(w\) are strictly positive.
The following Theorem give us the probability vector for Markov chains.
**Theorem 4 [12]**. Let \(\mathbf{P}\) be a regular transition matrix and \(\boldsymbol{W=\underset{\mathbf{m\rightarrow\infty}}{lim}\,P^{\mathbf{m}}}\). Let \(\mathbf{w}\) be the common row of \(\mathbf{W}\), and let \(\mathbf{c}\) be the column vector all of whose components are 1. Then:
* \(\mathbf{wP}\)=\(\mathbf{w}\) and any row vector \(\mathbf{v}\) such that \(\mathbf{vP}\)=\(\mathbf{v}\) is a constant multiple of \(\mathbf{w}\).
* \(\mathbf{Pc}\)=\(\mathbf{c}\) and any column vector \(\mathbf{x}\) such that \(\mathbf{Px}\)=\(\mathbf{x}\) is a multiple of \(\mathbf{c}\).
**Proof of Theorem 4.** We have \(\boldsymbol{P}^{m}\rightarrow\boldsymbol{W}\). Thus, \(\boldsymbol{P}^{m+1}=\boldsymbol{P}^{m}\boldsymbol{P}\rightarrow\boldsymbol{W}\boldsymbol{P}\). But \(\boldsymbol{P}^{m+1}\rightarrow\boldsymbol{W}\), hence \(\boldsymbol{W}\boldsymbol{P}=\boldsymbol{W}\).
Now, let \(\mathbf{v}\) be such that \(\boldsymbol{vP}=\boldsymbol{v}\). Then \(\boldsymbol{vP^{2}}=\boldsymbol{vP}=\boldsymbol{v}\), and so on: \(\boldsymbol{vP^{\mathbf{m}}=\boldsymbol{v}}\). Taking the limit on both sides, we get: \(\boldsymbol{vW}=\boldsymbol{v}\).
Let \(s\) be the sum of the components of \(\mathbf{v}\). Then \(\boldsymbol{vW}=[\sum_{i}v_{i}w_{1}\ \ldots\ \sum_{i}v_{i}w_{r}]=(\sum_{i}v_{i})[w_{1}\ \ldots\ w_{r}]=s\boldsymbol{w}\). So, \(\boldsymbol{v}=s\boldsymbol{w}\).
For the second part, we proceed the same way and obtain that \(\boldsymbol{x}=\boldsymbol{W}\boldsymbol{x}\); by using the fact that all rows of \(\mathbf{W}\) are the same and that \(c_{i}=1\), we get that \(\mathbf{x}\) is a multiple of \(\mathbf{c}\).
**Corollary 1**. There is only one probability vector \(\mathbf{v}\) such that \(\mathbf{vP}\)=\(\mathbf{v}\).
Definition 7. Fixed row vector and fixed column vector [12]. _A row vector such that \(\boldsymbol{wP}\)=\(\boldsymbol{w}\) is called a fixed row vector for \(\boldsymbol{P}\). Similarly, a column vector \(\boldsymbol{x}\) such that \(\boldsymbol{Px}\)=\(\boldsymbol{x}\) is called a fixed column vector for \(\boldsymbol{P}\)._
**Theorem 5 [12]**. Let \(\mathbf{P}\) be the transition matrix for a regular chain and \(\mathbf{v}\) an arbitrary probability vector. Then \(\underset{\mathbf{n\rightarrow\infty}}{lim}\,\boldsymbol{vP^{\mathbf{m}}}= \boldsymbol{w}\), where \(\mathbf{w}\) is the unique fixed probability vector for \(\mathbf{P}\).
**Proof of Theorem 5.** By Theorem 3, \(\lim_{n\to\infty}P^{n}=\boldsymbol{W}\), thus \(\lim_{n\to\infty}\boldsymbol{v}\boldsymbol{P}^{n}=\boldsymbol{v}\boldsymbol{W}\).
But, since \(\mathbf{v}\) is a probability vector, its entries sum up to 1, and remember that all rows of \(\mathbf{W}\) are equal to \(\mathbf{w}\). Hence, we get: \(\boldsymbol{vW}=[\sum_{i}v_{i}w_{1}\ \ldots\ \sum_{i}v_{i}w_{r}]=(\sum_{i}v_{i})\boldsymbol{w}=\boldsymbol{w}\).
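In practice, the fixed probability vector \(\mathbf{w}\) of a regular chain can be computed by solving the linear system \(\mathbf{wP}=\mathbf{w}\) together with the normalisation \(\sum_{i}w_{i}=1\). The small R function below is our own illustrative sketch of this computation (it is not part of the cited references):

```r
# Fixed (stationary) probability vector w of a regular transition matrix P:
# solve w P = w together with the normalisation sum(w) = 1.
fixed_vector <- function(P) {
  r <- nrow(P)
  A <- t(P) - diag(r)   # w P = w  is equivalent to  (P^T - I) w^T = 0
  A[r, ] <- rep(1, r)   # replace one (redundant) equation by sum(w) = 1
  b <- c(rep(0, r - 1), 1)
  as.vector(solve(A, b))
}
```

This can be used below to double-check the stationary vectors of the transition matrices associated with the Parrondo games.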
**Theorem 6**. Equilibrium **Theorem [12].** For an ergodic (irreducible) Markov chain, there is a unique probability vector \(\mathbf{w}\) such that \(\mathbf{wP}=\mathbf{w}\) and \(\mathbf{w}\) is strictly positive. Any row vector such that \(\mathbf{vP}=\mathbf{v}\) is a multiple of \(\mathbf{w}\). Any column vector \(\mathbf{x}\) such that \(\mathbf{Px}=\mathbf{x}\) is a constant vector.
**Proof of theorem 6.** This theorem is the same as before but for an ergodic chain, not a regular one.
Let \(\mathbf{P}\) be the transition matrix of an ergodic chain.
Let \(\mathbf{P}^{t}=(1/2)\mathbf{I}+(1/2)\mathbf{P}\). This is a regular transition matrix with the same fixed vectors as \(\mathbf{P}\).
First, let us show that the fixed vectors are the same. Suppose \(\mathbf{w}\) is a fixed vector for \(\mathbf{P}\), then \(\boldsymbol{wP}=\boldsymbol{w}\). Hence, \(\boldsymbol{wP^{t}}=(\mathbbm{1}/\mathbbm{2})\boldsymbol{w}\boldsymbol{I}+( \mathbbm{1}/\mathbbm{2})\boldsymbol{wP}=(\mathbbm{1}/\mathbbm{2})\boldsymbol{w}+( \mathbbm{1}/\mathbbm{2})\boldsymbol{w}=\boldsymbol{w}\).
Conversely, if \(\boldsymbol{w}\boldsymbol{P}^{t}=\boldsymbol{w}\), then \((1/2)\boldsymbol{w}+(1/2)\boldsymbol{wP}=\boldsymbol{w}\), so \((1/2)\boldsymbol{wP}=(1/2)\boldsymbol{w}\) and \(\boldsymbol{wP}=\boldsymbol{w}\).
We apply the same argument to the column vector. Since \(\mathbf{P}^{t}\) is regular, Theorem 4 gives a unique, strictly positive fixed probability vector for \(\mathbf{P}^{t}\); as the fixed vectors of \(\mathbf{P}\) and \(\mathbf{P}^{t}\) coincide, this is also the unique fixed probability vector of \(\mathbf{P}\), which proves the theorem.
In this section, we have introduced Markov chains. We also showed how to compute with Markov chains, i.e. how to find the next probability distribution. Finally, and most importantly, we found the equilibrium distribution of a regular Markov chain through the fundamental limit theorem for regular chains. In the next part, we will define our Parrondo's games as finite regular Markov chains in order to apply this last theorem and clearly determine how we can make a winning game out of two losing games.
## 5 Mathematical study of the paradox
Remember our coin tossing example from Section 2 (Example 4). It consisted of two games A and B. In each game we had to toss a coin; heads corresponded to a win of one euro and tails to a loss of one euro, with \(0<\alpha<0.1\).

Game A: probability of heads \(=0.5-\alpha\), tails \(=0.5+\alpha\).

Game B: Coin 1: probability of heads \(=0.1-\alpha\), tails \(=0.9+\alpha\). Coin 2: probability of heads \(=0.75-\alpha\), tails \(=0.25+\alpha\). If the current gain is a multiple of M, we play coin 1; otherwise, we play coin 2.

Game AB: Play game A, then B, then A, then B, and so on.
We have seen through the simulation (Section 3) that game A and B are losing but game AB is a winning one. Now we want to show this mathematically speaking. We will base our analysis mostly on the articles [1, 17, 18] and on the theory on Markov chains.
### Argument for a general \(\alpha\)
Now, let us see in general for which \(\alpha\) games A and B are losing games and game AB is a winning one. In order to do this analysis, we will consider M=3 (for larger or general M the computations get really complex).
#### 5.1.1 Game A
First, concerning game A: we can see that game A consists of independent Bernoulli trials, with probability of success 0.5-\(\alpha\). Clearly, game A will be losing whenever \(\alpha>0\). We can also prove this with Markov chains, considering the transition matrix:
\[P_{A}(\alpha)=\begin{bmatrix}0&0.5-\alpha&0.5+\alpha\\ 0.5+\alpha&0&0.5-\alpha\\ 0.5-\alpha&0.5+\alpha&0\end{bmatrix}\]
Note that \(P_{A}(\alpha)\) is regular:
\[P_{A}^{2}(\alpha)=\begin{bmatrix}2(0.5-\alpha)(0.5+\alpha)&(0.5+\alpha)^{2}& (0.5-\alpha)^{2}\\ (0.5-\alpha)^{2}&2(0.5-\alpha)(0.5+\alpha)&(0.5+\alpha)^{2}\\ (0.5+\alpha)^{2}&(0.5-\alpha)^{2}&2(0.5-\alpha)(0.5+\alpha)\end{bmatrix}\]
Since \(\alpha<0.1\), the entries of \(P_{A}^{2}(\alpha)\) are strictly positive.
We can apply theorems 3 and 5: \(vP_{A}(\alpha)=v\)
\[[v_{1}\quad v_{2}\quad v_{3}]\begin{bmatrix}0&0.5-\alpha&0.5+\alpha\\ 0.5+\alpha&0&0.5-\alpha\\ 0.5-\alpha&0.5+\alpha&0\end{bmatrix}=[v_{1}\quad v_{2}\quad v_{3}]\]
_knowing that \(v_{1}+v_{2}+v_{3}=1\)._ We obtain the following system of equations:
\[\begin{array}{l}(0.5+\alpha)v_{2}+(0.5-\alpha)v_{3}=v_{1}\\ (0.5-\alpha)v_{1}+(0.5+\alpha)v_{3}=v_{2}\\ (0.5+\alpha)v_{1}+(0.5-\alpha)v_{2}=v_{3}\end{array}\]
\[v_{1}+v_{2}+v_{3}=1\]
Which is equivalent, as before, to:
\[v_{1}=1/3\]
\[v_{2}=1/3\]
\[v_{3}=1/3\]
Then, the probability of winning one play in the long term is (by the total law of probabilities):
\[\begin{array}{l}P(win\,|\,\mbox{\it we are in }s_{1})P(\mbox{\it to be in }s_{1})+P(win\,|\,\mbox{\it we are in }s_{2})P(\mbox{\it to be in }s_{2})+\\ P(win\,|\,\mbox{\it we are in }s_{3})P(\mbox{\it to be in }s_{3})=(0.5-\alpha)\cdot 1/3+(0.5-\alpha)\cdot 1/3+(0.5-\alpha)\cdot 1/3=0.5-\alpha\end{array}\]
Hence, game A is losing whenever this probability is strictly lower than 0.5 i.e. when \(\alpha>0\).
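The same computation can be reproduced symbolically (a minimal SymPy sketch; the code and variable names are illustrative):

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)
half = sp.Rational(1, 2)
P_A = sp.Matrix([[0,        half - a, half + a],
                 [half + a, 0,        half - a],
                 [half - a, half + a, 0]])

v1, v2, v3 = sp.symbols('v1 v2 v3')
v = sp.Matrix([[v1, v2, v3]])

# Fixed-vector equations vP = v together with the normalisation v1 + v2 + v3 = 1.
eqs = list(v * P_A - v) + [v1 + v2 + v3 - 1]
sol = sp.solve(eqs, [v1, v2, v3], dict=True)[0]
print(sol)                                   # {v1: 1/3, v2: 1/3, v3: 1/3}

# Long-run probability of winning one play: every state wins with probability 1/2 - alpha.
p_win = sum((half - a) * sol[x] for x in (v1, v2, v3))
print(sp.simplify(p_win))                    # 1/2 - alpha, i.e. losing whenever alpha > 0
```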
#### 5.1.2 Game B
Secondly, we consider game B; for which \(\alpha\) is game B a losing game?
Consider the following transition matrix:
\[P_{B}(\alpha)=\begin{bmatrix}0&0.1-\alpha&0.9+\alpha\\ 0.25+\alpha&0&0.75-\alpha\\ 0.75-\alpha&0.25+\alpha&0\end{bmatrix}\]
Note that \(P_{B}(\alpha)\) is regular:
\[P_{B}^{2}(\alpha)=\begin{bmatrix}(0.1-\alpha)(0.25+\alpha)+(0.75-\alpha)(0.9+\alpha)&(0.9+\alpha)(0.25+\alpha)&(0.1-\alpha)(0.75-\alpha)\\ (0.75-\alpha)^{2}&(0.1-\alpha)(0.25+\alpha)+(0.75-\alpha)(0.25+\alpha)&(0.9+\alpha)(0.25+\alpha)\\ (0.25+\alpha)^{2}&(0.75-\alpha)(0.1-\alpha)&(0.75-\alpha)(0.9+\alpha)+(0.75-\alpha)(0.25+\alpha)\end{bmatrix}\]
Since \(\alpha<0.1\) the entries of \(P_{B}^{2}(\alpha)\) are strictly positive.
We can apply the theorems 3 and 5.
\[vP_{B}(\alpha)=v\] \[\begin{bmatrix}v_{1}&v_{2}&v_{3}\end{bmatrix}\begin{bmatrix}0&0.1- \alpha&0.9+\alpha\\ 0.25+\alpha&0&0.75-\alpha\\ 0.75-\alpha&0.25+\alpha&0\end{bmatrix}=\begin{bmatrix}v_{1}&v_{2}&v_{3}\end{bmatrix}\]
And \(v_{1}+v_{2}+v_{3}=1\).
We obtain the following system of equations:
\[\begin{array}{l}(0.25+\alpha)v_{2}+(0.75-\alpha)v_{3}=v_{1}\\ (0.1-\alpha)v_{1}+(0.25+\alpha)v_{3}=v_{2}\\ (0.9+\alpha)v_{1}+(0.75-\alpha)v_{2}=v_{3}\end{array}\]
\[v_{1}+v_{2}+v_{3}=1\]
Which is equivalent to:
\[v_{1} =\frac{5(16\alpha^{2}-8\alpha+13)}{240\alpha^{2}-16\alpha+169}\] \[v_{2} =\frac{2(40\alpha^{2}+6\alpha+13)}{240\alpha^{2}-16\alpha+169}\] \[v_{3} =\frac{2(40\alpha^{2}+6\alpha+39)}{240\alpha^{2}-16\alpha+169}\]
Then, the probability of winning one play in the long term is (by the total law of probabilities):
\[\begin{array}{l}P(win\,|\,\mbox{\it we are in }s_{1})P(\mbox{\it to be in }s_{1})+P(win\,|\,\mbox{\it we are in }s_{2})P(\mbox{\it to be in }s_{2})+P(win\,|\,\mbox{\it we are in }s_{3})P(\mbox{\it to be in }s_{3})\\ =(0.1-\alpha)\cdot v_{1}+(0.75-\alpha)\cdot v_{2}+(0.75-\alpha)\cdot v_{3}\\ =(0.1-\alpha)\cdot\frac{5(16\alpha^{2}-8\alpha+13)}{240\alpha^{2}-16\alpha+169}+(0.75-\alpha)\cdot\frac{2(40\alpha^{2}+6\alpha+13)}{240\alpha^{2}-16\alpha+169}+(0.75-\alpha)\cdot\frac{2(40\alpha^{2}+6\alpha+39)}{240\alpha^{2}-16\alpha+169}\end{array}\]
Game B is a losing game whenever this probability is strictly lower than 0.5 i.e. when
\[\frac{-240\alpha^{3}+144\alpha^{2}-155\alpha+84.5}{240\alpha^{2}-16\alpha+169}<0.5\Leftrightarrow-240\alpha^{3}+24\alpha^{2}-147\alpha<0\Leftrightarrow\alpha>0\]
Hence, game B is a losing game whenever \(\alpha>0\) (the same as game A).
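The stationary distribution and the losing condition for game B can be checked in the same way (a minimal SymPy sketch):

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)
P_B = sp.Matrix([[0,                     sp.Rational(1, 10) - a, sp.Rational(9, 10) + a],
                 [sp.Rational(1, 4) + a, 0,                      sp.Rational(3, 4) - a],
                 [sp.Rational(3, 4) - a, sp.Rational(1, 4) + a,  0]])

v1, v2, v3 = sp.symbols('v1 v2 v3')
v = sp.Matrix([[v1, v2, v3]])
sol = sp.solve(list(v * P_B - v) + [v1 + v2 + v3 - 1], [v1, v2, v3], dict=True)[0]

# Stationary distribution, algebraically equal to the closed forms given in the text.
for x in (v1, v2, v3):
    print(x, '=', sp.simplify(sol[x]))

# Long-run winning probability: coin 1 is played in s1, coin 2 in s2 and s3.
p_win = (sp.Rational(1, 10) - a) * sol[v1] + (sp.Rational(3, 4) - a) * (sol[v2] + sol[v3])
print(sp.simplify(p_win - sp.Rational(1, 2)))   # negative for 0 < alpha < 0.1, so game B is losing
```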
#### 5.1.3 Game AB
Finally, for game AB: which \(\alpha\) makes it a winning game?
We consider the following transition matrix:
\[Q(\alpha)=\begin{bmatrix}0&0.3-\alpha&0.7+\alpha\\ 0.375+\alpha&0&0.625-\alpha\\ 0.625-\alpha&0.375+\alpha&0\end{bmatrix}\]
Note that \(Q(\alpha)\) is regular:
\[Q^{2}(\alpha)=\begin{bmatrix}(0.3-\alpha)(0.375+\alpha)+(0.7+\alpha)(0.625-\alpha)&(0.7+\alpha)(0.375+\alpha)&(0.3-\alpha)(0.625-\alpha)\\ (0.625-\alpha)^{2}&(0.375+\alpha)(0.3-\alpha)+(0.625-\alpha)(0.375+\alpha)&(0.375+\alpha)(0.7+\alpha)\\ (0.375+\alpha)^{2}&(0.625-\alpha)(0.3-\alpha)&(0.625-\alpha)(0.7+\alpha)+(0.375+\alpha)(0.625-\alpha)\end{bmatrix}\]
Since \(\alpha<0.1\), the entries of \(Q^{2}(\alpha)\) are strictly positive.
We can apply the theorems 3 and 5.
\[vQ(\alpha)=v\]
\[[v_{1}\quad v_{2}\quad v_{3}]\begin{bmatrix}0&0.3-\alpha&0.7+\alpha\\ 0.375+\alpha&0&0.625-\alpha\\ 0.625-\alpha&0.375+\alpha&0\end{bmatrix}=[v_{1}\quad v_{2}\quad v_{3}]\]
_knowing that \(v_{1}+v_{2}+v_{3}=1\)._ We obtain the following system of equations:
\[\begin{array}{l}(0.375+\alpha)v_{2}+(0.625-\alpha)v_{3}=v_{1}\\ (0.3-\alpha)v_{1}+(0.375+\alpha)v_{3}=v_{2}\\ (0.7+\alpha)v_{1}+(0.625-\alpha)v_{2}=v_{3}\\ v_{1}+v_{2}+v_{3}=1\end{array}\]
Which is equivalent to:
\[\begin{array}{l}v_{1}=\dfrac{320\alpha^{2}-80\alpha+245}{960\alpha^{2}-32\alpha+709}\\[2mm] v_{2}=\dfrac{320\alpha^{2}+24\alpha+180}{960\alpha^{2}-32\alpha+709}\\[2mm] v_{3}=\dfrac{320\alpha^{2}+24\alpha+284}{960\alpha^{2}-32\alpha+709}\end{array}\]
Then, the probability of winning one play in the long term is (by the total law of probabilities):
\[\begin{array}{l}\frac{1}{2}\big[P(win\ A\,|\,\mbox{\it we are in }s_{1})P(\mbox{\it to be in }s_{1})+P(win\ A\,|\,\mbox{\it we are in }s_{2})P(\mbox{\it to be in }s_{2})+P(win\ A\,|\,\mbox{\it we are in }s_{3})P(\mbox{\it to be in }s_{3})\big]\\ \quad+\frac{1}{2}\big[P(win\ B\,|\,\mbox{\it we are in }s_{1})P(\mbox{\it to be in }s_{1})+P(win\ B\,|\,\mbox{\it we are in }s_{2})P(\mbox{\it to be in }s_{2})+P(win\ B\,|\,\mbox{\it we are in }s_{3})P(\mbox{\it to be in }s_{3})\big]\\[1mm] =\frac{1}{2}\big[(0.5-\alpha)\cdot v_{1}+(0.5-\alpha)\cdot v_{2}+(0.5-\alpha)\cdot v_{3}\big]+\frac{1}{2}\big[(0.1-\alpha)\cdot v_{1}+(0.75-\alpha)\cdot v_{2}+(0.75-\alpha)\cdot v_{3}\big]\\[1mm] =\frac{0.5-\alpha}{2}+\frac{1}{2}\big[(0.1-\alpha)\cdot v_{1}+(0.75-\alpha)\cdot(v_{2}+v_{3})\big]\\[1mm] =\frac{0.5-\alpha}{2}+\frac{1}{2}\Big[(0.1-\alpha)\cdot\frac{320\alpha^{2}-80\alpha+245}{960\alpha^{2}-32\alpha+709}+(0.75-\alpha)\cdot\Big(\frac{320\alpha^{2}+24\alpha+180}{960\alpha^{2}-32\alpha+709}+\frac{320\alpha^{2}+24\alpha+284}{960\alpha^{2}-32\alpha+709}\Big)\Big]\end{array}\]
Game AB is a winning game whenever this probability is strictly greater than \(0.5\), i.e., when
\[\frac{-1920\alpha^{3}+1056\alpha^{2}-1406\alpha+727}{1920\alpha^{2}-64\alpha+1418}>0.5\Leftrightarrow\alpha<0.013109\]
Hence game AB is a winning game for \(\alpha<0.013109\).
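Likewise, the threshold for game AB can be verified symbolically, using the averaged matrix \(Q=(P_{A}+P_{B})/2\) as in the text (a minimal sketch; the numerical root finder and its starting point are our choices):

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)
half = sp.Rational(1, 2)
P_A = sp.Matrix([[0, half - a, half + a], [half + a, 0, half - a], [half - a, half + a, 0]])
P_B = sp.Matrix([[0, sp.Rational(1, 10) - a, sp.Rational(9, 10) + a],
                 [sp.Rational(1, 4) + a, 0, sp.Rational(3, 4) - a],
                 [sp.Rational(3, 4) - a, sp.Rational(1, 4) + a, 0]])
Q = (P_A + P_B) / 2                      # random mixture of games A and B

v1, v2, v3 = sp.symbols('v1 v2 v3')
v = sp.Matrix([[v1, v2, v3]])
sol = sp.solve(list(v * Q - v) + [v1 + v2 + v3 - 1], [v1, v2, v3], dict=True)[0]

# Winning probability: with probability 1/2 play A (wins with prob 1/2 - alpha in every state),
# with probability 1/2 play B (coin 1 in s1, coin 2 in s2 and s3).
p_win = half * (half - a) + half * ((sp.Rational(1, 10) - a) * sol[v1]
                                    + (sp.Rational(3, 4) - a) * (sol[v2] + sol[v3]))

# The combined game is winning while p_win > 1/2; find where the excess vanishes.
numerator = sp.numer(sp.together(p_win - half))
print(sp.nsolve(numerator, a, 0.013))     # approximately 0.013109
```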
## 6 Applications of the Paradox
We proved that we could actually combine two losing games into a winning one. So now, what do we do with it? Could we just rush into the closest casino and become rich? The answer is basically no. Indeed, the paradox cannot turn any pair of losing games into winning ones; it follows a specific set of rules. In a Parrondo's game the rules are capital dependent: the game we play depends on the current capital. Games A and B were linked through the gain; playing one game modifies this gain and thereby influences the next game played. In casinos, all the games are capital independent; playing one game will not affect another game. In order to apply the paradox in a casino we would need to find three games to represent our three coins. We could easily find games to model the two losing coins (the coin from game A and coin 1 from game B), but we will not find a game in a casino that is winning such as coin 2 from game B.
### Application in Physics [19]
Professor Parrondo, who theorized the paradox, is a physicist. He was working on flashing Brownian ratchets. This is a rather complex device: it consists of a ratchet that is free to move in one direction only. The ratchet is connected to a wheel immersed in a fluid of molecules at a certain temperature. The molecules undergo Brownian motion and constitute a heat bath. The impulse from a molecular collision will turn the immersed wheel. Remember that the ratchet can only turn in one direction. Hence the net effect of all the collisions would seem to make the ratchet turn continuously in that direction (see Figure 13).
We can see it more clearly in this representation of the potentials: (a) represents the potential on (game B), (b) the potential off (game A) and (c) the potential on (game B).
If the potential is switched on (analogous to game B), the Brownian particles will fall into one of the "valleys", as we see in graph (a); we suppose they fall around 0. Then, if we turn off the potential, the particles will spread out. This is analogous to game A. Finally, if we switch the potential on again, the particles that have spread out will fall again into the "valleys". But now, they will fall into different valleys. As we can see in graph (c), some of them stay in the valley around 0, a small fraction fall into the valley around -L, and another part of the Brownian particles move into the +L valley. Hence, we have a net positive displacement of those particles.
To make it simple: the flashing Brownian ratchet is a process that alternates between two states, a one-dimensional Brownian motion and a Brownian ratchet. This system will produce a directed motion. Each of those two states alone does not result in a directed motion, but the combination of the two does. It is a rather complicated process, and Dr. Parrondo introduced Parrondo's games to illustrate it in a simpler way. Note that all the other areas of application (other than gambling) are areas of active research; the implication of Parrondo's paradox there is suggested and remains to be proven.
### Application in Ecology [20]
We consider a population that can express two kinds of behavior: nomad or colonist. A nomad behavior corresponds to an independent lifestyle; thus, nomads are not affected by competition or cooperation. Hence, under poor environmental conditions they will go extinct. On the contrary, colonists live together in close proximity and, therefore, are affected by competition and cooperation, but they may then drain all the resources of their environment and also go extinct. Hence, we can qualify those two behaviors as losing strategies in the "survival game".
The nomad behavior will be "equivalent" to our previous game A. Concerning the colonist behavior, we have the following: let A be the critical lower capacity and K the carrying capacity. If the actual size of the population is between A and K, the population will grow, and it will decrease otherwise. Moreover, the carrying capacity K changes depending on the population's size (this represents the fact that environmental resources are destroyed), hence making colonist behavior a losing strategy. Thus, the colonist behavior will be our game B. Indeed, the current gain is replaced by the current population size and, depending on its value, we alternate between a winning game (if A < population size < K, the population increases) and a losing game (otherwise the population decreases).
Figure 13: Flashing Brownian Ratchets [5]
To recap: game A represents the nomad behavior (a losing strategy); game B represents the colonist strategy (if the population size is between A and K the population increases, otherwise it decreases); M is represented by A and K; and the current gain is represented by the population size.
We suppose that the organisms can sense the amount of environmental resources available to them and hence deduce the carrying capacity. Depending on the value of this carrying capacity, they will change strategies. We speak of strategic alternation when the organism changes behavior from colonist to nomad if the carrying capacity is low and from nomad to colonist if the carrying capacity is high.
Simulations have been run based on those two survival strategies, nomad and colonist, and the following has been observed. In the case of no behavioral switching, either the two populations go extinct or the nomad goes extinct whereas the colonist survives. In the case of behavioral switching, either both populations go extinct, or they survive through periodic behavioral alternation, or there is long-term growth through strategic alternation. Most importantly, one aspect of those simulations illustrates Parrondo's paradox: there were situations where both populations would die if we did not permit behavior changes but survive through **periodic** behavioral alternation.
We exhibited Parrondo's paradox in this ecological situation. This paradox could also have wider applications in the field of ecology, such as explaining why a destructive species like Homo sapiens can thrive and grow even with limited environmental resources, or helping us better understand the evolution or extinction of species in general and maybe the emergence of life.
### Application in Finance [21, 22, 23]
Finance is one of the most active fields of research related to Parrondo's paradox, as it could lead to the most profit; everyone wants to find out how to combine two losing investments into a winning one.
For now, no direct application of the paradox has been found. The reason is that Parrondo's paradox follows specific rules for the determination of games A and B, and finding strategies in asset management that also follow those rules is nearly impossible. However, let us suppose that we are actually able to find an investment bank willing to sell us a security where we could apply the paradox. Let us then assume that we are able to build a portfolio whose total value increases while the values of the stocks decrease individually. In that case we would make money out of worthless stocks; there must be a loophole somewhere. Moreover, note that it is as hard to find stocks that will decrease as stocks that will increase.
Hence, Parrondo's paradox has not yet found a utility in stock investment, but the research is still active. One article [24] states that "rebalanced portfolio diversification can turn individually, money-losing assets into a winning portfolio". Indeed, it shows that by taking separate investments (that were more likely to be losing ones) and rebalancing them into an equally weighted portfolio, we are more likely to make profits, exhibiting Parrondo's paradox once more.
Moreover, a correlation has been found between the paradox and "volatility pumping" [22], which is similar to rebalanced portfolio diversification. Volatility pumping consists of the following: we have two stocks A and B. Stock A is stable but not so good in the long run, and stock B increases but is volatile. The method is to sell both stocks each day. Let N be the total amount of the sale. Then, we buy N/2 of stock A and N/2 of stock B. We repeat this procedure each day. This method will produce a positive profit. However, this is a toy model and it is hard to apply it to the real stock market. Indeed, it is not possible to buy and sell every day, as there will be high transaction costs.
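A toy simulation of this rebalancing scheme illustrates the effect (a minimal sketch; the daily return distributions below are illustrative assumptions, not taken from [22]):

```python
import numpy as np

rng = np.random.default_rng(0)
days = 252                                     # roughly one trading year

# Illustrative daily gross returns: stock A is flat; stock B doubles or halves with equal
# probability, so its typical long-run value merely fluctuates around its starting level.
ret_A = np.ones(days)
ret_B = rng.choice([2.0, 0.5], size=days)

# Buy-and-hold wealth for each stock alone (starting from 1).
hold_A = np.prod(ret_A)
hold_B = np.prod(ret_B)

# Volatility pumping: every day sell everything and put half the proceeds into each stock.
wealth = 1.0
for rA, rB in zip(ret_A, ret_B):
    wealth *= 0.5 * rA + 0.5 * rB

# The rebalanced portfolio grows at a geometric rate of sqrt(1.5 * 0.75) ~ 1.06 per day here,
# while stock A stays flat and stock B typically ends up near where it started.
print(f"hold A: {hold_A:.3f}, hold B: {hold_B:.3e}, rebalanced: {wealth:.3e}")
```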
Those preceding applications on finance are "toy models"; we are still waiting on a breakthrough concerning a direct application of Parrondo's paradox in finance.
### Application in Reliability Theory
Reliability is defined as the probability that a device will perform its intended function under specified times and conditions. To exhibit Parrondo's paradox we studied two systems in series, where the first system was less reliable than the second. Then, we modified the first system by randomly choosing its components, i.e. we chose each unit randomly from the same set of components as before. Hence the distributions of the new units were a mixture of the previous units' distributions. Surprisingly, the new system was more reliable than the second one under specific conditions. Therefore, by randomly choosing units from the first, less reliable, system, we obtained a better system than the second one, thus exhibiting the paradox.
### Application 5: Biology
Parrondo's paradox also has some possible applications in biology. The first one concerns random phase variation [26]. Phase variation is a method for dealing with a changing environment; it involves the variation of protein expression. Random phase variation is phase variation that happens at random times. But phase variation is a losing strategy, as it will often result in a maladaptive phenotype. However, it has been observed that random phase variation gives better results for the survival of the organism in certain conditions, and it could even be necessary.
We have also studied some applications in oncology [27]. Indeed, the growth of tumors can admit a chaotic behavior in certain regimes. It has been shown that chaotic tumor growth trajectories can be made nonchaotic by modifying some control parameters, such as drugs. Hence, we could combine chaotic behaviors into a nonchaotic one, thus reflecting the paradox.
Another application in biology concerns sensors [28]. The biological sensors of an organism permit it to analyze its environment and make decisions upon it. We expect organisms with more accurate sensors to be more adaptive to their environment and better at survival. But actually, under certain conditions, organisms with less accurate sensors tend to be the ones surviving. This also exhibits Parrondo's paradox:
We can consider game A as doing nothing; we will say that this is a losing strategy as the environment is changing. Game B will consist of using the bad sensors to make a decision about whether or not to migrate. In Parrondo's game B, we had two coins, one better than the other; those two coins will be represented by the stochastic switching of the environment: under certain environmental conditions less accurate sensors make better decisions, and under other conditions they do the opposite. Note that game B in general is a losing strategy, as it consists of using bad sensors to make decisions. However, it has been seen that organisms with less accurate sensors tend to survive over those with more accurate ones, thus expressing Parrondo's paradox.
Finally, we point out that recent studies on COVID-19 use strategies based on Parrondo's paradox, see [29, 30].
## 7 Discussion and Conclusions
Through this paper we did a complete study of Parrondo's paradox. We first introduced Parrondo's games through different examples, all related to gambling. Then, we decided to simulate the coin tossing Parrondo's game. From that simulation we clearly saw that it was possible to combine two losing games into a winning one. Even more surprisingly, we saw that a random combination of the two losing games led to a winning game. After having illustrated the paradox, we did a full mathematical study of it. In order to do that, we first introduced some definitions and theorems related to Markov chains. Notably, we proved a specific version of the fundamental limit theorem for regular Markov chains. This theorem allowed us to represent our example as a regular Markov chain and compute its equilibrium distribution. With all that knowledge, we were able to analyze our Parrondo's game and prove that indeed those two losing games can be combined into a winning game. Finally, we saw that, even though in this work we mostly use the examples of gambling games, Parrondo's paradox can also be applied in many other fields such as physics, ecology, finance, reliability theory or biology.
Therefore, although Parrondo's paradox is a "certain" revolution in game theory, it can actually be "easily" explained by using Markov chains.
## Acknowledgements
Xavier Molinero has been partially supported by funds from the Ministry of Science and Innovation grant PID2019-104987GB-I00 (JUVOCO) and the Catalan government [2021 SGR 01419 ALBCOM].
Camille Mignien has been partially funded by the scholarship Swiss-European Mobility Programme.
2301.11005 | Digitized-counterdiabatic quantum factorization | We factorize a 48-bit integer using 10 trapped-ion qubits on a Quantinuum's quantum computer. This result outperforms the recent achievement by B. Yan et al., arXiv:2212.12372 (2022), increasing the success probability by a factor of 6 with a non-hybrid digitized-counterdiabatic quantum factorization (DCQF) algorithm. We expect better results with hybrid DCQF methods on our path to factoring RSA-64, RSA-128, and RSA-2048 in this NISQ era, where the latter case may need digital-analog quantum computing (DAQC) encoding. | Narendra N. Hegade, Enrique Solano | 2023-01-26T09:35:17Z | http://arxiv.org/abs/2301.11005v1

# Digitized-counterdiabatic quantum factorization
###### Abstract
We factorize a 48-bit integer using 10 trapped-ion qubits on a Quantinuum's quantum computer. This result outperforms the recent achievement by B. Yan et al., arXiv:2212.12372 (2022), increasing the success probability by a factor of 6 with a non-hybrid digitized-counterdiabatic quantum factorization (DCQF) algorithm. We expect better results with hybrid DCQF methods on our path to factoring RSA-64, RSA-128, and RSA-2048 in this NISQ era, where the latter case may need digital-analog quantum computing (DAQC) encoding.
_Introduction.--_ The recent proposal of Bao Yan et al. [1], inspired by the classical Schnorr's algorithm, shows that one could encode the integer factorization problem on a quantum computer with \(\mathcal{O}(\log N/\log\log N)\) qubits, i.e., sublinear in the bit length of an integer N. In this sense, it is better than Shor's factorization algorithm [2] with respect to the number of qubits. However, the time complexity of this hybrid classical-quantum algorithm is unknown and hard to estimate. The authors combine Babai's algorithm with the quantum approximate optimization algorithm (QAOA) to solve the closest vector problem on a lattice. The resulting problem reduces to an optimization problem whose solution is encoded in the ground state of an Ising spin-glass Hamiltonian. Even though the authors experimentally factorize a 48-bit number on a superconducting quantum computer with a large enough success probability, its scalability for larger tasks remains unknown. In this report, we take up the classical preprocessing part of their work and enhance the quantum part of the algorithm. We propose a non-hybrid approach, called digitized-counterdiabatic quantum factorization (DCQF), to tackle the same problem outperforming QAOA techniques. In this sense, DCQF may allow us to factorize larger numbers, possibly up to RSA-64 and RSA-128, with current noisy intermediate-scale quantum (NISQ) computers. This report explores the possibility of factoring larger numbers with compressed algorithms in given quantum computers, rather than proving any scalability of computational resources. On the other hand, further use of analog [3; 4] or digital-analog encoding schemes [5; 6; 7] may pave the way towards factoring RSA-2048 in the NISQ era, without the long wait for fault-tolerant quantum computers.
In Ref. [1], the quantum part of their factorization algorithm consists in finding the ground state of an Ising spin-glass Hamiltonian with all-to-all connectivity. The general form of such a Hamiltonian is given by
\[H_{Ising}=\sum_{i<j}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}+\sum_{i}h_{i}\sigma_{i} ^{z}. \tag{1}\]
Here, \(\sigma_{i}^{z}\) is the Pauli-\(z\) matrix, \(J_{ij}\), and \(h_{i}\) are the interactions between the spins and local field acting on a site \(i\), respectively. In the worst-case scenario, finding the ground state of an Ising spin-glass problem is known to be NP-hard. Along these lines, even with quantum computers, it is unlikely to solve this problem in polynomial time, though one could expect a polynomial quantum speed-up. There are various approaches to tackle this problem on a quantum computer, using adiabatic quantum computation (AQC), quantum annealing (QA) [8], QAOA [9], among others. Despite the vast interest in QA and QAOA for solving combinatorial optimization problems, we still need to learn about their quantum speed-up for large-scale problems of industrial relevance. Here, we will consider digitized-counterdiabatic quantum computing (DCQC) applied to the factorization problem, which is known to overcome some of the challenges faced by AQC and to outperform QAOA [10; 11; 12].
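As an illustration of Eq. (1), the following minimal sketch (with randomly chosen couplings; the instance and variable names are ours, not from Ref. [1]) builds a small all-to-all spin-glass Hamiltonian as a dense matrix and finds its ground state by exact diagonalization:

```python
import numpy as np
from itertools import combinations

n = 6                                   # number of qubits (kept tiny for exact diagonalization)
rng = np.random.default_rng(1)
J = {pair: rng.normal() for pair in combinations(range(n), 2)}   # all-to-all couplings J_ij
h = rng.normal(size=n)                                           # local fields h_i

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on(site_ops):
    """Tensor product of single-qubit operators, identity on all remaining sites."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, site_ops.get(q, I2))
    return out

H = sum(J[(i, j)] * op_on({i: sz, j: sz}) for i, j in J)
H += sum(h[i] * op_on({i: sz}) for i in range(n))

energies, states = np.linalg.eigh(H)
ground = states[:, 0]
# The computational-basis state with the largest weight encodes the optimal spin configuration.
best = np.argmax(np.abs(ground))
print("ground energy:", energies[0], "configuration:", format(best, f"0{n}b"))
```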
Counterdiabatic (CD) protocols are known to speed up the adiabatic evolution by suppressing the non-adiabatic transitions. The recent developments in this field have opened the possibility of applying these techniques to AQC [14; 15]. Even with approximate counterdiabatic terms, a drastic enhancement can be obtained for most problems [10; 11; 12; 13; 14; 15; 16; 17]. However, the experimental implementation of the CD protocols on analog quantum computers is a challenging task. Especially, while solving classical optimization problems, the CD terms are shown to be non-stoquastic, and the current quantum annealers do not have the capability to consider such problems. In order to overcome these difficulties, DCQC was proposed and experimentally tested. Recently, it was numerically proved that even the simplest approximate CD protocols could offer polynomial scaling enhancement in the ground state success probability, as compared with the finite time adiabatic quantum optimization [16; 17].
_DCQF algorithm.--_ In order to find the ground state of the Hamiltonian in Eq. (1), we start with an adiabatic Hamiltonian defined as
\[H_{ad}(\lambda)=[1-\lambda(t)]H_{i}+\lambda(t)H_{Ising}, \tag{2}\]
where \(\lambda(t)\) is a scheduling function which defines the path between \(H_{i}\) and \(H_{Ising}\). We choose \(\lambda(t)=\sin^{2}\left[\frac{\pi}{2}\sin^{2}\left(\frac{\pi t}{2T}\right)\right]\) such that its first and second derivatives vanish at the initial and final time. This is an optional boundary condition for the CD protocol. The initial Hamiltonian is chosen as \(H_{i}=-\sum_{i}\sigma_{i}^{x}\) such that its ground state \(|+\rangle^{\otimes n}\) can be easily prepared. In order to speed up the adiabatic evolution, we introduce an ap
2307.12682 | Pro-PRIME: A general Temperature-Guided Language model to engineer enhanced Stability and Activity in Proteins | Designing protein mutants of both high stability and activity is a critical yet challenging task in protein engineering. Here, we introduce Pro-PRIME, a deep learning zero-shot model, which can suggest protein mutants of improved stability and activity without any prior experimental mutagenesis data. By leveraging temperature-guided language modelling, Pro-PRIME demonstrated superior predictive power compared to current state-of-the-art models on the public mutagenesis dataset over 33 proteins. Furthermore, we carried out wet experiments to test Pro-PRIME on five distinct proteins to engineer certain physicochemical properties, including thermal stability, rates of RNA polymerization and DNA cleavage, hydrolase activity, antigen-antibody binding affinity, or even the nonnatural properties, e.g., the ability to polymerize non-natural nucleic acid or resilience to extreme alkaline conditions. Surprisingly, about 40% AI-designed mutants show better performance than the one before mutation for all five proteins studied and for all properties targeted for engineering. Hence, Pro-PRIME demonstrates the general applicability in protein engineering. | Pan Tan, Mingchen Li, Yuanxi Yu, Fan Jiang, Lirong Zheng, Banghao Wu, Xinyu Sun, Liqi Kang, Jie Song, Liang Zhang, Yi Xiong, Wanli Ouyang, Zhiqiang Hu, Guisheng Fan, Yufeng Pei, Liang Hong | 2023-07-24T10:41:48Z | http://arxiv.org/abs/2307.12682v5

**A general Temperature-Guided Language model to engineer enhanced Stability and Activity in Proteins**
## Abstract
Designing protein mutants with high stability and activity is a critical yet challenging task in protein engineering. Here, we introduce PRIME, an innovative deep learning approach for the zero-shot prediction of both protein stability and enzymatic activity. PRIME leverages temperature-guided language modelling, providing robust and precise predictions without relying on prior experimental mutagenesis data. Tested against 33 protein datasets, PRIME demonstrated superior predictive performance and generalizability compared to current state-of-the-art models.
## Introduction
Proteins are the fundamental constituents of living systems, playing integral roles in cells, tissues, and organs, and covering a vast array of biological processes. These processes span from enzyme catalysis and cellular metabolism to immune responses, signal transduction, and transport, among others. Beyond their biological significance, proteins are critical to numerous industries. In biomedicine, they serve as therapeutic agents and targets; in the food industry, they are involved in food processing and preservation; in brewing, they are essential to the production process; and in chemical engineering, they act as key catalysts for various reactions. Additionally, proteins are the cornerstone of in vitro diagnostic (IVD) tests, being instrumental in the detection and monitoring of numerous diseases. However, proteins extracted from biological organisms often require modifications to make them suitable for industrial applications. This is primarily because the physicochemical environments (for example, temperature) in which these proteins need to function in industrial settings are often drastically different from their native biological contexts[1]. Therefore,
to meet the demands of these diverse application scenarios, the proteins need to be engineered through mutations to improve their performance. These modifications could aim to enhance stability under extreme temperature or pH conditions, or to increase enzymatic activity and specificity. The process of optimizing proteins for such industrial applications typically involves iterative cycles of mutation, screening, and selection - a labor-intensive and time-consuming endeavor. Indeed, a primary focus in protein engineering lies in enhancing the robustness of proteins to function effectively under extreme conditions. The initial target often is the improvement of thermostability, allowing proteins to retain their structure and function at high temperatures. Additionally, improving resistance to extreme pH environments (acidic or alkaline) and harsh solvents is equally crucial. Such enhancements enable proteins to be utilized in a broader range of industrial processes, many of which involve conditions that can denature or deactivate proteins.
In addition to stability, augmenting protein activity is another major goal in protein engineering. This could involve optimizing the efficiency of gene editing enzymes, increasing the rate of biocatalytic reactions, enhancing the binding affinity of antibodies, and so on. All these modifications aim at boosting the performance of proteins in their respective applications. Hence, the core mission of protein engineering involves designing protein mutants that not only exhibit heightened activity but also display increased stability under a range of conditions.
A sophisticated interplay exists between protein stability and activity, both of which are fundamental to their overall performance. Protein stability is crucial for maintaining the proteins' native structure and function[2]. Adverse conditions can provoke alterations in protein folding states, leading to the loss of native conformation and function. Notably, proteins are prone to denaturation under extreme circumstances, including elevated temperatures or exposure to strong acids and alkalis[3, 4]. Improving protein stability can increase evolvability by allowing a protein to tolerate a broader range of beneficial mutations while maintaining its native structure[5]. However, overemphasis on stability at the expense of protein flexibility may inhibit enzymatic activity[2]. Therefore, achieving an optimal balance between stability and activity is vital for optimizing protein efficacy across diverse contexts.
As computational simulation and related technologies continue to advance, various software tools have emerged to enhance protein thermostability, including Rosetta[6], ABACUS[7], and FoldX[8], which employ physical or statistical potential functions. While these computational methods often provide relatively accurate stability predictions, their capacity to predict protein biological activity is limited. Typically, modifying the biological activity of proteins requires long-term (~years) meticulous experimental research into their working mechanisms, which is the primary method of rational protein design. However, mechanistic research is time-consuming and labor-intensive, and it increasingly fails to meet the modification needs of many important industrial enzymes commonly used in everyday applications. In recent years, deep learning has been extensively applied in protein engineering. Large-scale protein language models[9, 10, 11, 12, 13], such as those utilizing self-supervised learning of protein sequences to understand protein sequence semantics and grammar, have demonstrated high predictive performance for protein fitness[14], even in zero-shot settings[13, 15, 16]. However, most of these models, pre-trained on extensive protein sequence databases, lack interpretation for distinct protein properties, such as thermostability and enzymatic activity. These specific properties indeed form the real goals of protein engineering. Other supervised deep learning methods often exhibit high accuracy in predicting protein function but rely on high-throughput experiments to generate thousands of data points[17, 18]. This approach may not be practical for many
proteins due to resource limitations. In this study, we amassed a comprehensive dataset comprising 96 million sequence-host bacterial strain optimal growth temperatures (OGT) [19]. Host bacterial strain optimal growth temperature has been shown to strongly correlate with information such as protein optimal enzymatic activity temperature and melting temperature [20]. Leveraging this dataset, we developed an interpretable deep learning-based methodology, termed PRIME, Protein language model for Intelligent Masked pretraining and Environment (temperature) prediction. In its training process, PRIME utilizes a masked language modeling (MLM) task, a methodology inspired by the transformer-based language models [21]. This task involves artificially modifying protein sequences based on the natural probability distribution of amino acids, followed by attempting to restore the sequences to their original state. Such a procedure allows PRIME to learn and comprehend the semantic and grammatical features inherent in protein sequences. Alongside this, PRIME capitalizes on a multi-task learning paradigm to capture the temperature traits associated with these sequences. This approach fosters an inherent predisposition in PRIME to assign superior scores to protein sequences exhibiting enhanced temperature tolerance and exhibiting conformity to natural biological principles. PRIME is trained with the objective of predicting optimal growth temperatures (OGTs) across a wide range of bacterial strains. As a result, PRIME naturally correlates higher scores with sequences that are more likely to contribute to robustness and survivability in varied environmental conditions, including extreme temperature scenarios. Therefore, PRIME proves particularly proficient in the design and optimization of industrial enzymes and proteins that often demand high-temperature tolerance and resilience for practical applications. Our model has demonstrated exceptional predictive performance relative to other state-of-the-art (SOTA) models, especially in forecasting the thermostability (change of T\({}_{\text{m}}\)) and enzymatic activity of protein mutated sequences.
## Results
**PRIME Architecture**
PRIME is a pre-trained model based on the Transformer architecture [22], as illustrated in Figure 1. PRIME consists of three main components. The first is the feature extraction module, which is a Transformer-encoder model to extract the latent representation of the sequence. The second component is the language modeling module, which is designed to learn the contextual representation of amino acids according to the masked language loss. The third component is the OGT prediction module, which can predict the optimal growth temperature of the organism in which the protein is located, based on the latent representation. The details of PRIME are described in the supplementary information.
1) The pretraining tasks of PRIME
1.1) Denoised language modeling
We utilize masked language modeling, which is often employed to model sequential data, to pre-train our model. Specifically, given a protein sequence, 20% of the amino acids in it are selected for prediction. The training objective is to predict each selected amino acid given its context (surrounding amino acids). A selected amino acid is replaced with a special token [MASK] with probability 70%, substituted with a random amino acid selected according to its frequency in the UniProtKB protein sequence database with probability 20%, and kept unchanged otherwise. The modified sequence is passed to the Transformer-encoder and language modeling module, which outputs a probability distribution over the 20 amino acids. The distance between the reconstructed sequence and the original sequence is then treated as the optimization target to train the model. The details can be found in the supplementary information.
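A minimal sketch of this corruption scheme (illustrative only; the background frequencies below are uniform placeholders rather than the actual UniProtKB statistics):

```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
# Placeholder background frequencies; in practice these come from UniProtKB.
BG_FREQ = [1.0 / 20] * 20

def corrupt(sequence, select_p=0.20, mask_p=0.70, random_p=0.20):
    """Return (corrupted tokens, indices to predict) following the 20% / 70% / 20% / 10% scheme."""
    tokens = list(sequence)
    targets = []
    for i in range(len(tokens)):
        if random.random() >= select_p:
            continue                      # not selected for prediction
        targets.append(i)
        r = random.random()
        if r < mask_p:
            tokens[i] = "[MASK]"          # 70%: replace with the mask token
        elif r < mask_p + random_p:
            tokens[i] = random.choices(AMINO_ACIDS, weights=BG_FREQ)[0]  # 20%: random amino acid
        # remaining 10%: keep the original amino acid, but it is still predicted
    return tokens, targets

corrupted, targets = corrupt("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(corrupted, targets)
```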
1.2) Optimal growth temperature prediction
In the second pre-training task, the protein sequence undergoes processing by the Transformer-encoder and OGT prediction module to obtain a predicted OGT value. The distance between the predicted OGT and the true OGT is subsequently computed using the Mean Squared Error (MSE) metric. This metric serves as the optimization objective for this pre-training task.
1.3) Correlation objective
Figure 1: A: The architecture of PRIME. The core architecture of PRIME is a BERT-style transformer, with the pre-trained ESM2 utilized for initialization. Following the generation of the latent representation, PRIME encompasses two top modules: the Language Modeling (LM) module and the Optimal Growth Temperature (OGT) prediction module. B: The scheme of zero-shot prediction for the mutation effect of a targeted protein. The mutant effect is scored by the log odds ratio between likelihoods of the mutated and wildtype sequences based on PRIME.

In order to enhance the predictive capacity for mutant effects on thermostability within the language modeling module, we incorporate temperature information through the utilization of a correlation objective. This objective establishes a relationship where sequences exhibiting higher language modeling probabilities are aligned with higher Optimal Growth Temperatures (OGT), thus reinforcing the connection between sequence modeling and thermodynamic stability. Specifically, given a batch of protein sequences, the language modeling module generates probabilities for each sequence while the OGT prediction module predicts the OGT values. To quantify the correlation between the sequence probabilities and the predicted temperatures, we employ the Pearson Correlation Coefficient as a metric. This metric serves as an additional objective during the pre-training process, enabling the exploration of the relationship between the sequence probabilities of the language model and the predicted temperatures. This correlation objective can inject temperature-aware information learned by the OGT prediction module into the language modeling module.
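A plain NumPy sketch of how such a batch-level correlation term can be computed (illustrative; in the actual model this quantity is evaluated on differentiable tensors during training):

```python
import numpy as np

def pearson_correlation_objective(seq_log_likelihoods, predicted_ogts):
    """Batch-level Pearson correlation between per-sequence LM log-likelihoods and predicted OGTs.

    During pre-training this value (or its negative) is added to the loss so that sequences
    the language model finds more probable are pushed towards higher predicted OGTs.
    """
    x = np.asarray(seq_log_likelihoods, dtype=float)
    y = np.asarray(predicted_ogts, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / (np.sqrt((x**2).sum()) * np.sqrt((y**2).sum()) + 1e-8))

# Toy batch: four sequences' average log-likelihoods and predicted growth temperatures.
print(pearson_correlation_objective([-1.2, -0.8, -1.5, -0.9], [55.0, 70.0, 37.0, 62.0]))
```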
2) Mutated protein sequence scoring strategy
It has been shown that a language model trained with the masked language modeling objective can be utilized to estimate the effects of sequence variants. Leveraging this functionality, we utilize the language modeling module to evaluate mutant sequences of proteins. Specifically, for a given mutation, we treat the amino acid in the wildtype protein as the reference state. By comparing the probability assigned to the mutated amino acid with the probability assigned to the wildtype, we can evaluate and score the impact of the mutation.
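In practice this amounts to a log-odds score at the mutated position. A minimal sketch follows; the `masked_probabilities` callable stands in for a forward pass of the language modeling module and is an assumption of this illustration, not part of any released API:

```python
import math

def mutation_score(sequence, position, wildtype_aa, mutant_aa, masked_probabilities):
    """Log odds ratio between the mutant and wildtype amino acid at one position.

    `masked_probabilities(sequence, position)` is assumed to return a dict mapping each of the
    20 amino acids to the model's probability at `position` when that position is masked.
    Positive scores favour the mutation, negative scores favour the wildtype.
    """
    assert sequence[position] == wildtype_aa, "wildtype residue does not match the sequence"
    probs = masked_probabilities(sequence, position)
    return math.log(probs[mutant_aa]) - math.log(probs[wildtype_aa])

def multi_mutation_score(sequence, mutations, masked_probabilities):
    # Multi-site mutants scored by summing single-site log odds (an additive approximation).
    return sum(mutation_score(sequence, p, wt, mt, masked_probabilities) for p, wt, mt in mutations)
```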
3) Enhancing mutated protein sequence scoring capacity through homologous fine-tuning
We have noticed that further fine-tuning the language modeling module on the homologous sequences of a protein improves its scoring of that protein's mutants. In the pre-training stage, PRIME is trained on a vast dataset including 96 million sequences (refer to the supplementary information, dataset section). After pretraining, PRIME is further trained on the homologous sequences of the proteins present in the \(\Delta\)Tm datasets (as detailed in the subsequent section) with the same training objective.
**PRIME outperforms state-of-the-art methods in predicting the thermostability of mutated protein sequences**
We conducted a comparison of the zero-shot prediction capacity on thermostability between our model, PRIME, and several current state-of-the-art (SOTA) models, including deep learning models esm-1v[15], MSA-transformer[11] and Tranception[23], as well as the traditional computational method, Rosetta[6]. Notably, among these methods, Rosetta incorporates protein structure information, whereas the others rely solely on protein sequence. Our analysis utilized a dataset derived from ProThermDB[24], featuring single-site mutations in proteins with \(\Delta\)Tm data collected under the same experimental pH and ensuring a minimum of 10 data points per protein. We obtained the wild-type protein structure from the Protein Data Bank and employed Alphafold[25] to construct structures absent in PDB.
This comprehensive dataset enabled a systematic investigation of the impact of specific mutations on protein thermostability, supporting the development and validation of advanced predictive models such as PRIME. The comparison provides valuable insights into the relative performance of different modeling approaches and highlights the potential of PRIME for predicting
protein thermostability in a zero-shot setting. The results are illustrated in Figure 2A (Details can be found in Table S1 and S2). As can be seen, PRIME demonstrates superior performance over all the other methods in predicting protein thermostability without losing the accuracy on the prediction of enzymatic activity. After further refinement using homologous sequences of target sequences from \(\Delta\)Tm datasets, PRIME achieves a lead of 28% (enhancing from 0.38 to 0.488) over another SOTA method, ESM-1v. Meanwhile, PRIME's prediction accuracy for activity is on par with that of ESM-1v. This result highlights the potential of PRIME in protein engineering applications, especially in designing protein sequences with augmented thermostability and activity. PRIME outperforms both traditional computational approaches and other deep learning models, demonstrating its exceptional efficacy.
In addition to the zero-shot assessment, we also tested the representational capacity and transferability of PRIME. Specifically, we conducted supervised fine-tuning on two temperature-aware downstream tasks (details can be found in the SI). As the pretraining of PRIME incorporates the optimal growth temperature of the bacterium in which the protein is found, it is anticipated that PRIME can also perform better in predicting other properties of proteins associated with thermal stability. As exhibited in Figure 2B and 2C, PRIME also outperforms other supervised methods in the tasks of predicting the melting temperature (T\({}_{\text{m}}\)) of a native protein and its optimal enzymatic activity temperature (T\({}_{\text{opt}}\)). We further investigated the contributions made by the two primary modules of PRIME, specifically the OGT prediction module and the language modeling module (Table S1 and Table S2). It is evident that utilizing only one of the OGT prediction or the MLM tasks results in decreased performance for PRIME. This finding highlights the significance of combining both the OGT prediction and MLM tasks in the PRIME model to achieve optimal performance. The synergistic effect of these two modules allows the model to better understand the complex relationships between protein sequences and their thermostability properties, ultimately resulting in improved predictive capabilities. The integration of both modules in the PRIME model ensures a more comprehensive understanding of the protein sequence information, which in turn contributes to its superior performance compared to other state-of-the-art models.

Figure 2: Comparison of performance between PRIME and other methods. A: The averaged Spearman correlation of different unsupervised models on the datasets of \(\Delta\)T\({}_{\text{m}}\) (orange) and enzymatic activity (blue). The Pearson correlation (cyan) and \(R^{2}\) (black) of various supervised methods for the prediction of T\({}_{\text{m}}\) (B) and T\({}_{\text{opt}}\) (C) of native sequences, where the datasets and data splits for T\({}_{\text{m}}\) (\(\sim\)39,000 protein sequences) and T\({}_{\text{opt}}\) (\(\sim\)1900 sequences) are referenced from Ref [26].
## Conclusion and discussion
In conclusion, we unveil PRIME, a pioneering deep learning methodology, which deftly exploits an expansive dataset comprising sequence-host bacterial strain optimal growth temperatures. Utilizing an adapted masked language model (MLM) for Optimal Growth Temperature (OGT) prediction, PRIME capably assimilates semantic, grammatical, and temperature-related characteristics of protein sequences. Our systematic in silico experimental validations unequivocally establish PRIME's superior performance over other leading models, such as ESM-1v, MSA-transformer, Tranception, and Rosetta, in predicting the thermostability and activity of mutated sequences in proteins.
Traditional protein engineering strategies typically rely on high-throughput experimental screening to identify beneficial mutations. This approach involves generating a large library of protein variants, each with different mutations, and then screening these variants for desired traits. These traits can include enhanced stability under certain conditions (such as high temperature or extreme pH) or increased enzymatic activity [27]. These techniques, however, are labor-intensive and costly. For many important proteins, designing a high-throughput experimental protocol is challenging, rendering low-throughput experimental testing a more common and practical approach. If we can identify a sufficient number of superior single-site mutations via low-throughput experimentation alone, we can then build on these single-site mutations to generate cumulative multi-site mutations. In such cases, a more targeted approach based on computational prediction can be valuable. Tools like PRIME can predict the impact of specific mutations on protein stability and activity, enabling a more focused and efficient approach to engineering improved protein variants. By reducing the reliance on extensive experimental screening, such computational tools can make the protein engineering process more efficient and accessible, potentially broadening the range of proteins that can be effectively engineered.
Furthermore, the adaptable nature of PRIME's general language model representation offers scope for its deployment in other prediction tasks such as determining the melting temperature (T\({}_{\text{m}}\)) or optimal enzymatic activity temperature (T\({}_{\text{opt}}\)) of native proteins. By lowering the thresholds for protein modification, PRIME allows for improvements in the stability and biological activity of proteins, bypassing the need for extensive mechanistic investigations. Additionally, PRIME's multi-task learning paradigm, which incorporates domain knowledge into the language model, could catalyze the development of more specialized AI methodologies across diverse fields of specialty.
## Acknowledgements
This work was supported by the grants from the National Science Foundation of China (grant number 12104295). We acknowledge Shanghai Artificial Intelligence Laboratory for computing resources.
2308.02214 | Semi-Markov Processes in Open Quantum Systems. II. Counting Statistics with Resetting | A semi-Markov process method for obtaining general counting statistics for open quantum systems is extended to the scenario of resetting. The simultaneous presence of random resets and wave function collapses means that the quantum jump trajectories are no longer semi-Markov. However, focusing on trajectories and using simple probability formulas, general counting statistics can still be constructed from reset-free statistics. An exact tilted matrix equation is also obtained. The inputs of these methods are the survival distributions and waiting-time density distributions instead of quantum operators. In addition, a continuous-time cloning algorithm is introduced to simulate the large-deviation properties of open quantum systems. Several quantum optics systems are used to demonstrate these results. | Fei Liu | 2023-08-04T09:16:15Z | http://arxiv.org/abs/2308.02214v2

# Semi-Markov Processes in Open Quantum Systems. II. Counting Statistics with Resetting
###### Abstract
A semi-Markov process method for obtaining general counting statistics for open quantum systems is extended to the scenario of resetting. The simultaneous presence of random resets and wave function collapses means that the quantum jump trajectories are no longer semi-Markov. However, focusing on trajectories and using simple probability formulas, general counting statistics can still be constructed from reset-free statistics. An exact tilted matrix equation is also obtained. The inputs of these methods are the survival distributions and waiting-time density distributions instead of quantum operators. In addition, a continuous-time cloning algorithm is introduced to simulate the large-deviation properties of open quantum systems. Several quantum optics systems are used to demonstrate these results.
## I Introduction
In a previous paper [1], we explicitly constructed semi-Markov processes (sMPs) embedded in quantum jump trajectories of open quantum systems and clarified their connections with the Markov quantum master equation (MQME) [2; 3; 4]. The unique advantages of the sMP method are in analyses and computations of the general counting statistics of open quantum systems [5; 6; 7; 8], e.g., the large deviation properties of these systems. On the one hand, unlike the tilted quantum master equation (TQME) [7; 9; 10; 11; 12; 13; 14], which is now dominant in the literature, the sMP method is a theory concerning classical probability; all quantum characteristics are indirectly indicated through the classical waiting time distributions. On the other hand, the sMP method can handle the statistics of general time-extensive quantities related to the occurrence frequencies of adjacent collapses in quantum jump trajectories. In contrast, the TQME is restricted to time-extensive quantities of the frequencies of single collapses. Notably, the counting statistics of sMPs were established over a decade ago [15; 16]. However, to the best of our knowledge, their significance in open quantum systems has not been appreciated until the present work.
In this paper, we aim to deepen the sMP method by investigating the counting statistics in more complex open quantum systems in the presence of stochastic resetting. In the past decade, stochastic systems with various resetting protocols have attracted much theoretical interest in the community of nonequilibrium physics [17; 18; 19; 20; 21; 22; 23; 24]. Although most of this work has involved classical systems, some attention has also been devoted to quantum systems [25; 26; 27; 28; 29; 30; 31; 32; 33], e.g., constructing autonomous entanglement engines utilizing resetting [25; 29], designing nonequilibrium stationary states of quantum many-body systems through resetting [31], and achieving speedup of quantum hitting times by resetting [33]. Very recently, Perfetto et al. [32] studied the thermodynamics of quantum jump trajectories subject to non-Poissonian resetting. They found that the large deviation properties of the counting statistics can be calculated exactly by relating the moment-generating function (MGF) in the presence of resetting to that of a reset-free system. To achieve this result, they combined techniques used on the TQME with the renewal structure of the resetting dynamics.
The presence of stochastic resetting in open quantum systems raises a fundamental challenge to the sMP method. In the reset-free case, according to the theory of quantum jump trajectories [2; 9; 11; 34; 35; 36; 37; 38; 39], the wave function of the quantum system deterministically evolves in a nonunitary way and is randomly interrupted by collapses. Because the time distribution of pairs of adjacent collapses is non-Poissonian, called memory in this paper, and is independent of the previous history of the wave function, if we are only concerned about collapses of wave functions, including the collapsed quantum states and times, these random events constitute a sMP [40; 41]. In general, the resetting process is also an sMP. Hence, when these two stochastic processes occur simultaneously, the composite process is usually no longer semi-Markovian.
This work overcomes the aforementioned challenge, and it makes three main contributions. First, we construct a set of probability formulas that can precisely describe the composite stochastic process. The key idea is that resetting does not alter the underlying quantum dynamics; the notion of quantum jump trajectories is still valid. Second, we extend the previous results of Perfetto et al. [32] to general counting statistics. Because our theory is fully
based on sMPs, previously unknown formulas are discovered. For simplicity of description, the counting statistics mentioned in the remainder of this paper always refer to the general statistics unless otherwise indicated. Finally, we introduce a continuous-time cloning algorithm (CTCA) to simulate the large deviation statistics of general time-extensive quantities in open quantum systems with resetting. The algorithm originally aimed to compute the scaled cumulant generating functions (SCGFs) of non-Markov classical jump processes [42; 43; 44]. Because the set of probability formulas for the composite stochastic process is obtained, the applications of this method in open quantum systems are natural, and its realization is also simple.
This paper is organized as follows. In Sec. (II), we briefly summarize the sMP method for determining the counting statistics of open quantum systems and extend it to a situation with arbitrary initial states. In Sec. (III), counting statistics with memoryless resetting are studied. We will see that the method is still available if the set of collapsed states is expanded to include the reset state. Section (IV) discusses a more complex case with memory resetting. Although quantum jump trajectories in this situation are no longer sMPs, by considering trajectories and using probability formula, the counting statistics can still be constructed by reset-free statistics. In particular, an exact tilted matrix equation is obtained. In Sec. (V), we introduce a continuous-time cloning algorithm to simulate the large-deviation statistics. In Sec. (VI), several quantum optics systems are used to demonstrate our results. Section (VII) concludes the paper.
## II SMP method for counting statistics
Let \(\rho(t)\) be the reduced density matrix of an open quantum system. Under appropriate conditions, the dynamics of the system is described by the MQME [45; 46; 47]
\[\partial_{t}\rho(t)=-{\rm i}[H,\rho(t)]+\sum_{\alpha=1}^{M}r_{\alpha}\left(A _{\alpha}\rho(t)A_{\alpha}^{\dagger}-\frac{1}{2}\left\{A_{\alpha}^{\dagger}A _{\alpha},\rho(t)\right\}\right)\equiv{\cal L}[\rho(t)], \tag{1}\]
where the Planck constant \(\hbar\) is set to 1, \(H\) denotes the Hamiltonian of the quantum system, \(A_{\alpha}\) is the Lindblad operator, and the nonnegative coefficients \(r_{\alpha}\) and \(\alpha=1,\cdots,M\) represent certain correlation characteristics of the environment surrounding the quantum system.
The MQME (1) can be unraveled into quantum jump trajectories [2; 34; 35; 36; 37; 38; 39]. These trajectories, which concern the evolutions of the wave functions of the single quantum systems, are composed of deterministic pieces and random collapses of the wave functions. The former are the solutions of nonlinear Schrödinger equations. The latter indicate that the systems have collapsed to fixed states \(\phi_{\alpha}\), \(\alpha=1,\cdots,M\), which are called the collapsed states in this paper. If we focus on these states and use random time intervals \(\tau\) to replace the deterministic pieces between successive collapses, the quantum jump trajectories can be seen as the realizations of an sMP [1]. The ingredients of the sMP include the waiting time densities (WTDs) and survival distributions (SDs) [40; 41], which are
\[p_{\alpha|\beta}^{0}(\tau) = r_{\beta}\parallel A_{\beta}e^{-{\rm i}\tau\hat{H}}\phi_{\alpha }\parallel^{2}, \tag{2}\] \[S_{\alpha}^{0}(\tau) = \parallel e^{-{\rm i}\tau\hat{H}}\phi_{\alpha}\parallel^{2}, \tag{3}\]
respectively [2]. Here, the non-Hermitian Hamiltonian is
\[\hat{H}=H-\frac{{\rm i}}{2}\sum_{\alpha=1}^{M}r_{\alpha}A_{\alpha}^{\dagger} A_{\alpha}. \tag{4}\]
Equation (2) is the probability density that the wave function starts from the collapsed state \(\phi_{\alpha}\), evolves continuously, and collapses into state \(\phi_{\beta}\) at time \(\tau\). Equation (3) is the probability that the wave function evolves continuously until time \(\tau\) without collapsing. In this paper, we always denote quantities defined or solved in the absence of resetting with a superscript or subscript 0, unless otherwise stated. It is useful to introduce the hazard functions of the sMP:
\[k_{\alpha|\beta}^{0}(\tau) = \frac{p_{\alpha|\beta}^{0}(\tau)}{S_{\alpha}^{0}(\tau)}. \tag{5}\]
Obviously, they are the conditional probability densities per unit time that the system collapses into \(\phi_{\beta}\) at time \(\tau\), given that the system has evolved continuously from state \(\phi_{\alpha}\) until time \(\tau\) without collapsing.
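For concreteness, Eqs. (2)-(5) can be evaluated numerically once \(H\), the Lindblad operators \(A_\alpha\), and the rates \(r_\alpha\) are specified. The following is only a minimal sketch (not the code used in this work); the function name and the two-level illustration at the end are our own choices.

```python
import numpy as np
from scipy.linalg import expm

def jump_statistics(H, A_list, r_list, phi_alpha, tau):
    """Evaluate Eqs. (2)-(5): the WTDs p^0_{alpha|beta}(tau), the SD S^0_alpha(tau),
    and the hazard functions k^0_{alpha|beta}(tau) for one collapsed state phi_alpha."""
    # non-Hermitian Hamiltonian of Eq. (4)
    H_eff = H - 0.5j * sum(r * A.conj().T @ A for r, A in zip(r_list, A_list))
    psi = expm(-1j * tau * H_eff) @ phi_alpha          # unnormalized evolved wave function
    S = np.vdot(psi, psi).real                         # survival distribution, Eq. (3)
    p = np.array([r * np.vdot(A @ psi, A @ psi).real   # waiting time densities, Eq. (2)
                  for r, A in zip(r_list, A_list)])
    return p, S, p / S                                 # hazard functions, Eq. (5)

# Illustration: two-level system driven on resonance, decay channel sigma_- only.
Omega, r_minus = 0.8, 1.0
sm = np.array([[0, 1], [0, 0]], dtype=complex)         # |0><1| in the basis (|0>, |1>)
H = -0.5 * Omega * (sm + sm.conj().T)
p, S, k = jump_statistics(H, [sm], [r_minus], np.array([1.0, 0.0], dtype=complex), tau=1.0)
print(p, S, k)
```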
A major application of the sMP perspective to open quantum systems is in counting statistics [5; 6; 7; 8]. These statistics concern time-extensive quantities
\[C[\vec{X}]=\sum_{i=1}^{N}\omega_{\alpha_{i-1}\alpha_{i}}. \tag{6}\]
Here, we denote the quantum jump trajectory with \(N\) collapses as
\[\vec{X}=(\phi_{\alpha_{1}},\phi_{\alpha_{2}},\cdots,\phi_{\alpha_{N}}), \tag{7}\]
where \(\phi_{\alpha_{i}}\) represents the collapsed state at time \(t_{i}\), \(i=1,\cdots,N\), and \(\omega_{\alpha_{i-1}\alpha_{i}}\) is a weight specified by the collapsed states at adjacent times \(t_{i-1}\) and \(t_{i}\), that is, the quantum states at the beginning and end times of a deterministic stage. The properties of the random variable (6) are characterized by the MGF [48]
\[M_{0}(\lambda,t)=\sum_{\vec{X}}{\cal P}[\vec{X}]e^{-\lambda C[\vec{X}]}, \tag{8}\]
where \({\cal P}[\vec{X}]\) represents the probability density of a quantum jump trajectory \(\vec{X}\). We set all the trajectories to start from a certain collapsed state. Note that the summation in Eq. (8) over quantum jump trajectories is only a shorthand notation, and its exact meanings include summing over all possible collapsed states at every time and performing time-ordered integrals at different times; see also Eq. (43) below.
The MGF (8) is obtained by first solving a tilted matrix equation in the complex frequency domain [1]:
\[{\bf G}_{0}(v)\hat{\bf P}_{0}(v)={\bf 1}_{\gamma}, \tag{9}\]
where the \(1\times M\) vector \(\hat{\bf P}_{0}^{T}=(\hat{P}_{1},\cdots,\hat{P}_{M})\) with the uppercase \(T\) denoting the transpose, \({\bf 1}_{\gamma}^{T}=(\delta_{1\gamma},\cdots,\delta_{M\gamma})\) and \(\delta_{\alpha\gamma}\) is the Kronecker symbol. Here, the initial collapsed state is set to \(\phi_{\gamma}\). Throughout this paper, we use a circumflex placed over a symbol to denote its Laplace transformation. The diagonal and nondiagonal elements of the matrix \({\bf G}_{0}\) are
\[\left[{\bf G}_{0}\right]_{\alpha\alpha} = \frac{1-\hat{p}_{\alpha|\alpha}^{0}(v)e^{-\lambda\omega_{\alpha\alpha}}}{\hat{S}_{\alpha}^{0}(v)}, \tag{10}\] \[\left[{\bf G}_{0}\right]_{\alpha\beta} = -\frac{\hat{p}_{\beta|\alpha}^{0}(v)}{\hat{S}_{\beta}^{0}(v)}e^{-\lambda\omega_{\beta\alpha}}\hskip 14.226378pt(\alpha\neq\beta), \tag{11}\]
respectively. Then, the Laplace transform of the MGF is
\[\hat{M}_{0}(\lambda,v)={\bf 1}^{T}\hat{\bf P}_{0}(v)={\bf 1}^{T}{\bf G}_{0}^{-1 }(v){\bf 1}_{\gamma}, \tag{12}\]
where \({\bf 1}^{T}\)=\((1,\cdots,1)\) is a \(1\times M\) vector. The last step is to take the inverse Laplace transform of Eq. (12) over the complex frequency \(v\) to obtain the MGF in the time domain. Equation (12) is itself useful since the SCGF of the large deviation of current \(j=C[\vec{X}]/t\) over a long time limit [49],
\[\varphi(\lambda) = \lim_{t\rightarrow\infty}\frac{1}{t}\ln M_{0}(\lambda,t), \tag{13}\]
can be obtained by finding its pole with the largest real part [15]. According to Eq. (9), it is also equal to the largest real root \(v\) of \(\det{\bf G}_{0}(v)=0\).
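As a rough illustration of how Eqs. (9)-(13) are used in practice, the sketch below assembles the tilted matrix from time-domain WTDs and SDs via a crude numerical Laplace transform and scans for the largest real root of \(\det\mathbf{G}_0(v)=0\). It is not the implementation used for the results reported here; the truncation time, the scan window, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def laplace(f, v, T=200.0):
    # crude numerical Laplace transform  \hat f(v) = \int_0^T f(t) e^{-v t} dt
    return quad(lambda t: f(t) * np.exp(-v * t), 0.0, T, limit=500)[0]

def tilted_matrix(v, lam, p0, S0, w):
    """G_0(v) of Eqs. (10)-(11).  p0[a][b] is the time-domain WTD p^0_{a|b}
    (start in state a, collapse into state b), S0[a] the SD of state a,
    and w[a][b] the weight omega_{ab}."""
    M = len(S0)
    S_hat = [laplace(S0[a], v) for a in range(M)]
    G = np.zeros((M, M))
    for a in range(M):                     # row index alpha
        for b in range(M):                 # column index beta (the "from" state)
            p_hat = laplace(p0[b][a], v)   # \hat p^0_{beta|alpha}(v)
            if a == b:
                G[a, a] = (1.0 - p_hat * np.exp(-lam * w[a][a])) / S_hat[a]
            else:
                G[a, b] = -p_hat * np.exp(-lam * w[b][a]) / S_hat[b]
    return G

def scgf(lam, p0, S0, w, v_lo=-0.4, v_hi=3.0, steps=300):
    """SCGF = largest real root of det G_0(v) = 0 (cf. Eq. (13) and the text below it)."""
    f = lambda v: np.linalg.det(tilted_matrix(v, lam, p0, S0, w))
    vs = np.linspace(v_hi, v_lo, steps)
    for v1, v2 in zip(vs[:-1], vs[1:]):
        if f(v1) * f(v2) < 0:              # first sign change when scanning downwards
            return brentq(f, v2, v1)
    raise RuntimeError("no sign change of det G_0(v) in the scanned interval")
```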
### Arbitrary initial states
We extend the previous results to a situation in which the quantum jump trajectories start with a quantum state \(|A\rangle\) that does not belong to the set of collapsed states [50]. For an "autonomous" quantum system that starts with such a state, after the first collapse all subsequent collapsed states are in the set of collapsed states; that is, the system will never collapse to \(|A\rangle\) again. Because the quantum jump trajectories are still a sMP, we expand the previous vector and matrix to the \(1\times(M+1)\) vector \(\hat{\bf P}_{0A}^{T}=(\hat{P}_{A},\hat{P}_{1},\cdots,\hat{P}_{M})\) and the \((M+1)\times(M+1)\) matrix \({\bf G}_{0A}(v)\), respectively. Note that we use the subscript \(A\) to indicate the arbitrary initial state. The elements of the matrix are
\[\left[{\bf G}_{0A}\right]_{00}=\frac{1}{\hat{S}_{A}^{0}(v)},\hskip 56.905512pt\left[{\bf G}_{0A}\right]_{0\beta}=0, \tag{14}\] \[\left[{\bf G}_{0A}\right]_{\alpha 0}=-\frac{\hat{p}_{A|\alpha}^{0}(v)}{\hat{S}_{A}^{0}(v)}e^{-\lambda\omega_{A\alpha}},\hskip 56.905512pt[{\bf G}_{0A}]_{\alpha\beta}=[{\bf G}_{0}]_{\alpha\beta}(v). \tag{15}\]
In these equations, \(p^{0}_{A|\alpha}\) and \(S^{0}_{A}\) are given by formulas analogous to Eqs. (2) and (3), except that the subscripts \(\alpha\) and \(\beta\) are replaced by \(A\) and \(\alpha\), respectively. A tilted matrix equation analogous to Eq. (9), now involving \(\mathbf{G}_{0A}\) and \(\hat{\mathbf{P}}_{0A}\), is valid as well. Therefore, the Laplace transform of the MGF with the special initial state is
\[\hat{M}_{0A}(\lambda,v)=\mathbf{1}^{T}\hat{\mathbf{P}}_{0A}(v)=\mathbf{1}^{T} \mathbf{G}_{0A}^{-1}(v)\mathbf{1}_{A}. \tag{16}\]
Here, both \(\mathbf{1}^{T}\) and \(\mathbf{1}^{T}_{A}=(1,0,\cdots,0)\) are \(1\times(M+1)\) vectors. Because the determinants of the tilted matrices \(\mathbf{G}_{0}\) and \(\mathbf{G}_{0A}\) are the same, an arbitrary initial state does not alter the large deviation properties of open quantum systems.
## III Counting statistics with memoryless resetting
Let us start with the resetting case in which resetting occurs at a constant rate \(K\) and is independent of previous quantum states. This is a special case of more general memory resetting. However, we will prove that this case can
Figure 1: Schematic diagrams of two quantum jump trajectories of a quantum system with memoryless resetting (a) and memory resetting (b). The long red vertical lines represent the times at which resetting occurs, while the short black vertical lines represent the times at which collapses of wave functions occur. In Panel (a), the times are indicated by \(t_{i}\). The collapsed states and the reset state at these times are denoted by \(\phi_{\alpha_{i}}\) and \(|R\rangle_{i}\), respectively, \(i=1,2,\cdots,N\). There are four types of combinations of different beginning and end states. We mark them in the panel by horizontal short lines with double arrows. Their time intervals are uniformly labeled as \(\tau\). In Panel (b), we denote the resetting times as \(T_{I}\), \(I=1,\cdots,N\), while the times at which the quantum system collapses are labeled as \(t_{I,i}\). The subscripts indicate the \(i\)-th collapse following the \(I\)-th resetting, where \(i=1,2,\cdots\). Accordingly, the collapsed states are denoted as \(\phi_{\alpha(I,i)}\). There are also four types of combinations of different beginning and end states. We still denote them with horizontal lines with double arrows. Because the time interval \(\tau\) is inadequate to characterize the memory effects on the two combinations, we additionally define \(T\), which is the time interval from the first collapsed state to the last resetting.
be studied by simply modifying the reset-free results. Throughout this paper, we only consider one type of resetting, and quantum systems are always reset to \(|\phi_{R}\rangle\). This resetting quantum state may be one of the collapsed states. In addition, we initialize all quantum trajectories to the reset state.
Because resetting is memoryless, the quantum jump trajectories in the presence of resetting are still sMPs. This point is shown schematically in Fig. (1)(a). At time \(t_{2}\), resetting occurs. Then, the evolution of the quantum system that starts from the collapsed state \(\phi_{\alpha_{3}}\) at time \(t_{3}\) is not affected by the previous wave function history, including the time interval \(t_{3}-t_{2}\) and the value of \(t_{2}\). Therefore, we naturally think of the quantum jump trajectories as having an extended set of collapsed states; that is, the resetting quantum state \(|R\rangle\) is included. On the other hand, we emphasize that resetting indeed leads to new SDs and WTDs and modifies the existing ones:
\[S_{\alpha}(\tau)=e^{-\int_{0}^{\tau}\sum_{\beta=1}^{M}k_{\alpha| \beta}^{0}ds-K\tau}=S_{\alpha}^{0}(\tau)e^{-K\tau}, \tag{17}\] \[S_{R}(\tau)=e^{-\int_{0}^{\tau}\sum_{\beta=1}^{M}k_{R|\beta}^{0 }ds-K\tau}=S_{R}^{0}(\tau)e^{-K\tau}, \tag{18}\]
and
\[p_{\alpha|\beta}(\tau)=k_{\alpha|\beta}^{0}(\tau)S_{\alpha}( \tau)=p_{\alpha|\beta}^{0}(\tau)e^{-K\tau}, \tag{19}\] \[p_{\alpha|R}(\tau)=KS_{\alpha}(\tau)=S_{\alpha}^{0}(\tau)Ke^{-K \tau},\] (20) \[p_{R|\beta}(\tau)=k_{R|\beta}^{0}(\tau)S_{R}(\tau)=p_{R|\beta}^{ 0}(\tau)e^{-K\tau},\] (21) \[p_{R|R}(\tau)=KS_{R}(\tau)=S_{R}^{0}(\tau)Ke^{-K\tau}. \tag{22}\]
With Eqs. (17)-(22), the MGF in the presence of memoryless resetting is calculated in the same way as in the previous reset-free case. A difference is that in the current case, the tilted matrix \(\mathbf{G}\) is \((M+1)\times(M+1)\), the elements of which are
\[[\mathbf{G}]_{00}=\frac{1}{\hat{S}_{R}^{0}(v+K)}-K,\hskip 28.452756pt[ \mathbf{G}]_{0\beta}=-K,\] \[[\mathbf{G}]_{\alpha 0}=-\frac{\hat{p}_{R|\alpha}^{0}(v+K)}{\hat{S}_{R}^{0 }(v+K)}e^{-\lambda\omega_{R\alpha}},\hskip 5.690551pt[\mathbf{G}]_{\alpha \beta}=[\mathbf{G}_{0}]_{\alpha\beta}(v+K).\]
Because we are not interested in the number of resets, we set the weights \(\omega_{RR}\) and \(\omega_{\alpha R}\) to zero. Obviously, the tilted matrices defined thus far have a simple relationship:
\[\mathbf{G}(v)=\mathbf{G}_{0R}(v+K)-K\mathbf{\Pi}, \tag{23}\]
where the matrix elements of \(\mathbf{G}_{0R}\) are given by Eqs. (14) and (15) with the subscript \(A\) replaced by \(R\), and \(\mathbf{\Pi}=(\mathbf{1}_{R},\cdots,\mathbf{1}_{R})\) is an \((M+1)\times(M+1)\) square matrix.
Analogous to the reset-free case, the Laplace transform of the MGF in the presence of memoryless resetting is \(\hat{M}(\lambda,v)=\mathbf{1}^{T}\hat{\mathbf{P}}(v)\), where the vector \(\hat{\mathbf{P}}^{T}=(\hat{P}_{R},\hat{P}_{1},\cdots,\hat{P}_{M})\) satisfies the tilted matrix equation
\[\mathbf{G}(v)\hat{\mathbf{P}}(v)\ =\ \mathbf{1}_{R}. \tag{24}\]
Substituting Eq. (23) into Eq. (24), multiplying both sides of the equation by \(\mathbf{1}^{T}\mathbf{G}_{0R}^{-1}(v+K)\) from the left-hand side, and using Eq. (16), we arrive at
\[\hat{M}(\lambda,v)=\frac{\hat{M}_{0R}(\lambda,v+K)}{1-K\hat{M}_{0R}(\lambda,v+ K)}. \tag{25}\]
This result indicates that the MGF in the presence of memoryless resetting with reset state \(|R\rangle\) is related to the MGF in the absence of resetting but with the special initial state \(|R\rangle\). Equation (25) also implies that the SCGF of the former case can be obtained by finding the largest real root of the following algebraic equation with the parameter \(v\):
\[K\hat{M}_{0R}(\lambda,v+K)-1=0. \tag{26}\]
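Given any routine that returns the reset-free MGF \(\hat M_{0R}(\lambda,v)\) of Eq. (16), the SCGF with memoryless resetting follows from Eq. (26) by one-dimensional root finding. A minimal sketch (the scan bounds and the function names are our assumptions):

```python
import numpy as np
from scipy.optimize import brentq

def scgf_memoryless_reset(lam, K, M0R_hat, v_lo=-0.4, v_hi=3.0, steps=300):
    """Largest real root v of  K * \hat M_{0R}(lam, v + K) - 1 = 0  (Eq. (26)).
    M0R_hat(lam, v) must return the reset-free MGF in the Laplace domain, Eq. (16)."""
    f = lambda v: K * M0R_hat(lam, v + K) - 1.0
    vs = np.linspace(v_hi, v_lo, steps)        # scan from above to catch the largest root
    for v1, v2 in zip(vs[:-1], vs[1:]):
        if f(v1) * f(v2) < 0:
            return brentq(f, v2, v1)
    raise RuntimeError("no root of Eq. (26) found in the scanned interval")
```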
## IV Counting statistics with memory resetting
We move to a more complex case with memory resetting. Unlike the previous case with memoryless resetting, a reset affects the evolution of wave functions even if the quantum system restarts from an independent collapsed
state after resetting. We illustrate this point in Fig. (1)(b). Assume that a reset and a subsequent collapse to the state \(\phi_{\alpha(2,1)}\) occur at \(T_{2}\) and \(t_{2,1}\), respectively. Because of memory, the probability of the continuous evolution of the quantum system that restarts from this collapsed state depends on the time interval \(t_{2,1}-T_{2}\).
This characteristic can be indicated by precise formulas. First, we let the hazard function of memory resetting be \(\mathcal{K}(\tau)\). The corresponding WTD \(\mathcal{Q}(\tau)\) of the memory resetting process is
\[\mathcal{Q}(\tau)=\mathcal{K}(\tau)\mathcal{S}(\tau), \tag{27}\]
and the SD is \(\mathcal{S}(\tau)=\exp[-\int_{0}^{\tau}\mathcal{K}(s)ds]\). Then, we can write the WTDs and SDs of the quantum jump trajectories in the presence of memory resetting in an analogous way to Eqs. (17)-(22):
\[S_{\alpha}(\tau,T)=e^{-\int_{0}^{\tau}\sum_{\beta=1}^{M}k_{\alpha|\beta}^{0}ds-\int_{T}^{T+\tau}\mathcal{K}(s)ds}=S_{\alpha}^{0}(\tau)\frac{\mathcal{S}(T+\tau)}{\mathcal{S}(T)},\] \[S_{R}(\tau)=S_{R}^{0}(\tau)e^{-\int_{0}^{\tau}\mathcal{K}(s)ds}=S_{R}^{0}(\tau)\mathcal{S}(\tau), \tag{28}\]
and the WTDs are
\[p_{\alpha|\beta}(\tau,T)=k_{\alpha|\beta}^{0}(\tau)S_{\alpha}( \tau,T)=p_{\alpha|\beta}^{0}(\tau)\frac{\mathcal{S}(T+\tau)}{\mathcal{S}(T)}, \tag{29}\] \[p_{\alpha|R}(\tau,T)=\mathcal{K}(\tau+T)S_{\alpha}(\tau,T)=S_{ \alpha}^{0}(\tau)\frac{\mathcal{Q}(T+\tau)}{\mathcal{S}(T)},\] (30) \[p_{R|\beta}(\tau)=k_{R|\beta}^{0}(\tau)S_{R}(\tau)=p_{R|\beta}^{ 0}(\tau)\mathcal{S}(\tau),\] (31) \[p_{R|R}(\tau)=\mathcal{K}(\tau)S_{R}(\tau)=S_{R}^{0}(\tau) \mathcal{Q}(\tau). \tag{32}\]
In contrast to the case with memoryless resetting, we specifically introduce the time parameter \(T\), which denotes the time interval between a collapsed state and the most recent previous reset; see Fig. (1)(b). Obviously, the breakdown of the sMP property is due to the time dependence of the hazard function \(\mathcal{K}(\tau)\). That is, if this function were constant, these SDs and WTDs would reduce to Eqs. (17)-(22).
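Equations (28)-(32) contain everything needed to sample quantum jump trajectories with memory resetting, for example inside the cloning algorithm of Sec. (V). Below is a deliberately simple first-order sketch that draws the next event by discretizing the total hazard; the step size, the return convention, and the function names are our own assumptions rather than the scheme actually used in this work.

```python
import numpy as np

def next_event(rng, hazards_0, K_reset, T0, dt=1e-3, t_max=1e3):
    """Draw the next event for a deterministic piece that starts in some state.

    hazards_0(tau) -> array of reset-free hazards k^0_{.|beta}(tau) for all beta (Eq. (5))
    K_reset(t)     -> resetting hazard, evaluated at the total time since the last reset
    T0             -> time already elapsed since the last reset (the memory variable T)

    Returns (tau, which), with which = index of the collapse channel or 'reset'.
    This is a first-order discretization of Eqs. (28)-(32); accuracy is set by dt.
    """
    tau = 0.0
    while tau < t_max:
        tau += dt
        k = np.asarray(hazards_0(tau))            # collapse hazards are unchanged by resetting
        kr = K_reset(T0 + tau)                    # memory resetting hazard
        rates = np.append(k, kr)
        if rng.random() < rates.sum() * dt:       # an event fires in (tau, tau + dt]
            which = rng.choice(len(rates), p=rates / rates.sum())
            return tau, ('reset' if which == len(rates) - 1 else int(which))
    return t_max, None                            # survived the whole window
```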
Although the quantum jump trajectories in the presence of memory resetting are no longer a sMP, the MGF (8) defined by the probabilities of the trajectories is still valid. Of course, we need to modify the notation of the trajectories to
\[\vec{X}_{N}=(|R\rangle_{1},\phi_{\alpha(1,1)},\cdots,|R\rangle_{2},\phi_{ \alpha(2,1)},\cdots,|R\rangle_{N},\phi_{\alpha(N,1)},\cdots), \tag{33}\]
where the total number of resets is \(N\), and \(|R\rangle_{I}\) and \(\phi_{\alpha(I,i)}\) denote the reset state and collapsed state at times \(T_{I}\) and \(t_{I,i}\), respectively, \(I=1,\cdots,N\), and \(i=1,\cdots\). On the other hand, we can apply Eqs. (28)-(32) to explicitly write the probability density of an arbitrary trajectory, e.g., that in Fig. (1)(b):
\[p_{R|\alpha(1,1)}(t_{1,1}-T_{1})p_{\alpha(1,1)|R}(T_{2}-t_{1,1})p_{R|\alpha(2,1)}(t_{2,1}-T_{2})p_{\alpha(2,1)|\alpha(2,2)}(t_{2,2}-t_{2,1},t_{2,1}-T_{2})\cdots\] \[p_{R|\alpha(N,1)}(t_{N,1}-T_{N})S_{\alpha(N,1)}(t-t_{N,1},t_{N,1}-T_{N}). \tag{34}\]
However, unless we are simulating quantum jump trajectories, as in Sec. (V), these complicated probability formulas are not very useful.
According to probability theory, the probability density of the quantum jump trajectory (33) is equal to the product of the probability density \(\mathcal{P}[\vec{R}_{N}]\) of observing the time series of resets,
\[\vec{R}_{N}=(|R\rangle_{1},|R\rangle_{2},\cdots,|R\rangle_{N}), \tag{35}\]
and the conditional probability density of observing the time series of the collapsed states given the time series of resets. Importantly, once a reset has occurred, the segment of the quantum jump trajectory between this reset and the next one is independent of the history before this reset. Hence, we formally write the probability density of the quantum jump trajectory (33) as
\[\mathcal{P}[\vec{X}_{N}]=\mathcal{P}[\vec{R}_{N}]\prod_{I=1}^{N} \mathcal{P}_{0}[\vec{X}_{N}^{I}], \tag{36}\]
where \(\mathcal{P}_{0}[\vec{X}_{N}^{I}]\) is the conditional probability density of a segment of the quantum jump trajectory
\[\vec{X}_{N}^{I}=(|R\rangle_{I},\phi_{\alpha(I,1)},\phi_{\alpha(I,2)},\cdots, \phi_{\alpha(I,M_{I})}) \tag{37}\]
given that the \(I\)-th and \(I+1\)-th resets happened. Here, we explicitly set the number of collapses in the segment to \(M_{I}\). Obviously, the entire quantum jump trajectory is a combination of these segments; that is,
\[\vec{X}_{N}=(\vec{X}_{N}^{1},\cdots,\vec{X}_{N}^{N}). \tag{38}\]
Using the WTD and SD of the resetting process, we have
\[\mathcal{P}[\vec{R}_{N}]=\mathcal{Q}(T_{2}-T_{1})\cdots\mathcal{Q}(T_{N}-T_{N- 1})\mathcal{S}(t-T_{N}). \tag{39}\]
On the other hand, the conditional probability density \(\mathcal{P}_{0}[\vec{X}_{N}^{I}]\) is simply the probability density of the quantum jump trajectory (37) of a reset-free quantum system with the special initial quantum state \(|R\rangle\). This case was discussed in Sec. (II.1). Therefore, we have
\[\mathcal{P}_{0}[\vec{X}_{N}^{I}]=p_{R|\alpha(I,1)}^{0}(t_{I,1}-T_{I})p_{\alpha (I,1)|\alpha(I,2)}^{0}(t_{I,2}-t_{I,1})\cdots S_{\alpha(I,M_{I})}^{0}(T_{I+1}- t_{I,M_{I}}). \tag{40}\]
Before achieving the desired MGF in the presence of memory resetting, we need to rewrite the time-extensive quantity (6) as
\[C[\vec{X}_{N}]=\sum_{I=1}^{N}C[\vec{X}_{N}^{I}]. \tag{41}\]
The reason is simply that we have already set the weights \(\omega_{\alpha R}\) and \(\omega_{RR}\) to zero. Substituting Eqs. (36) and (41) into the definition of the MGF, Eq. (8), we have
\[M(\lambda,t) = \sum_{N=1}^{\infty}\sum_{\vec{X}_{N}}\mathcal{P}[\vec{R}_{N}]\prod_{I=1}^{N}e^{-\lambda C[\vec{X}_{N}^{I}]}\mathcal{P}_{0}[\vec{X}_{N}^{I}] \tag{42}\] \[= \sum_{N=1}^{\infty}\sum_{\vec{R}_{N}}\mathcal{P}[\vec{R}_{N}]\sum_{\vec{X}_{N}^{1}}\cdots\sum_{\vec{X}_{N}^{N}}\prod_{I=1}^{N}e^{-\lambda C[\vec{X}_{N}^{I}]}\mathcal{P}_{0}[\vec{X}_{N}^{I}]\] \[= \sum_{N=1}^{\infty}\sum_{\vec{R}_{N}}\mathcal{P}[\vec{R}_{N}]\prod_{I=1}^{N}\left(\sum_{\vec{X}_{N}^{I}}e^{-\lambda C[\vec{X}_{N}^{I}]}\mathcal{P}_{0}[\vec{X}_{N}^{I}]\right).\]
We see that the term in parentheses in the last equality is simply the MGF \(M_{0R}(\lambda,T_{I+1}-T_{I})\) discussed in Sec. (II.1). Note that the summation over the time series of resetting, Eq. (35), is in fact a shorthand notation for the time-ordered integrals at different times:
\[\sum_{\vec{R}_{N}}\equiv\int_{0}^{t}dT_{2}\int_{T_{2}}^{t}dT_{3}\cdots\int_{T_{N-1}}^{t}dT_{N}. \tag{43}\]
The complex Eq. (42) can be dramatically simplified if we take its Laplace transform, and we arrive at
\[\hat{M}(\lambda,v)=\frac{(\hat{\mathcal{S}}*\hat{M}_{0R})(\lambda,v)}{2\pi \mathrm{i}-(\hat{\mathcal{Q}}*\hat{M}_{0R})(\lambda,v)}, \tag{44}\]
where the asterisks represent convolutions and Eq. (40) is used.
Equation (44) has two consequences. First, we can obtain the SCGF in the presence of memory resetting by finding the largest real root of an algebraic equation with the parameter \(v\):
\[\frac{1}{2\pi\mathrm{i}}(\hat{\mathcal{Q}}*\hat{M}_{0R})(\lambda,v)-1=0. \tag{45}\]
Second, Eq. (44) can be interpreted as a result of a matrix equation analogous to Eq. (24), but the tilted matrix therein is updated to
\[\mathbf{G}(v)=\left[\left(\hat{\mathcal{S}}*\mathbf{G}_{0R}^{-1}\right)(v) \right]^{-1}\left[2\pi\mathrm{i}-\left(\hat{\mathcal{Q}}*\mathbf{G}_{0R}^{-1} \right)(v)\mathbf{\Pi}\right]. \tag{46}\]
We may simply verify Eqs. (44)-(46) by applying them to the memoryless resetting case: because the WTD and SD of such a resetting process are exponential decay functions with rate constant \(K\), the convolutions therein are proportional to \(\hat{M}_{0R}(v+K)\) and \(\mathbf{G}_{0R}^{-1}(v+K)\), and Eqs. (23), (25), and (26) are rederived.
We close this section by commenting on differences between our theory and that of Perfetto et al. [32]. They obtained Eq. (44) by a method that is entirely based on the TQME [7; 9; 13]. Although our tilted matrix equation (9) was proven to be equivalent to the TQME, this consistency only holds for a special type of time-extensive quantity, i.e., the weights that depend only on the second collapse of a pair of collapsed states [1]. From this perspective, our results are more general than the previous ones. On the other hand, the mathematics we are using is essentially classical probability theory. In contrast, Perfetto et al. [32] applied a hierarchy of equations about conditional density matrices. The probability meanings in their method are not very direct. In the next section, we will show that the classical probability formulas are very useful when we attempt to simulate SCGFs of open quantum systems, either with or without resetting.
## V Continuous-time cloning algorithm
The algebraic Eq. (26) or the more general algebraic Eq. (45) provides us with a way of calculating SCGFs of open quantum systems with resetting. The first step is to solve the reset-free MGF \(\hat{M}_{0R}(\lambda,v)\) with the special initial state \(|R\rangle\). The next step is to solve the algebraic equations. In general, these two steps are implemented numerically. On the other hand, Cavallaro and Harris [42] developed a continuous-time cloning algorithm (CTCA) to simulate SCGFs of an arbitrary classical non-Markov process. In this paper, we do not review the CTCA of Cavallaro and Harris. Interested readers are referred to the original article [42]. Because we have established the sMP theoretical framework for quantum jump trajectories either with or without resetting, which is a special case of non-Markov processes, we can directly apply their algorithm to the current situation. The key ingredients of the simulation are presented in Eqs. (28)-(32).
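To indicate how the CTCA proceeds, the following is a strongly simplified population-dynamics sketch: each clone carries a trajectory sampled from the reset-inclusive WTDs/SDs (e.g., with an event sampler like the one sketched in Sec. (IV)), accumulates a bias \(e^{-\lambda\,\Delta C}\) per event, and the population is resampled at regular intervals; the SCGF is estimated from the logarithmic growth of the average weight. This is only a schematic illustration of the algorithm of Refs. [42; 43; 44], not a faithful reimplementation, and all names and default parameters are placeholders.

```python
import numpy as np

def cloning_scgf(step, init_state, lam, n_clones=1000, t_sim=500.0, t_resample=1.0, seed=0):
    """Schematic cloning estimate of the SCGF phi(lambda).

    step(state, rng) must advance one clone by one stochastic event and return
    (dt, dC, new_state): the elapsed time, the increment of the counting
    observable C, and the updated clone state.
    """
    rng = np.random.default_rng(seed)
    clones = [init_state] * n_clones
    log_growth, t = 0.0, 0.0
    while t < t_sim:
        weights = np.zeros(n_clones)
        for i in range(n_clones):                    # evolve each clone over one interval
            elapsed, logw, s = 0.0, 0.0, clones[i]
            while elapsed < t_resample:
                dt, dC, s = step(s, rng)
                elapsed += dt
                logw += -lam * dC                    # exponential bias e^{-lambda dC}
            clones[i], weights[i] = s, np.exp(logw)
        log_growth += np.log(weights.mean())         # record the population growth factor
        idx = rng.choice(n_clones, size=n_clones, p=weights / weights.sum())
        clones = [clones[i] for i in idx]            # resample clones by their weights
        t += t_resample
    return log_growth / t                            # estimate of phi(lambda)
```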
## VI Several examples
### Reset-free resonant two-level system
We first give an example of the CTCA by simulating SCGFs in a reset-free open quantum system. To the best of our knowledge, applications of the algorithm in open quantum systems are rare in the literature. We choose a resonant two-level system (TLS) whose SCGF has an exact expression [1]. The quantum system is driven by a resonant field and surrounded by an environment with inverse temperature \(\beta\). In the interaction picture, the MQME of the TLS [51] is
\[\partial_{t}\rho(t) = -\mathrm{i}\left[H,\rho(t)\right]+r_{-}[\sigma_{-}\rho(t)\sigma_ {+}-\frac{1}{2}\{\sigma_{+}\sigma_{-},\rho(t)\}] \tag{47}\] \[+r_{+}[\sigma_{+}\rho(t)\sigma_{-}-\frac{1}{2}\{\sigma_{-}\sigma_ {+},\rho(t)\}].\]
Here, \(H=-\Omega(\sigma_{-}+\sigma_{+})/2\) represents the interaction Hamiltonian between the system and the resonant field, \(\sigma_{\pm}\) are the raising and lowering Pauli operators, \(\Omega\) is the Rabi frequency, and \(r_{\pm}\) are the pumping and damping rates. The two rates satisfy the detailed balance condition, \(r_{-}=r_{+}\exp{(\beta\omega_{0})}\), and \(\omega_{0}\) is the energy level difference of the two-level system. There are two collapsed states: the ground state \(|0\rangle\) and the excited state \(|1\rangle\). We select heat production as a time-extensive quantity, the weights of which are
\[\{\omega_{00},\omega_{01},\omega_{10},\omega_{11}\}\rightarrow\{\omega_{0},- \omega_{0},\omega_{0},-\omega_{0}\}. \tag{48}\]
Because the WTDs and SDs of the simple system are exactly known [1], the CTCA is easily implemented. Figure (2) shows the simulated SCGFs under two sets of parameters. For comparison, exact numeric data are shown in the same figure. We see that their agreement is very satisfactory.
### Memoryless resetting
In the presence of resetting, we focus on a TLS in a vacuum, i.e., \(r_{+}=0\) in Eq. (47). A similar system was considered by Perfetto et al. [31]. Unlike the previous discussions, which fully rely on numerical schemes, we keep the treatment
as analytically accessible as possible. Here, there is only one collapsed state, the ground state \(|0\rangle\). Let the reset state be
\[|R\rangle=a|0\rangle+b|1\rangle. \tag{49}\]
For simplicity, the parameters \(a\) and \(b\) are assumed to be real numbers. We select the time-extensive quantity (6) as the number of collapses to the ground state; that is, the unique weight \(\omega_{00}=1\). Performing direct calculations, we obtain the Laplace transform of the reset-free MGF with the initial state (49):
\[\hat{M}_{0R}(\lambda,v) = \hat{P}_{R}(v)+\hat{P}_{0}(v) \tag{50}\] \[= \hat{S}_{R}^{0}(v)+\frac{\hat{S}_{0}^{0}(v)}{1-e^{-\lambda}\hat{p}_{0|0}^{0}(v)}\hat{p}_{R|0}^{0}(v)e^{-\lambda},\]
Figure 2: The solid and dashed curves are the SCGFs simulated by the CTCA. The open squares and circles are the exact numeric data calculated by Eq. (79) in Ref. [1]. Two sets of parameters are used: for the solid curve and squares, \(r_{-}=1\), \(r_{+}=0.5\), \(\Omega=0.8\), and \(\omega_{0}=1\); for the dashed curve and circles, \(r_{-}=1\), \(r_{+}=0.0\), \(\Omega=0.8\), and \(\omega_{0}=1\). The latter concerns a TLS in a vacuum. These parameters were also applied in a previous paper; see Fig. (2) therein [1]. In the simulations, the number of clones is 2000, and the simulation time is 1500.
where the reset-free WTDs and SDs with the special initial states \(|R\rangle\) and \(|0\rangle\) are connected by
\[\hat{S}_{R}^{0}(v) = \hat{S}_{0}^{0}(v)-b^{2}\frac{r_{-}}{\xi^{2}+4\mu^{2}} \tag{51}\] \[= \frac{\xi^{2}+r_{-}(\xi+r_{-}/2)/2+4\mu^{2}}{\xi(\xi^{2}+4\mu^{2}) }-b^{2}\frac{r_{-}}{\xi^{2}+4\mu^{2}},\] \[\hat{p}_{R|0}^{0}(v) = \hat{p}_{0|0}^{0}(v)+b^{2}\frac{r_{-}(\xi-r_{-}/2)}{\xi^{2}+4\mu^ {2}},\] (52) \[= \frac{r_{-}\Omega^{2}}{2\xi(\xi^{2}+4\mu^{2})}+b^{2}\frac{r_{-}( \xi-r_{-}/2)}{\xi^{2}+4\mu^{2}},\]
respectively, where \(\xi\equiv v+r_{-}/2\) and \(16\mu^{2}\equiv 4\Omega^{2}-r_{-}^{2}\) (\(>0\)). If \(b^{2}\) is equal to zero, Eq. (50) is the Laplace transform of the reset-free MGF \(\hat{M}_{0}(\lambda,v)\).
To calculate the SCGF of large deviations of the open quantum system with memoryless resetting, we substitute Eq. (50) into Eq. (26) and simplify to obtain an algebraic equation involving \(v\):
\[\zeta^{3}(v)-K\zeta^{2}(v)+\left[4\mu^{2}-\frac{1}{2}Kr_{-}+b^{2} Kr_{-}\left(1-e^{-\lambda}\right)\right]\zeta(v)-\Omega^{2}\left(K+\frac{1}{2}r_{- }e^{-\lambda}\right)=0, \tag{53}\]
where \(\zeta(v)=v+r_{-}/2+K\). If the rate \(K\) is zero, Eq. (53) reduces to the cubic equation for the SCGF of the reset-free TLS in a vacuum [Eq. (C) in Ref. [1]] and does not involve the parameter \(b\). This is expected: a zero rate means that resetting is absent from the dynamics of the quantum system, and the long-time behavior of the system is then independent of the initial quantum state. The SCGF is obtained by finding the largest real root of Eq. (53). This is a cubic algebraic equation and has an exact solution given by Cardano's formula [52]. Considering that the formula is slightly lengthy, in Fig. (3)(a) we only show its exact numerical values under several sets of parameters and compare them with the data simulated by the CTCA. We see that the two methods are indeed consistent with each other.
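Because Eq. (53) is cubic in \(\zeta\), its largest real root can also be obtained directly with a numerical polynomial solver instead of Cardano's formula. A short sketch (default parameters follow Fig. (3); the function name is ours):

```python
import numpy as np

def scgf_memoryless_tls(lam, K, r=1.0, Omega=0.8, b2=1.0):
    """SCGF of the TLS in a vacuum with memoryless resetting, from Eq. (53)."""
    mu2 = (4.0 * Omega**2 - r**2) / 16.0            # 16 mu^2 = 4 Omega^2 - r^2
    coeffs = [1.0,
              -K,
              4.0 * mu2 - 0.5 * K * r + b2 * K * r * (1.0 - np.exp(-lam)),
              -Omega**2 * (K + 0.5 * r * np.exp(-lam))]
    zeta = np.roots(coeffs)
    zeta_max = max(z.real for z in zeta if abs(z.imag) < 1e-10)   # largest real root
    return zeta_max - r / 2.0 - K                   # phi(lambda) = v = zeta - r/2 - K

print(scgf_memoryless_tls(lam=0.0, K=1.0))          # should be ~0 at lambda = 0
```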
Equation (53) provides us with intriguing information about the statistics of the counting current \(j\). For instance, through it, we can derive analytical expressions for the first and second derivatives of the SCGF at \(\lambda=0\):
\[\varphi^{\prime}(0) = \zeta^{\prime}(0)=\frac{r_{-}[\Omega^{2}+2b^{2}K\zeta(0)]}{8\mu^ {2}-Kr_{-}-4K\zeta(0)+6\zeta(0)^{2}}, \tag{54}\] \[\varphi^{\prime\prime}(0) = \frac{\Omega^{2}r_{-}+4(K-3\zeta(0))\zeta^{\prime}(0)^{2}+2b^{2} Kr_{-}[\zeta(0)+2\zeta^{\prime}(0)]}{8\mu^{2}+6\zeta(0)^{2}-K(r_{-}+4\zeta(0))}, \tag{55}\]
where \(\zeta(0)=K+r_{-}/2\). According to the large deviation theory [49], Eq. (54) is the mean current, while Eq. (55) indicates the fluctuation of the current in the long time limit (the coefficient of diffusivity). In Fig. (3)(b), we show their values at different resetting rates with different reset states. We see that at larger \(K\) values, all of them tend toward certain values. At smaller and intermediate \(K\) values, however, their behaviors are diverse and depend on the concrete values of the parameters.
It is not trivial to present concise and clear explanations for the various behaviors of the mean and fluctuation of the counting current under general parameters, e.g., the nonmonotonic phenomena of the fluctuation with \(b^{2}=0\) and mean current with \(b^{2}=1/2\) in Fig. (3)(b). These complexities arise from mutual matching and/or competition among many factors, including the resetting frequency, different WTDs with respect to different reset states or collapsed states, etc. Hence, the following discussion is restricted to several of the simplest cases. We know that in the quantum jump trajectories, the initial state of the deterministic quantum processes is either a reset state or a collapsed state (here, only the ground state). It takes a certain amount of time to evolve from these quantum states to collapse [2]. We can prove that the higher the probability (\(b^{2}\)) of the excited state in these quantum states is, the shorter the time [53]. When the probability is negligible in the reset state, resetting only decreases the rate of collapse. This is because resetting interrupts the deterministic processes from the collapsed state to the next collapse. Furthermore, the time required for the deterministic process from this reset state to the next collapse is almost the same as the time required for the previous quantum process. Therefore, the more frequently the quantum system resets, the smaller the rate at which the system collapses. In contrast, if the probability of the excited state is dominant in the reset state, resetting will increase the rate of collapse. Although resetting indeed interrupts the deterministic processes from the collapsed state to the next collapse, the time required for the deterministic process from the reset state to the next collapse is shorter than that of the previous quantum process. Overall, it still shortens the time interval between the two collapses. Therefore, the more frequently the quantum system resets, the higher the rate at which the system collapses. Finally, if the resetting rate is so frequent that resetting even interrupts the deterministic processes from the previous reset state to the next collapse, it seems that the quantum system is almost "frozen" at the reset state.
The probability of a collapse from the reset state during a small time interval \(\Delta t\) is proven to be \(b^{2}r_{-}\Delta t\) [54]. In this situation, the quantum jump trajectories are close to a Poisson counting process with constant rate \(b^{2}r_{-}\). Accordingly, the fluctuation is equal to \(b^{2}r_{-}\). These discussions qualitatively explain the asymptotic behaviors of all curves at very large \(K\) values and the mean current curves with \(b^{2}=0,1\) at smaller \(K\) values in Fig. (3)(b).
The previous discussions can be treated in a quantitative way. The asymptotic behaviors are easily seen from Eqs. (54) and (55): when \(K\rightarrow\infty\), both \(\varphi^{\prime}(0)\) and \(\varphi^{\prime\prime}(0)\) tend toward \(b^{2}r_{-}\). At smaller \(K\) values, we use the Taylor expansion to analyze the variation:
\[\varphi^{\prime}(0) \approx \frac{\alpha}{3}r_{-}+\frac{z}{2+z}(b^{2}-\alpha)K, \tag{56}\] \[\varphi^{\prime\prime}(0) \approx \frac{(2-z)^{2}+2z}{(2+z)^{3}}r_{-}+\frac{z[(2-z)^{2}+8]}{(2+z)^{ 3}}\left(b^{2}-\beta\right)K, \tag{57}\]
Figure 3: (a) The solid, dashed, and dotted curves are the SCGF data solved by Cardano’s formula. The symbols are the simulated data given by the CTCA. The parameters are \(r_{-}=1\), \(\Omega=0.8\), and \(\omega_{0}=1\), and the rates \(K\) are 5, 1, and 0.1, accordingly. The reset state is fixed at the excited state \(|1\rangle\); i.e., \(b=1\). The number of clones is 2000, and the simulation time is 1500. (b) The solid and dashed curves are the first and second derivatives of the SCGF at \(\lambda=0\), respectively. The parameters \(b^{2}\) in the reset states are 1, 1/2, and 0 for the dark, blue, and red curves, respectively. Inset: The curves of \(\alpha\) and \(\beta\); see Eqs. (58) and (59). The arrow indicates the \(z\)-value of the parameters.
where the dimensionless \(z\) is equal to \(r_{-}^{2}/\Omega^{2}\) (\(<4\)) and
\[\alpha = \frac{3}{2+z}, \tag{58}\] \[\beta = \frac{28-34z+3z^{2}}{(2+z)[(2-z)^{2}+8]}. \tag{59}\]
The first terms on the right-hand side of Eqs. (56) and (57) are the mean and fluctuation of the counting current without resetting, respectively. The dependence of \(\alpha\) and \(\beta\) on \(z\) is plotted in the inset of Fig. (3)(b). For the parameters applied in the figure (see the bold arrow therein), \(0<\alpha<1\) and \(\beta<0\); hence the fluctuation curves have positive slope at \(K=0\), while the slopes of the mean current curves at \(K=0\) change from negative to positive as \(b^{2}\) increases. In addition, Eq. (56) implies nonmonotonic features of the mean current when \(b^{2}\) lies within the interval \((\alpha/3,\alpha)\), while Eq. (57) also implies that the fluctuation curve with \(b^{2}=0\) must have a maximum value.
### Memory resetting
To illustrate memory effects in the counting statistics with resetting, we choose the Erlang-2 distribution with rate parameter \(K\) as the WTD of the resetting process:
\[Q(\tau)=K^{2}\tau e^{-K\tau}. \tag{60}\]
Note that the mean rate of this distribution (the reciprocal of the mean waiting time) is equal to \(K/2\), and the variance is equal to \(2/K^{2}\). Substituting this distribution into Eq. (45), we have
\[K^{2}\frac{d}{d\zeta}\hat{M}_{0R}(\lambda,\zeta)+1=0, \tag{61}\]
where \(\zeta(v)=v+r_{-}/2+K\), and the reset-free MGF with the initial quantum state \(|R\rangle\) is given by Eq. (50). Although the equation is exact, a simplification shows that it is a sixth-order algebraic equation in \(v\), and a numerical scheme must be used to find the largest real root of \(v\) for a given \(\lambda\). We present the data for several \(K\) values in Fig. (4)(a) and compare them with the SCGF data simulated by the CTCA. We see that the agreement is also satisfactory.
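Equation (61) can be solved with elementary numerics: evaluate \(\hat M_{0R}\) from Eqs. (50)-(52), differentiate it numerically with respect to \(\zeta\), and scan for the largest real root in \(v\). The sketch below follows this route; the finite-difference step, the scan window, and the function names are our own choices.

```python
import numpy as np
from scipy.optimize import brentq

def M0R_hat(lam, xi, r=1.0, Omega=0.8, b2=1.0):
    """Reset-free MGF of Eq. (50), written as a function of xi = v + r/2 (Eqs. (51)-(52))."""
    mu2 = (4.0 * Omega**2 - r**2) / 16.0
    den = xi**2 + 4.0 * mu2
    S0 = (xi**2 + 0.5 * r * (xi + 0.5 * r) + 4.0 * mu2) / (xi * den)
    SR = S0 - b2 * r / den
    p00 = 0.5 * r * Omega**2 / (xi * den)
    pR0 = p00 + b2 * r * (xi - 0.5 * r) / den
    return SR + S0 * pR0 * np.exp(-lam) / (1.0 - np.exp(-lam) * p00)

def scgf_erlang2_reset(lam, K, r=1.0, Omega=0.8, b2=1.0, v_lo=-0.2, v_hi=3.0, steps=600):
    """Largest real root of Eq. (61): K^2 dM_{0R}/dzeta + 1 = 0, with zeta = v + r/2 + K."""
    h = 1e-6
    def f(v):
        z = v + 0.5 * r + K
        dM = (M0R_hat(lam, z + h, r, Omega, b2) - M0R_hat(lam, z - h, r, Omega, b2)) / (2 * h)
        return K**2 * dM + 1.0
    vs = np.linspace(v_hi, v_lo, steps)
    for v1, v2 in zip(vs[:-1], vs[1:]):
        if f(v1) * f(v2) < 0:
            return brentq(f, v2, v1)
    raise RuntimeError("no root of Eq. (61) found in the scanned window")
```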
Similar to the memoryless-resetting case, Eq. (61) provides useful information about memory effects on the mean and fluctuation of the counting current. Rather than writing very lengthy equations analogous to Eqs. (54) and (55), we directly present their numerical values in Fig. (4)(b) and the inset with special reset states. To compare the data of the memoryless-resetting case, we use the mean rate as the horizontal axis. We find that these mean curves and fluctuation curves with memory are similar to the previous ones without memory. At smaller and adequately large mean rates, they almost overlap. Their differences become apparent at intermediate \(K\) values: the mean current and fluctuation with memory are larger than those without memory if the reset state is the excited state, while the opposite conclusion is obtained if the reset state is the ground state; see the inset. This demonstrates the complexity of the interactions between memory and the reset state in quantum jump trajectories.
At adequately small and large \(K\) values, memory is marginal in the counting statistics. The latter point is easily seen, since the quantum state is almost "frozen" in the reset state when resetting is very frequent. For smaller \(K\), we apply the Taylor expansion again and find that the mean current has the same expression as Eq. (56) except that the parameter \(K\) therein is replaced by \(K/2\). Because \(K/2\) is simply the mean rate of the Erlang distribution, this explains why the mean currents with memoryless and memory resetting agree. Regarding the fluctuation, we obtain a slightly more complicated expression:
\[\varphi^{\prime\prime}(0) \approx \frac{(2-z)^{2}+2z}{(2+z)^{3}}r_{-}+ \tag{62}\] \[\frac{z[(2-z)^{2}+8+3z]}{(2+z)^{3}}\left[b^{2}-\frac{28-(59/2)z+3z^{2}}{(2+z)[(2-z)^{2}+8+3z]}-b^{4}\frac{z(z+2)}{2[(2-z)^{2}+8+3z]}\right]\frac{K}{2}. \tag{63}\]
The \(b^{4}\)-term clearly indicates memory effects on the fluctuations. Although Eqs. (57) and (63) are distinct, the calculations show that their data are close, especially for the parameters applied in the figure (given the same mean rate, the values with memory are slightly smaller than the values without memory).
## VII Conclusion
In this paper, we extend our previous sMP method for determining the counting statistics of open quantum systems to situations with memoryless and memory resetting. For the former situation, because the composition of the random events, which includes the collapses of the wave function and the quantum reset state, is still a sMP, the method can be directly applied by simply adding the reset state into the set of collapsed states. For the latter situation, the composite stochastic process is no longer a sMP. Even so, because resetting affects quantum processes only through the initial quantum states instead of altering the quantum dynamics, using probability formulas, we prove that the MGF of open quantum systems with memory resetting can be calculated by relating it to the MGF of reset-free open quantum systems. Although this conclusion agrees with the previous one, its validity has been expanded to general counting statistics. A tilted matrix equation that has not been previously discovered is proposed. Finally, to simulate the large-deviation statistics of general random time-extensive quantities, on the basis of a set of probability formulas that can characterize the composite stochastic process, we introduce the CTCA to open quantum systems.
Figure 4: (a) The solid, dashed, and dotted curves are the SCGF data obtained by solving Eq. (61) by a numerical method. The symbols are the data simulated by the CTCA. The parameters are \(r_{-}=1\), \(\Omega=0.8\), and \(\omega_{0}=1\), and the rates \(K\) are 5, 1, and 0.1, accordingly. Here, the reset state is set to the excited state. The number of clones is 2000, and the simulation time is 1500. Inset: A comparison between an exponential distribution with rate 5 (dashed curve) and an Erlang distribution with rate parameter \(K=10\) (solid curve). Note that their mean rates are the same, and the variance of the latter (2/100) is smaller than that of the former (1/25). (b) The squares and circles are the first and second derivatives of the SCGF at \(\lambda=0\), respectively. Here, the reset state is the excited state. For comparison, we also replot the black solid and dashed curves in Fig. (3)(b) for the case of memoryless resetting. Note that we show the mean rate on the horizontal axis. Inset: Analogous data where the reset state is the ground state.
To illustrate these theoretical results, we concretely calculate the large-deviation properties and the SCGFs of two-level quantum systems. On the one hand, we verify that the CTCA is quite accurate by comparing the simulation data to the exact analytical or numerical results. On the other hand, we also find that the effects of resetting on quantum systems can be very complex, even if resetting is memoryless. The plausible reason is that the waiting-time distributions of reset-free systems are not trivial at all; e.g., there are quantum antibunching effects. The presence of resetting, especially memory resetting, further increases this complexity. Hence, quantitative formulas are more trustworthy than qualitative arguments. For the relatively simple TLS and Erlang-2 distribution, the sMP method is applicable. It will be interesting to investigate the applications of the sMP method in complex quantum many-body systems, in both reset-free and reset cases.
_Acknowledgments_ We thank Dr. Cavallaro for his inspiring discussions of the continuous-time cloning algorithm of non-Markov jump processes. This work was supported by the National Natural Science Foundation of China under Grant Nos. 12075016 and 11575016.
## Appendix A Renewal equation of the density matrix
Perfetto et al. [32] derived a renewal equation relating the dynamics of the reduced density matrix \(\rho(t)\) in the presence of memory resetting to the reset-free matrix and called the evolution equation the generalized Lindblad quantum master equation. It is interesting to connect the renewal equation with the more fundamental probability formulas and wave function notion developed in this paper. First, the reduced-density matrix of the quantum system at time \(t\) can be expressed by the age structure [1]:
\[\rho(t)=\sum_{\alpha=1}^{M}\int_{0}^{t}d\tau\,p_{\alpha}(t,\tau)U(\tau)|\phi_{\alpha}\rangle\langle\phi_{\alpha}|U^{\dagger}(\tau)+\int_{0}^{t}d\tau\,p_{R}(t,\tau)U(\tau)|R\rangle\langle R|U^{\dagger}(\tau), \tag{A1}\]
where \(U(\tau)\) is the time-evolution operator of the nonlinear Schrödinger equation [2]. In Eq. (A1), \(p_{\alpha}(t,\tau)\) represents the probability density that the quantum system starts from the collapsed state \(\phi_{\alpha}\) at time \(t-\tau\) and continuously evolves until time \(t\). Thus, the age of the system is \(\tau\). The meaning of the probability density \(p_{R}(t,\tau)\) is similar except that the system starts from the reset state \(|R\rangle\). It is easier to understand their meanings by referring to Fig. (1).
Because of memory effects, the probability of the continuous evolution of a quantum state is affected by the time interval \(s\) from the last reset to the current moment [55]. Hence, it is useful to write \(p_{\alpha}(t,\tau)\) in a more detailed way:
\[p_{\alpha}(t,\tau)=\int_{\tau}^{t}p_{\alpha}(t,\tau,s)ds. \tag{A2}\]
Following the idea of Eq. (36), we may temporarily overlook all collapses and focus only on the resetting process. Let the probability density \(P(t,s)\) represent the event that, at time \(t\), the last reset occurred at time \(t-s\) and no reset has occurred since; that is, the time without resets is \(s\). Then, according to probability theory, we can rewrite
\[p_{\alpha}(t,\tau,s)=P(t,s)p_{R\alpha}^{0}(s,\tau), \tag{A3}\]
where \(p_{R\alpha}^{0}(s,\tau)\) is the conditional probability density that the quantum system continuously evolves until time \(s\) with age \(\tau\) (\(<s\)). Note that the subscript \(R\) on the right-hand side denotes that the reset state \(|R\rangle\) is the initial quantum state at \(s=0\). Analogously, the other probability density in Eq. (10) is rewritten as
\[p_{R}(t,\tau)\ =\ P(t,\tau)S_{R}^{0}(\tau). \tag{A4}\]
Substituting Eqs. (A3) and (A4) into Eq. (A1) and rearranging the integrals, we obtain
\[\rho(t)=\int_{0}^{t}P(t,s)\rho_{0}(s)\,ds, \tag{A5}\]
where
\[\rho_{0}(s)=\sum_{\alpha=1}^{M}\int_{0}^{s}d\tau\,p_{R\alpha}^{0}(s,\tau)U(\tau)|\phi_{\alpha}\rangle\langle\phi_{\alpha}|U^{\dagger}(\tau)+S_{R}^{0}(s)U(s)|R\rangle\langle R|U^{\dagger}(s). \tag{A6}\]
Equation (A5) is simply the renewal equation, and Eq. (A6) is the reduced-density-matrix solution to the MQME (1) with the special initial density matrix \(|R\rangle\langle R|\) [1], which is formally equal to
\[\rho_{0}(s)\equiv e^{s\mathcal{L}}\left[|R\rangle\langle R|\right]. \tag{A7}\]
In fact, if the probability density \(P(t,s)\) is defined from the beginning, the desired equation can be intuitively written out, as Perfetto et al. did previously [32]. Using the age structure of the resetting process, we can further derive the generalized Lindblad quantum master equation. Considering that this procedure is the same as the previous ones [20; 32], we do not show it in this paper.
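As a numerical illustration of the renewal structure (A5), one may specialize to memoryless resetting at rate \(K\): we then assume the age density \(P(t,s)=Ke^{-Ks}\) for \(s<t\) together with a weight \(e^{-Kt}\) at \(s=t\), so that \(\rho(t)=e^{-Kt}\rho_0(t)+K\int_0^t e^{-Ks}\rho_0(s)\,ds\). This specialization and the sketch below are our added illustration, not part of the derivation above; the integrator settings are arbitrary.

```python
import numpy as np

def lindblad_rhs(rho, H, A, r):
    # MQME (1) with a single Lindblad operator A at rate r
    comm = -1j * (H @ rho - rho @ H)
    AdA = A.conj().T @ A
    return comm + r * (A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA))

def propagate(rho0, H, A, r, t, steps=4000):
    """Reset-free solution rho_0(s) of Eq. (1) on a time grid (4th-order Runge-Kutta)."""
    dt = t / steps
    rho, out = rho0.copy(), [rho0.copy()]
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, A, r)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, A, r)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, A, r)
        k4 = lindblad_rhs(rho + dt * k3, H, A, r)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(rho.copy())
    return np.linspace(0.0, t, steps + 1), np.array(out)

# TLS in a vacuum, reset state |R> = a|0> + b|1>, memoryless resetting at rate K
Omega, r_minus, K = 0.8, 1.0, 1.0
a, b = 0.0, 1.0
sm = np.array([[0, 1], [0, 0]], dtype=complex)
H = -0.5 * Omega * (sm + sm.conj().T)
R = np.array([a, b], dtype=complex)
ts, rho0 = propagate(np.outer(R, R.conj()), H, sm, r_minus, t=10.0)

dt = ts[1] - ts[0]
weights = K * np.exp(-K * ts)                              # age density for s < t
rho_t = np.exp(-K * ts[-1]) * rho0[-1] + (weights[:, None, None] * rho0).sum(axis=0) * dt
print(np.trace(rho_t).real)                                # should stay ~1
```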
|
2302.00458 | Improved Exact and Heuristic Algorithms for Maximum Weight Clique | We propose improved exact and heuristic algorithms for solving the maximum
weight clique problem, a well-known problem in graph theory with many
applications. Our algorithms interleave successful techniques from related work
with novel data reduction rules that use local graph structure to identify and
remove vertices and edges while retaining the optimal solution. We evaluate our
algorithms on a range of synthetic and real-world graphs, and find that they
outperform the current state of the art on most inputs. Our data reductions
always produce smaller reduced graphs than existing data reductions alone. As a
result, our exact algorithm, MWCRedu, finds solutions orders of magnitude
faster on naturally weighted, medium-sized map labeling graphs and random
hyperbolic graphs. Our heuristic algorithm, MWCPeel, outperforms its
competitors on these instances, but is slightly less effective on extremely
dense or large instances. | Roman Erhardt, Kathrin Hanauer, Nils Kriege, Christian Schulz, Darren Strash | 2023-02-01T14:02:06Z | http://arxiv.org/abs/2302.00458v1 | # Improved Exact and Heuristic Algorithms for Maximum Weight Clique
###### Abstract
We propose improved exact and heuristic algorithms for solving the maximum weight clique problem, a well-known problem in graph theory with many applications. Our algorithms interleave successful techniques from related work with novel data reduction rules that use local graph structure to identify and remove vertices and edges while retaining the optimal solution. We evaluate our algorithms on a range of synthetic and real-world graphs, and find that they outperform the current state of the art on most inputs. Our data reductions always produce smaller reduced graphs than existing data reductions alone. As a result, our exact algorithm, MWCRedu, finds solutions orders of magnitude faster on naturally weighted, medium-sized map labeling graphs and random hyperbolic graphs. Our heuristic algorithm, MWCPeel, outperforms its competitors on these instances, but is slightly less effective on extremely dense or large instances.
## 1 Introduction
Finding cliques in graphs is a classic problem in graph theory with many applications. In social networks, group behavior can be predicted with the help of cliques [47]. In biochemistry, cliques can be used to study the interaction between molecules, which can inform drug discovery [33]. Vertex-weighted graphs, and the analogous _maximum weight clique problem_ (MWC), can be used in an even wider variety of applications including video object co-segmentation [51], coding theory [52], combinatorial auctions [49], and genomics [4].
Solving the maximum (unweighted) clique problem has been the subject of extensive research [9, 31, 41, 42, 48, 53], with the most effective solvers combining branch-and-bound with MaxSAT reasoning for pruning [30, 38]. However, state-of-the-art algorithms still struggle to find solutions for certain instances in a reasonable time limit. Indeed, there are still unsolved instances, and recently closed instances have required over a year of computation [50]. Recent work has focused on solving weighted variants of **NP**-hard graph problems [7, 29, 45], which are more difficult in practice.
One powerful technique for tackling **NP**-hard graph problems is to use _data reduction rules_, which remove or contract local graph structures, to reduce the input instance to an equivalent, smaller instance. Originally developed as a tool for parameterized algorithms [13], data reduction rules have been effective in practice for computing an (unweighted) maximum independent set [11, 28, 39] / minimum vertex cover [2], maximum clique [10, 43], and maximum \(k\)-plex [12, 25], as well as solving graph coloring [32, 43] and clique cover problems [19, 40], among others [1]. However, recent work has only scratched the surface for _weighted_ problems. Lamm et al. [29], Gellner et al. [17], and Gu et al. [20] recently introduce an extensive collection of effective data reductions for maximum weight independent set problem (MWIS), and Wang et al. [45] perform data reduction for weighted graph coloring.
However, to our knowledge, the only data reduction rules for MWC remove vertices simply based on the weight of a neighborhood or the largest weight of a neighbor [7]. Thus, there is untapped potential for reducing input instances further, making them more amenable to exact solving. One strategy is to apply MWIS reductions to the _complement_ of the input; however, MWIS reductions are most effective on large, sparse instances and the complements of the graphs considered here are dense and unlikely to fit in memory.
**Our Results.** We develop a suite of novel exact and heuristic data reduction rules for MWC, with the goal of reducing the number of vertices and edges in the input graph while maintaining solution quality. To the best of our knowledge our data reduction rules are the first to exploit local graph structures for the MWC problem. We also present data reduction rules that are solely aimed at removing _edges_ in a graph, which to the best of our knowledge has not been done before for similar problems. After reducing the graph, we apply either heuristic or exact algorithms on the remaining instance to obtain a solution to the original input. We extend the recent reduce-and-peel framework
introduced for the MIS and MWIS problems, engineering methods for how and when to apply the reductions and switch to the exact solver. Our experiments show that our algorithms outperform the state of the art.
## 2 Preliminaries
### Basic Concepts.
We consider a simple, weighted, undirected graph \(G=(V,E,w)\) with \(n=|V|\) and \(m=|E|\), where \(V=\{1,\ldots,n\}\) is the set of vertices, \(E\subseteq\{\{u,v\}\mid u,v\in V\}\) is the set of dyadic edges, and \(w\colon V\to\mathbb{R}_{>0}\) is a function that assigns a positive real-valued weight to each vertex. We extend \(w\) to sets, such that for \(V^{\prime}\subseteq V\), \(w(V^{\prime})=\sum_{v\in V^{\prime}}w(v)\). The _maximum weight_ of \(V^{\prime}\) is denoted by \(w^{*}(V^{\prime})=\max_{v\in V^{\prime}}w(v)\). Two vertices \(u\) and \(v\) are _adjacent_ (also _neighbors_) if \(\{u,v\}\in E\). The _(open) neighborhood_\(N(v)\) of a vertex \(v\in V\) is defined as \(N(v)=\{u\in V\mid\{u,v\}\in E\}\), and its _closed neighborhood_ is \(N[v]=N(v)\cup\{v\}\). Both definitions extend straightforwardly to the neighborhood \(N(V^{\prime})\) of a set of vertices \(V^{\prime}\subset V\), i.e., \(N(V^{\prime})=\cup_{v\in V^{\prime}}N(v)\setminus V^{\prime}\) and \(N[V^{\prime}]=N(V^{\prime})\cup V^{\prime}\). The _degree_ of a vertex \(\deg(v)\) is the number of its neighbors \(\deg(v)=|N(v)|\), and \(\Delta:=\Delta(G)\) denotes the maximum degree \(\max_{v\in V}\deg(v)\). The _complement_ of \(G\) is defined as \(\overline{G}=(V,\overline{E})\), where \(\overline{E}=\{\{u,v\}\mid u,v\in V\wedge u\neq v\wedge\{u,v\}\notin E\}\) is the set of edges not present in \(G\). The _density_\(\rho:=\rho(G)\) of \(G\) is the ratio of the number of edges present to those that could exist, \(\rho(G)=\frac{2m}{n(n-1)}\). The subgraph _induced_ by the subset \(V^{\prime}\subseteq V\) is denoted by \(G[V^{\prime}]=(V^{\prime},E^{\prime})\), where \(E^{\prime}=\{\{v_{i},v_{j}\}\in E\mid v_{i},v_{j}\in V^{\prime}\}\). A set \(V^{\prime}\subseteq V\) is called _independent_ if for all pairs of vertices \(u,v\in V^{\prime}\), \(\{u,v\}\not\in E\).
A _clique_ is a set \(Q\subseteq V\) where all vertices are pairwise adjacent. A clique in the complement graph \(\overline{G}\) corresponds to an _independent set_ in the original graph \(G\) and vice-versa. The _maximum weight clique problem_ (MWC) consists in finding a clique of maximum weight. If \(w\equiv 1\), we obtain the _maximum cardinality clique problem_ (MCC) (more succinctly referred to as the maximum clique problem). The _maximum independent set problem_ (MIS) is that of finding an independent set of maximum cardinality, whereas the _maximum weight independent set problem_ (MWIS) asks for an independent set of maximum total weight. The complement of an independent set is a _vertex cover_, i.e. a subset \(C\subseteq V\) such that every edge \(e\in E\) is incident to at least one vertex in \(C\). The _minimum vertex cover problem_, which asks for a vertex cover with minimum cardinality, is thus complementary to the maximum independent set problem. The maximum clique problem is also dual to the maximum independent set problem and the minimum vertex cover problem via the complement graph \(\overline{G}\). By extension, the weighted versions of independent set and clique are also dual to each other.
The _vertex coloring problem_ asks to assign a color label \(c\in\mathbb{Z}\) to each vertex such that no two adjacent vertices have the same label and the number of different colors is minimal. All vertices in a clique must receive different colors. Thus, if a graph has a vertex coloring with \(k\) colors, any clique can have cardinality at most \(k\). All these problems are **NP**-hard.
### Related Work.
This paper is a summary and extension of the master thesis [14]. A lot of research has been done on both the MCC and the MWC problem. As our focus in this work is on the weighted version, we only mention results for MWC and largely omit solvers and results for the cardinality version unless they were extended to the weighted case. A detailed review of approaches for MCC can be found in Wu and Hao [48] as well as in Abu-Khzam et al. [1] in the context of data reductions.
#### 2.2.1 Exact Solvers.
Most exact solvers for the MCC use a B&B framework [9], which maintains a current clique \(C\) and a candidate set \(P=N(C)\) of vertices for extending \(C\). Fast solvers prune the search space by quickly computing a tight upper bound on the clique size that can be found by including vertices from \(P\) into \(C\). One successful technique to do so is to compute a greedy heuristic vertex coloring on \(G[P]\) and use the number of colors as an upper bound. This approach was subsequently extended to MWC by Kumlander [27] as follows: Given a valid vertex coloring of \(G[P]\) that uses \(k\) colors and partitions \(V\) into color classes \(\mathcal{D}=D_{1}\sqcup D_{2}\sqcup\cdots\sqcup D_{k}\), an upper bound can be computed as \(ub(\mathcal{D})=\sum_{j=1}^{k}w^{*}(D_{j})\), assuming each color class contributes a vertex of maximum weight.
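To make the weighted coloring bound described above concrete, the following sketch computes \(ub(\mathcal{D})\) for a candidate set \(P\) by greedy sequential coloring. It is a simplified illustration rather than the code of any of the cited solvers, and the heaviest-first ordering is merely one possible heuristic.

```python
def weighted_coloring_bound(P, adj, w):
    """Upper bound ub(D) = sum of the maximum weight per color class of a greedy
    coloring of G[P].  adj[v] is the set of neighbors of v, w[v] its weight."""
    color_classes = []                                  # each class is an independent set in G[P]
    for v in sorted(P, key=lambda x: -w[x]):            # heavier vertices first (heuristic)
        for cls in color_classes:
            if all(u not in adj[v] for u in cls):       # v is non-adjacent to the whole class
                cls.append(v)
                break
        else:
            color_classes.append([v])
    return sum(max(w[u] for u in cls) for cls in color_classes)

# toy example: a path 1-2-3 with weights
adj = {1: {2}, 2: {1, 3}, 3: {2}}
w = {1: 5, 2: 4, 3: 2}
print(weighted_coloring_bound({1, 2, 3}, adj, w))       # 2 classes: {1,3} and {2} -> 5 + 4 = 9
```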
Fang et al. [15] were the first to implement the idea of MaxSAT reasoning introduced by the MCC solver MaxCLQ [31] for MWC. Jiang et al. [24] also rely on MaxSAT reasoning and contributed an efficient preprocessing step that computes an initial clique \(\hat{C}\) as well as a vertex branching ordering. It furthermore computes a simple upper bound on the maximum weight clique that each vertex \(v\) can be part of as \(w(N[v])\) and removes \(v\) if \(w(N[v])\leq w(\hat{C})\). TSM-MWC [23] refines the approach further with a two-stage MaxSAT reasoning approach that applies less expensive MaxSAT techniques to reduce the number of branching vertices before exhaustively looking for disjoint conflicting soft clauses. TSM-MWC currently achieves the best results for a wide spectrum of graph instances, most notably large sparse real-world graph instances, and is the current state-of-the-art exact solver for maximum weight clique.
#### 2.2.2 Heuristic Solvers.
The general scheme of a local search algorithm for MCC is as follows: A clique \(C\) is constructed by starting with a single vertex and repeatedly adding vertices that are adjacent to all vertices in \(C\) using some evaluation function. Again, candidate vertices are those vertices that could potentially be added to \(C\). Once no more add operations can be performed, some vertices can be removed in an attempt to construct a larger clique.
Gendreau et al. [18] proposed two algorithms for MCC based on this strategy: One is a deterministic scheme that adds the vertex with the highest degree first and, when no further vertex can be added, removes the vertex whose removal results in the largest set of candidate vertices. The second algorithm randomly selects which vertex to add to the current solution. Pullan [35] proposed to include a swap operator in the main search procedure. This operator looks for a vertex that is connected to all but one vertex of the current candidate clique \(C\). Furthermore, the algorithm perturbs the current candidate clique by adding a random vertex and removing all non-adjacent vertices from the clique.
This algorithm has been extended to MWC by Pullan [36] by adding a vertex which is randomly chosen only among the vertices of highest weight. Wang et al. [46] added a prohibition rule based on configuration checking. Cai [5] further improved this algorithm by using a better strategy to decide which vertex from the candidate set to add next. This strategy works by randomly sampling \(k\) different candidate vertices and choosing the best vertex with respect to some benefit estimation function. Cai and Lin [7] combined the algorithm with data reduction rules in their solver FastWCLq. The reductions they use compute upper bounds for each vertex and remove a vertex if one of the computed upper bounds is less than the weight of the current best clique. Every time an improved solution is found by local search, the reductions are reapplied, which in turn improves the chance of local search finding the optimal solution.
SCCWalk4l[44] adopts the previously seen configuration checking strategies as well as data reductions. The authors furthermore introduce a technique called walk perturbation, which adds a random vertex to the solution when the search stagnates and removes all vertices from the candidate set that become invalid by this perturbation. Cai et al. [8] improved FastWCLq further to also apply a reduction-and-hill-climbing method based on vertex coloring.
SCCWalk4l and FastWCLq are the current state-of-the-art for heuristic MWC solvers, with the former being especially dominant in small dense networks, such as graphs from the DIMACS and BHOSLIB challenge [44], and the latter showing the best results in large sparse real-world networks [8].
## 3 Data Reductions
So far, only a few reductions are known that can be used for the MWC. However, especially for large instances, applying exact data reductions is a very important technique to decrease the problem size. In general, reductions allow the classification of vertices as either (1) part of a solution, (2) non-solution vertices, or (3) deferred, i.e. the decision for this vertex depends on additional information about neighboring vertices that will be obtained later. We denote by \(\mathcal{K}\) the resulting _reduced graph_, on which no reduction rule applies anymore. In the following, we review existing reductions and introduce a large set of new ones for the MWC.
### Neighborhood Weight Reduction.
A simple but effective reduction often seen in literature [7, 8, 23, 24, 44] is based on the upper bound \(w(N[v])\) for any clique containing \(v\in V\).
Reduction Rule 1.: ([7]) _Let \(\hat{C}\) be the highest-weight clique found so far and let \(v\in V\) s.t. \(w(N[v])\leq w(\hat{C})\). Then \(v\) can be removed from the graph without reducing the maximum solution weight._
The rule can be applied on a vertex \(v\in V\) in \(\mathcal{O}(1)\) time, given that the neighborhood weight is stored and maintained throughout the reductions.
### Largest-Weight Neighbor Reduction.
Cai et al. [7] tighten the neighborhood weight reduction rule by either including or excluding the highest weight vertex \(u^{*}\) in the neighborhood.
Reduction Rule 2.: ([7]) _Let \(\hat{C}\) be the highest-weight clique found so far, let \(v\in V\setminus\hat{C}\), and let \(u^{*}=\arg\max_{u\in N(v)}w(u)\). If \(\max\{w(N[v])-w(u^{*}),w(N[v]\cap N[u^{*}])\}\leq w(\hat{C})\), then \(v\) can be removed from the graph without reducing the maximum solution weight._
For applying the rule on a vertex \(v\in V\), first its highest weight neighbor \(u^{*}\) is identified in \(\mathcal{O}(\deg(v))\) and then the intersection of their neighborhoods is computed in \(\mathcal{O}(\min\{\deg(v),\deg(u^{*})\})\), resulting in overall \(\mathcal{O}(\deg(v))\) time. Computing the intersection of neighborhoods is a crucial operation for the application of this reduction rule as well as several others described in the following. The running time for computing \(N(u)\cap N(v)\) depends on the graph representation. Assuming constant time for checking whether two vertices are adjacent, we can iterate over the smaller set and identify those that are also adjacent to the other vertex in \(\mathcal{O}(\min\{\deg(u),\deg(v)\})\) time. For the application to
large sparse graphs we use an adjacency list and realize the operation using indicators by iterating over the neighbors of both vertices in \(\mathcal{O}(\deg(u)+\deg(v))\) time.
### Twin Reduction.
We now introduce our first new data reduction rule, based on twins. Consider two adjacent vertices \(u\) and \(v\) that share the same closed neighborhood. Such vertices are called _twins_. If either one of them is in the solution, then the other one must also be in it. Figure 1 gives an illustration.
Reduction Rule 3.: _Let \(u,v\in V\), \(u\neq v\), and \(N[u]=N[v]\). Then \(u\) and \(v\) can be contracted to a new vertex \(\{u,v\}\) with weight \(w(\{u,v\})=w(u)+w(v)\) and \(N(\{u,v\})=N(u)\cap N(v)\) without reducing the maximum solution weight._
Proof.: Suppose there is an optimal solution \(C^{*}\) that, w. l. o. g., contains \(u\), but not \(v\). Then it is always possible to add \(v\) to the solution, as it is connected to all neighbors of \(u\), resulting in a solution of larger weight. Hence, each optimal solution contains either both \(u\) and \(v\) or neither.
To check the precondition for two vertices \(u,v\in V\) where \(\deg(u)=\deg(v)\), the intersection of their neighborhoods can be obtained in time \(\mathcal{O}(\deg(v))\) using a marking scheme.
### Domination Reduction.
Vertex \(u\in V\) is said to dominate \(v\in V\) when \(N(v)\subseteq N(u)\). Furthermore, if \(w(v)\leq w(u)\), then a maximal clique containing \(u\) would have a weight greater or equal to one that contains \(v\). This observation leads to the following reduction rule:
Reduction Rule 4.: _Let \(u,v\in V\), \(\{u,v\}\not\in E\), \(N(v)\subseteq N(u)\), and \(w(v)\leq w(u)\). Then, \(v\) can be removed from the graph without reducing the maximum solution weight._
Proof.: Suppose there is an optimal solution \(C^{*}\) that, w. l. o. g., contains \(v\), but not \(u\). As \(u\) is adjacent to all neighbors of \(v\), it is always possible to substitute \(v\) with \(u\) in the solution, resulting in a solution with at least the same weight since \(w(v)\leq w(u)\). As \(\{u,v\}\not\in E\), no clique can contain both \(u\) and \(v\). Hence, there is at least one optimal solution that does not contain \(v\).
Given \(v\), we find vertices \(u\) with \(N(u)\supseteq N(v)\) as follows: We choose \(x\in N(v)\) arbitrarily and iterate over all \(u^{\prime}\in N(x)\). If \(\{u^{\prime},v\}\not\in E\), \(\deg(u^{\prime})\geq\deg(v)\), and \(w(u^{\prime})\geq w(v)\), we test whether \(N(v)\subseteq N(u^{\prime})\) in \(\mathcal{O}(\deg(v))\) time. The approach identifies all vertices \(u^{\prime}\) for a given vertex \(v\) satisfying the conditions of Reduction Rule 4 in \(\mathcal{O}(\deg(v)\cdot\Delta)\) time.
We now introduce our first reduction that is designed to remove _edges_ from the graph. A similar reduction is applicable if \(u\) and \(v\) are adjacent. However, simply removing \(v\) is not possible, as \(v\) may be part of a clique containing \(u\). Therefore, we add the weight of \(u\) to \(v\) and then remove the edge \(\{u,v\}\), thus preserving the best solution achievable by \(v\) and \(u\) being in the same clique while reducing the graph at the same time.
Reduction Rule 5.: _Let \(u,v\in V\), \(\{v,u\}\in E\), and \(N(v)\subseteq N[u]\). Then, increasing \(w(v)\) to \(w^{\prime}(v)=w(v)+w(u)\) and removing the edge \(\{u,v\}\) from the graph does not reduce the maximum solution weight._
Proof.: Let \(C^{*}\) be an optimal solution in the _original_ graph. Assume that \(C^{*}\) contains \(v\), but not \(u\). Then \(u\) can be added to \(C^{*}\) leading to a higher weight, contradicting the assumption that \(C^{*}\) is optimal. Hence, if \(C^{*}\) contains \(v\), it also contains \(u\). There are two cases left to consider:
**Case 1:** If \(C^{*}\) contains both \(u\) and \(v\), then \(w(C^{*})\leq w(u)+w(v)+w(N(v)\setminus\{u\})=w^{\prime}(v)+w(N(v)\setminus\{u\})\), so there exists an equivalent solution only containing \(v\) in the reduced graph.
**Case 2:** If \(C^{*}\) contains \(u\) but not \(v\), then \(w(C^{*})\leq w(u)+w(N(u))\), and the same solution exists in the reduced graph.
The reduction can be implemented analogously to the twin reduction (Reduction Rule 3).
### Edge Bounding Reduction.
This rule is a natural extension to Reduction Rule 2, using the computed bounds not only to decide whether a vertex can be removed, but also the edge that connects it with its highest-weight neighbor. Given a vertex \(v\in V\) and its highest-weight neighbor \(u^{*}\in N(v)\), let \(ub_{inc}(v,u^{*})\) denote the _including upper bound_\(w(v)+w(u^{*})+w(N(v)\cap N(u^{*}))\) and let \(ub_{exc}\) be the _excluding upper bound_\(w(N[v])-w(u^{*})\). Reduction Rule 2 states that \(v\) can be removed if both \(ub_{inc}(v,u^{*})\leq w(\hat{C})\) and \(ub_{exc}(v,u^{*})\leq w(\hat{C})\), where \(\hat{C}\) is the currently best solution. The extension provided by the edge bounding reduction is based on the observation that if \(ub_{exc}(v,u^{*})>w(\hat{C})\), but \(ub_{inc}(v,u^{*})\leq w(\hat{C})\), it is possible to remove the edge \(\{v,u^{*}\}\). We extend this rule to apply to all neighbors of \(v\):
Reduction Rule 6.: _Let \(v\in V\), \(u\in N(v)\), and let \(\hat{C}\) be the best clique found so far. If \(ub_{inc}(v,u)<w(\hat{C})\), the edge \(\{v,u\}\) can be removed from the graph without reducing the maximum solution weight._
Proof.: The value \(ub_{inc}(v,u)\) is an upper bound on the weight of any clique containing both \(v\) and \(u\). If a clique
\(\hat{C}\) with weight \(w(\hat{C})>ub_{inc}(v,u)\) is known, then there is at least one optimal solution \(C^{*}\) that does not contain both \(v\) and \(u\). The edge \(\{v,u\}\) is thus irrelevant in the search for a solution of higher weight.
Given an edge \(\{v,u\}\), the time complexity is \(\mathcal{O}(\min\{\deg(v),\deg(u)\})\), as with Reduction Rule 2.
### Simplicial Vertex Removal Reduction.
A vertex \(v\) is called _simplicial_ if its closed neighborhood forms a clique \(C_{v}\), i.e. \(\forall x_{1},x_{2}\in N[v]\), \(\{x_{1},x_{2}\}\in E\). Simplicial vertices may be removed before applying a maximum weight clique solver as well: Once a simplicial vertex \(v\) has been identified, the largest clique it can be part of is \(C_{v}\) with \(w(C_{v})=w(N[v])\). If this weight is larger than that of the currently known highest-weight clique, the lower bound is updated.
Reduction Rule 7.: _Let \(v\in V\) be a simplicial vertex and let \(\hat{C}\) be the best clique found so far. If \(w(N[v])>w(\hat{C})\), set \(\hat{C}=N[v]\). In any case, removing \(v\) from the graph afterwards does not reduce the maximum solution weight._
If \(w(N[v])\leq w(\hat{C})\), \(v\) cannot be part of a strictly better solution. Otherwise, if \(w(N[v])>w(\hat{C})\), the same holds after the currently best solution has been updated to \(\hat{C}=N[v]\).
Testing the adjacency of each pair of vertices in \(N[v]\) takes \(\mathcal{O}\big{(}\deg(v)^{2}\big{)}\) in the worst case.
Observe that, in contrast to the other reductions, the simplicial vertex reduction may directly improve the currently best solution \(\hat{C}\).
### Applying the Reductions
For applying the exact reduction rules proposed in this section, an adapted version of the strategy from Hespe et al. [22] that entails both dependency checking and reduction tracking is used.
Figure 1: Twin reduction (left) and simplicial vertex removal reduction (right) for MWC.
Figure 2: Domination reduction for MWC by applying Reduction Rule 4 (left) and Reduction Rule 5 (right).
Specifically, the set of reductions \(\{r_{i}\}\) is iterated, where each rule \(r_{i}\) is tried on its set of viable vertices \(D_{i}\), which is initially set to \(D_{i}=V\). After preliminary experiments, we settled on the following order of reductions: neighborhood weight, twin, simplicial vertex, edge bounding (which includes largest-weight neighbor), domination case 1, and domination case 2. Every time a rule \(r_{i}\) fails to _reduce_ a vertex, i.e. to remove it from the graph, this vertex is removed from the set of viable candidates \(D_{i}\). Otherwise, the set of each rule \(r_{j}\) is updated to \(D_{j}=D_{j}\cup N(v)\) and the applicable vertices or edges are removed from the graph. This minimizes redundant computations without affecting the final size of the reduced graph [22].
_Reduction tracking_ aims at tracking the effectiveness of reductions. Slightly different from the original strategy, reduction tracking is implemented by pausing a reduction once it fails to achieve a reduction rate of at least \(1\,\%\) of the current number of vertices or edges per second, until other reductions have reduced the graph by that amount. Reduction tracking is checked both in between the application of different reduction rules as well as periodically during the iteration over candidate vertices, in order to prevent a single reduction from delaying the solver and to allow either more efficient reductions or the exact solver to take over. Another addition to the strategy by Hespe et al. is to set a dynamic limitation on the degree of vertices that are tried in the reductions. The limit is set to \(10\,\%\) of the highest degree initially and is increased by \(10\,\%\) whenever the reductions have been exhaustively applied in the previous level. This guarantees that reductions applicable to low-degree vertices, which are typically more efficient, are applied first. The loop terminates once the degree is no longer limited and all reductions are paused, at which point we run either an exact or heuristic solver on the reduced graph.
## 4 MWCRedu: A New Exact Algorithm
Our exact algorithm MWCRedu works in two stages: First, the set of exact reduction rules from Section 3 is used to reduce the graph. Second, the reduced graph is passed to an exact B&B solver to compute the final solution.
### Computing a Lower Bound.
Reduction Rules 1, 2 and 6 depend on the currently best solution \(\hat{C}\) to be applicable. For computing bounds, fast heuristics are generally preferred, since spending more time on improving the initial solution typically gives diminishing returns. A well-suited heuristic for computing an initial lower bound is the one employed in Jiang et al. [24]: Repeatedly remove the vertex with the smallest vertex degree from the graph until all remaining vertices are pairwise adjacent and form the initial clique \(\hat{C}\), which yields an initial lower bound of \(w(\hat{C})\).
Afterwards, \(\hat{C}\) is continuously improved by the simplicial vertex reduction (Reduction Rule 7) and the local search algorithm from FastWCLq[8], the latter being applied on the reduced graph in between checking each reduction rule. Subsequently, \(\hat{C}\) provides the lower bound in the Reduction Rules 1, 2 and 6, and it also serves as the initial solution for the solver that is applied on the reduced graph. Algorithm 1 gives an outline.
### Branch and Bound.
The reduced graph is solved using the branch and bound paradigm. As the procedure has exponential time complexity, it is important to choose a good ordering and to reduce the set of branching vertices by computing tight upper bounds. We use the same ordering as Jiang et al. [24], i.e. the ordering of the vertices is given as \(v_{1}<v_{2}<...<v_{n}\), where \(v_{1}\) has the smallest vertex degree, \(v_{2}\) has the smallest vertex degree after \(v_{1}\) is removed, etc. Such an ordering is called a _degeneracy ordering_ of the graph.
To compute tight upper bounds and reduce the set of branching vertices, we apply efficient MIS- and MaxSAT-based approaches from [24, 23] throughout the search. Recall from Section 2.2 that for any vertex coloring that partitions \(V\) into color classes \(\mathcal{D}=D_{1}\sqcup D_{2}\sqcup\cdots\sqcup D_{k}\), each color class forms an independent set and \(ub(\mathcal{D})=\sum_{j=1}^{k}w^{*}(D_{j})\) is an upper bound on the maximum clique weight. The set of branching vertices is then further reduced via the two-stage MaxSAT reasoning approach from TSM-MWC [23].
In the first stage, which the authors refer to as binary MaxSAT reasoning, the set of branching vertices is reduced by inserting as many vertices as possible into the independent sets s.t. \(\sum_{j=1}^{k^{\prime}}w^{*}(D_{j})\leq w(\hat{C})\). As these vertices cannot form a clique with a weight larger than \(w(\hat{C})\) by themselves, they can be removed from the set of branching vertices. If a vertex \(v_{i}\in V\) has neighbors in all existing independent sets but \(ub+w(v_{i})\leq w(\hat{C})\) holds, it is inserted as a new independent set. Otherwise, we try to split its weight among independent sets that do not contain any of its neighbors by adding \(v_{i}\) with weight \(w^{*}(S_{j})\) into the independent set \(S_{j}\) and updating the weight to \(w(v_{i})=w(v_{i})-w^{*}(S_{j})\) for \(j=1,2,...,k^{\prime}\), until its remaining weight is given as \(\delta=w(v_{i})-\sum_{j=1}^{k^{\prime}}w^{*}(S_{j})\). If \(\delta>0\) and \(ub+\delta\leq w(\hat{C})\), \(v_{i}\) is inserted as a new independent set with weight \(\delta\), otherwise the weight splitting procedure is undone and \(v_{i}\) is kept in the set of branching vertices.
In the second stage, called ordered MaxSAT reasoning, the set of branching vertices is reduced further by detecting disjoint conflicting subsets of independent sets. Firstly, the weight of a branching vertex \(v_{i}\) is again split among the independent sets \(\{S_{1},S_{2},...,S_{k^{\prime}}\}\) that do not contain any of its neighbors, resulting in the
remaining weight \(w(v_{i})=\delta>0\), since the vertex was not removed from the set of branching vertices in the first stage. After that, the algorithm tries to find a set of independent sets \(\{U_{1},U_{2},...,U_{r}\}\) that each contain exactly one neighbor \(u\) of \(v_{i}\). It then looks for an independent set \(D_{q}\) s.t. \(D_{q}\cap N(v_{i})\cap N(u)=\emptyset\) for any \(U_{j}\), proving that the sets \(\{\{v_{i}\},U_{j},D_{q}\}\) are conflicting. In this case, \(\mathit{ub}\) can be further improved to \(\mathit{ub}+\delta-\beta\), where \(\beta=\min(\delta,w^{*}(U_{j}),w^{*}(D_{q}))\) [23].
Finally, if after considering all \(U_{j}\in\{U_{1},U_{2},...,U_{r}\}\)\(\mathit{ub}\) is still higher than the lower bound, \(\mathit{ub}\) is reduced by identifying conflicting subsets via unit propagation as first implemented for maximum weight clique [15]. Unit propagation works from the idea that clauses with more literals are more likely to be satisfied and are thus considered _weaker_ clauses. A unit clause is thus the strongest clause since it only has one possibility of evaluating to true. The algorithm repeatedly satisfies such a clause, removing all occurrences of the contained literal from the other clauses. If an empty clause remains, the set of clauses is identified as conflicting. Each time a set of conflicting clauses \(\{S_{0},S_{1},...,S_{r}\}\) is identified, the upper bound can be reduced by \(\delta=\min\{w^{*}(S_{1}),\ldots,w^{*}(S_{r})\}\). To tighten the bound further, each \(S_{j}\) (\(0\leq j\leq r\)) is split into \(S^{\prime}_{j}\) and \(S^{\prime\prime}_{j}\) so that \(w^{*}(S^{\prime}_{j})=\delta\) and \(w^{*}(S^{\prime\prime}_{j})=w^{*}(S_{j})-\delta\). \(S^{\prime}_{j}\) then represents the conflicting subset found so far, whereas further conflicts can be deduced from \(S^{\prime\prime}_{j}\)[15].
The procedure is run at every branch of the solver in order to reduce the amount of work to be done. The algorithm terminates when all branches are either explored or pruned or when the time limit is reached, in which case the best solution found is reported.
## 5 MWCPeel: A New Heuristic Algorithm
For our new heuristic algorithm MWCPeel, we investigate vertex peeling techniques, which remove vertices from the graph that are assigned the lowest scores by some heuristic rule. This rule must therefore capture the likelihood of a vertex belonging to the solution as well as possible. Using the vertex degree is an obvious choice for MCC, since a vertex with a high degree is more likely to form a large clique. Furthermore, a vertex \(v\) cannot be part of a clique with more than \(\deg(v)+1\) vertices. For the measure to remain an upper bound in the context of MWC, the weight of the neighborhood of each vertex is taken into account. The resulting simple and intuitive scoring measure \(w(N[v])\) is used in our peeling step.
Overall, our heuristic solver works similarly to the exact approach MWCRedu described in Section 4, but implements the peeling reduction on top of the previously introduced exact reductions: We first run exact reductions exhaustively. On the reduced graph, we apply our peeling strategy that removes vertices that are unlikely to be part of a large clique. We repeat the process until the remaining graph is small or the scores of the peeling reductions are not sufficiently large, and then apply the exact algorithm on the remaining graph. Algorithm 2 gives an overview.
```
MWCPeel(\(G=(V,E,w)\))
    compute initial clique \(\hat{C}\)        \(\triangleright\) Section 4.1
    repeat        \(\triangleright\) Algorithm 1
        \(G,\hat{C}\leftarrow\textsc{Reduce}(G,\hat{C},\textsc{isFirstIteration})\)
        \(\mathcal{N}\leftarrow\) \(\#\)vertices to peel off        \(\triangleright\) Section 5.1
        remove \(\mathcal{N}\) vertices \(v\) with lowest score \(w(N[v])\)
    until stopping criteria met        \(\triangleright\) Section 5.2
    return TSM-MWC(\(G,\hat{C}\))
```
**Algorithm 2** Heuristic Solver MWCPeel
### Peeling Strategy.
Chang et al. [11] introduced a reduce-and-peel heuristic technique to repeatedly remove the minimum degree vertex from a graph, adding it to a growing independent set. For MWC, a straightforward approach is to remove the vertices with the lowest score and exclude them from the solution. More precisely, we remove a fixed percentage of the currently remaining vertices in each peeling step. The number of vertices to be peeled off in one step, \(\mathcal{N}\), is dynamically determined as follows:
\[\mathcal{N}=\begin{cases}0.1n&\text{if }n>50{,}000,\\ \max\{0.01n,\frac{0.01}{50{,}000}n\}&\text{otherwise},\end{cases}\]
where \(n\) always refers to the current number of vertices and the threshold of 50,000 has proven itself suitable in preliminary experiments. Without the differentiation between larger and smaller graphs, the exact reductions would often be reapplied on many vertices, which would significantly slow down the solver. Furthermore, as the vertex degrees often follow a power-law distribution in real-world graph instances [21], the size of the optimal solution makes up a smaller portion of the graph for large graphs. After each peeling step, the viable candidate sets are updated and exact reductions are applied again.
### Stopping Criteria.
Another important decision is when to stop applying the peeling reduction; stopping too early could result in a much higher running time for the solver applied on the reduced graph, whereas stopping late might negatively impact the solution quality. Since the optimal amount of vertices to reduce is highly dependent on the graph structure, a static stopping criterion is unlikely to be a good strategy. For this reason, we employ a dynamic strategy that
works by comparing the current computed score with previously computed scores.
The first stopping criterion is triggered when the maximum score value deteriorates below a certain threshold relative to the overall maximum score value. This indicates that the peeling reduction is beginning to cut into the maximum solution.
A second stopping criterion takes effect if the difference between the minimum and maximum score shrinks below a certain threshold. This shows that the scoring model can no longer clearly distinguish high quality vertices from low quality vertices.
We set both thresholds to \(90\,\%\) to achieve a good balance between speed-up and solution quality. As a fail-safe, a backup of the current graph state is created before applying the heuristic reduction, which can be reloaded in case the graph is reduced to zero vertices. After the reduction procedure, the branch-and-bound solver is applied on the reduced graph to obtain the final result.
## 6 Experimental Evaluation
We implemented our new solvers MWCRedu and MWCPeel and evaluate them against the state-of-the-art solvers in their class on an extensive and diverse set of instances. More precisely, we compare our exact solver MWCRedu with the currently best exact solver TSM-MWC on each dataset, and our heuristic solver MWCPeel with the currently best heuristic solvers FastWCLq and SCCWalk4l.
**Methodology.** The experiments were run on an Intel Xeon Silver 4216 CPU @2.10GHz with 16 cores under Linux with 95 GB of RAM. All solvers are implemented in C/C++ and compiled using GNU g++ with full optimization (-O3). Each solver was executed on up to 16 graph instances in parallel. As the solvers were run exclusively on the machine, there is no relevant difference to solving the graph instances sequentially. We always report the solution quality \(w(\hat{C})\) and the time to find that solution \(t_{sol}\). For exact solvers, we additionally give the time needed to prove optimality of the solution \(t_{prv}\). Solvers that use random number generation are run five times with different seeds and we report their average solutions to better capture their general performance. If an exact algorithm did not finish within a time limit of 3,600 seconds, it is halted and the best solution found so far is output. Heuristic algorithms are stopped after 1,000 seconds.
**Instances.** We evaluate our algorithms on a broad selection of graphs, covering different sizes, densities, weightings and areas of application. Some of the graphs are originally unweighted and thus were assigned weights artificially. For each unweighted graph, weights are drawn uniformly from the range \([1,200]\).1
Footnote 1: Other weight distributions such as power-law and exponential gave similar results and were excluded due to space constraints.
We compiled four sets of instances, with 58 instances altogether: OSM contains 12 naturally-weighted map labeling instances from Cai et al. [6], generated from OpenStreetMap data using the technique of Barth et al. [3]. The 10 instances in REP are real-world graphs from the network data repository [37], and the 23 instances in DIMACS were taken from the second DIMACS implementation challenge [26]. Moreover, we use 13 random hyperbolic graphs (RHG). These are randomly generated graphs such that the vertex degrees follow a power-law distribution [34] and were generated by the KaGen framework [16]. We varied the power-law exponent between 1.75 and 2.25 and chose the average degree between 100 and 500. For REP, DIMACS, and RHG, we assigned artificial weights as described above. See Table 8 in Appendix D for detailed per-instance statistics.
### Impact of New Data Reduction Rules.
We first investigate the impact of the reduction rules on the instances and compare the effect of adding our "new" rules to the "old" ones that are described in current literature. Table 3 shows reduced graph sizes on all instances, and Table 1 shows reduced graph sizes for a subset of instances.
On the DIMACS instances, the new data reduction rules do not help to compute smaller reduced graphs (hence they are excluded from the table). This is expected as these instances are dense and data reduction rules tend to work well on sparse instances. On the other instances, reduced graphs are significantly smaller when the new data reduction rules are employed additionally.
The largest reduction in the REP instance set is observed on web-wikipedia_link_it, where the new reduction rules result in an empty reduced graph, i.e., the instance is fully solved by the reductions only.
Figure 3: Original graph sizes and reduced graph sizes for old and old + new reductions.
The biggest improvement occurred on sc-TSOPF-RS-b2383, where the old rules were barely effective and reduced the number of nodes by only roughly 1 %. In combination with the new rules, however, the computed reduced graph contains only 42.29 % of the nodes of the original instance. On all instances, using the new rules in addition to the old ones always resulted in smaller reduced graphs than when just using the old ones. On average, the old rules alone reduced the graph size by about 67 %, which improved to over 80 % when combined with our new rules.
The new rules also work very well on the RHG instances and consistently produced smaller reduced graphs than when just using the old ones. Generally, the reductions are very efficient on these instances. If using only the old rules, the resulting reduced graphs are reduced to between 0.14 % and 1.64 % of the original graph sizes. Combined with the new rules, the range is between 0 % and 0.59 %. Two RHG instances were reduced to zero nodes when using the new rules in addition to the old ones. On average, the reduced graphs obtained by old and new rules together were only 0.05 % of the original graph sizes, whereas the average for the old rules alone was more than ten times larger.
The new and old rules together computed empty reduced graphs on all OSM instances, which never happened when using only the rules from the literature. On average, the old rules reduced the number of vertices down to 25.4 %, with a relatively large range between 5.58 % on district-of-columbia-AM2 and 56.42 % on idaho-AM3.
_In summary_, our new reduction rules distinctly and consistently produce smaller reduced graphs on all REP, RHG, and OSM instances and even compute empty reduced graphs on 15 instances, which the old ones alone never accomplished on any instance of our collection. Figure 3 summarizes this visually.
### Exact Algorithms.
We discuss the aggregated results for each of the four instance sets (see Table 2).
Our algorithm MWCRedu is more than an order of magnitude faster in the geometric mean than TSM-MWC on the OSM instances (Table 4), both with respect to time to find the solution \(t_{sol}\) and to prove optimality \(t_{prv}\). It is also consistently faster than TSM-MWC on each of the twelve instances in the set.
| Graph | old+new reductions: absolute | old+new reductions: % of \(n_{0}\) | old reductions only: absolute | old reductions only: % of \(n_{0}\) |
| :--- | ---: | ---: | ---: | ---: |
| _REP_ | | | | |
| bio-human-gene1 | **3,915** | **17.57** | 4,485 | 20.13 |
| sc-TSOPF-RS-b2383 | **16,123** | **42.29** | 37,737 | 98.99 |
| soc-orkut | **1,264,963** | **42.21** | 1,521,404 | 50.76 |
| web-wikipedia_link_it | **0** | **0.00** | 1,214 | 0.04 |
| web-wikipedia-growth | **83,724** | **4.48** | 637,483 | 34.08 |
| _RHG_ | | | | |
| rhg_250k_100_1.75 | **7** | **0.00** | 1,061 | 0.42 |
| rhg_500k_500_2.25 | **0** | **0.00** | 1,761 | 0.35 |
| rhg_750k_250_2.25 | **15** | **0.00** | 1,062 | 0.14 |
| rhg_750k_500_1.75 | **4,445** | **0.59** | 7,341 | 0.98 |
| rhg_750k_500_2.25 | **12** | **0.00** | 2,651 | 0.35 |
| _OSM_ | | | | |
| district-of-columbia-AM2 | **0** | **0.00** | 759 | 5.58 |
| greenland-AM3 | **0** | **0.00** | 1,768 | 35.46 |
| idaho-AM3 | **0** | **0.00** | 2,293 | 56.42 |
| massachusetts-AM3 | **0** | **0.00** | 802 | 21.66 |
| virginia-AM3 | **0** | **0.00** | 907 | 14.66 |

Table 1: Selected instances and reduced graph sizes (number of nodes) when both old and new data reduction rules are applied vs. reduced graph sizes obtained when only running reductions from the current literature. Smaller is better. \(n_{0}\) refers to the initial number of nodes.
As both are exact algorithms, the solution weights are identical except for two cases, where TSM-MWC failed to find the optimal solution within the time limit and stopped prematurely with a worse result. Thus, MWCRedu dominates here.
On DIMACS (Table 4), no major difference in performance between the two solvers is observable. MWCRedu was able to finish on nine of the 23 instances within the time limit, whereas TSM-MWC finished on only eight instances. The running times generally lie very close together, and the solution weights are identical except for seven cases. The reason for the similar behavior is that none of the new exact reductions employed by MWCRedu is able to remove vertices or edges for any instance in this set. Thus, the solver quickly proceeds to apply the B&B solver, which uses the same techniques as TSM-MWC. The overhead from applying the reduction rules is only notable for the easier instances. On average over those instances, where both finished regularly, MWCRedu performs slightly better, which is likely due to better initial solutions obtained from running local search during the reduction phase.
On the REP instances (Table 5), the results are mixed. MWCRedu and TSM-MWC both outperform the respective other algorithm for some instances. On three instances, TSM-MWC failed to prove optimality of a solution and terminated with a suboptimal result twice. TSM-MWC is very efficient for large instances with more than 1 000 000 vertices, whereas MWCRedu outperforms TSM-MWC on the smaller, more dense biology graphs.
On RHG (Table 5), MWCRedu outperforms its competitor TSM-MWC clearly. While TSM-MWC runs into a timeout twice and terminates with a suboptimal solution, MWCRedu always finishes regularly and is the faster algorithm except on one instance. Its dominance in running time is pronounced and up to two orders of magnitude. The reason for MWCRedu's good performance is likely the structure of the instances, which allows it to remove most vertices quickly using very efficient reductions.
_In summary_, MWCRedu is clearly the better algorithm on the OSM and RHG instances and on par with TSM-MWC on the DIMACS graphs. On instances that are small and dense, such as in the REP set, TSM-MWC may be the faster algorithm, whereas MWCRedu can play out its strengths on very large ones. Notably, MWCRedu finished within the time limit on the same instances as TSM-MWC plus some more, making it the _more reliable candidate_.
### Heuristic Algorithms.
We now compare our heuristic solver MWCPeel against the state-of-the-art solvers FastWCLq and SCCWalk4l and discuss the differences on each of the four instance sets. Aggregated results are presented in Table 2.
As shown in Table 6, MWCPeel performs best for 11 out of 12 OSM instances. Both MWCPeel and FastWCLq find the optimal solution to all instances.
For the DIMACS graphs (Table 6), SCCWalk4l clearly dominates its competitors.
| Instance Set | \(t_{sol}\) TSM-MWC | \(t_{sol}\) MWCRedu | \(t_{prv}\) TSM-MWC | \(t_{prv}\) MWCRedu | \(w(\hat{C})\) TSM-MWC | \(w(\hat{C})\) MWCRedu |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| _Exact Results_ | | | | | | |
| DIMACS | 1,106.99 | **946.34** | 1,714.98 | **1,650.88** | 6,460 | **6,587** |
| REP | **117.15** | 134.60 | **190.45** | 259.14 | 14,092 | **14,321** |
| RHG | 95.93 | **14.55** | 128.16 | **17.73** | 106,210 | **106,781** |
| OSM | 27.62 | **1.55** | 31.01 | **2.43** | 537,149 | **542,993** |

Table 2: Overview of results for exact (top) and heuristic (bottom) algorithms as geometric mean per graph set.
Between FastWCLq and MWCPeel, FastWCLq mostly computes slightly higher weight solutions, though it takes longer to compute them. Looking at the instances where TSM-MWC fails to find the optimal solution, both FastWCLq and MWCPeel achieve higher weight solutions in a much smaller amount of time for most of them.
As shown in Table 7, performance on REP graphs is very competitive among the heuristic solvers. While all algorithms compute the best solution an approximately equal number of times, the solution quality of SCCWalk4l is the lowest on average. Taking speed into account, MWCPeel shows a good performance in comparison. On average, MWCPeel is a factor of 3.7 faster than the second-fastest algorithm FastWCLq, which computes 0.9 % better solutions on average than MWCPeel. It should be noted, however, that our exact solver MWCRedu computes even higher weight solutions than FastWCLq, while also being faster on average.
The results for RHG are presented in Table 7. Here, MWCPeel outperforms the other solvers in 31 out of 39 instances. While FastWCLq sometimes finds a slightly higher weight solution than MWCPeel, it has a higher running time on average (by a factor of 3.8). SCCWalk4l is clearly outperformed both in speed and solution quality.
## 7 Conclusion
We presented an exact algorithm called MWCRedu and a heuristic algorithm called MWCPeel for solving the maximum weight clique problem. Our algorithms interleave successful techniques from related work with novel data reduction rules that use local graph structures to identify and remove vertices and edges while maintaining the optimal solution. In experiments on a large range of graphs, we find that they outperform the current state-of-the-art solvers on most inputs. In particular, MWCRedu is faster by orders of magnitude on naturally weighted, medium-sized street network graphs and random hyperbolic graphs. MWCPeel outperforms its competitors on these instances, but is slightly less effective on extremely dense or large instances. In future work, we want to consider parallelization of our approaches. Given the good results of our algorithms, we plan to release them as open source.
**Acknowledgments.** We acknowledge support by DFG grant SCHU 2567/3-1. N. K. was supported by the Vienna Science and Technology Fund (WWTF) through project VRG19-009.
|
2310.19318 | Searching for Associations Between Short Gamma-ray Bursts and Fast Radio
Burst | The physical origin of fast radio bursts (FRBs) is still unclear. However,
young magnetars associated with short-duration gamma-ray bursts (SGRBs) have
been thought to be possible central engines for some FRBs. In this paper, we
perform a systematic search for SGRBs that are associated with FRBs in a sample
including 623 FRBs (601 one-off bursts and 22 repeaters) and 168 SGRBs with
precise localizations. We find that FRB 190309A is spatially associated with
GRB 060502B, with a chance probability of 0.05 when temporal and redshift
information is taken into account. Considering the high chance probability (the
statistical significance is < 3{\sigma}), we examine other observational
properties such as the host galaxy, the dispersion measure, and the energy
budget of the central engine to check the possibility of their association.
Although the available observational information is insufficient to determine
whether they are physically associated, it does not rule out such a
possibility. As the only pair of FRB and GRB that are spatially associated, it
remains an interesting case worthy of further attention | Ming-Xuan Lu, Long Li, Xiang-Gao Wang, Can-Min Deng, Yun-Feng Liang, Da-Bin Lin, En-Wei Liang | 2023-10-30T07:28:47Z | http://arxiv.org/abs/2310.19318v1 | # Searching for Associations Between Short Gamma-ray Bursts and Fast Radio Bursts
###### Abstract
The physical origin of fast radio bursts (FRBs) is still unclear. However, young magnetars associated with short-duration gamma-ray bursts (SGRBs) have been thought to be possible central engines for some FRBs. In this paper, we perform a systematic search for SGRBs that are associated with FRBs in a sample including 623 FRBs (601 one-off bursts and 22 repeaters) and 168 SGRBs with precise localizations. We find that FRB 190309A is spatially associated with GRB 060502B, with a chance probability of 0.05 when temporal and redshift information is taken into account. Considering the high chance probability (the statistical significance is \(<3\sigma\)), we examine other observational properties such as the host galaxy, the dispersion measure, and the energy budget of the central engine to check the possibility of their association. Although the available observational information is insufficient to determine whether they are physically associated, it does not rule out such a possibility. As the only pair of FRB and GRB that are spatially associated, it remains an interesting case worthy of further attention.
Radio transient sources (2008) -- Gamma-ray bursts (629) -- Magnetars (992)
## 1 Introduction
Fast radio bursts (FRBs) are mysterious radio transients with millisecond durations (Lorimer et al., 2007; Thornton et al., 2013; Platts et al., 2019). FRBs have extremely high brightness temperatures (Cordes and Chatterjee, 2019; Petroff et al., 2019). The dispersion measures (DMs) of FRBs exceed the Milky Way's contribution and some of them have been reliably determined to be located in the extragalactic systems (Chatterjee et al., 2017; Bannister et al., 2019; Prochaska et al., 2019; Ravi et al., 2019; Marcote et al., 2020). Up to now, except for the X-ray burst from the Galactic soft gamma repeater (SGR) SGR1935+2154 coincident with the fast radio burst FRB200428 (Bochenek et al., 2020; CHIME/FRB Collaboration et al., 2020), no other multiwavelength/multimessenger transients associated with FRBs are definitely observed (Yamasaki et al., 2016; DeLaunay et al., 2016; Hardy et al., 2017; Zhang and Zhang, 2017; Xi et al., 2017; James et al., 2019; Cunningham et al., 2019; Mereghetti et al., 2020; Tavani et al., 2020; Xin et al., 2021; Li et al., 2021; The LIGO Scientific Collaboration et al., 2022; Wang and Nitz, 2022), though there have been some works claiming tentative associations of FRBs with multiwavelength transient counterparts (Wang et al., 2020; Li et al., 2022). Current FRB models can be divided into two categories: catastrophic models (e.g., Kashiyama et al., 2013; Totani, 2013; Falcke and Rezzolla, 2014; Zhang, 2014, 2016; Liu et al., 2016; Wang et al., 2016) and non-catastrophic models (e.g., Dai et al., 2016; Murase et al., 2016; Metzger et al., 2017; Margalit and Metzger, 2018; Zhang, 2020; Ioka and Zhang, 2020; Wang et al., 2020; Geng et al., 2021; Deng et al., 2021). The former (e.g. NS-NS merger or NS-BH merger model with NS and BH denoting neutron star and black hole, respectively) is invoked to explain one-off FRBs. The latter usually invokes a NS born in a catastrophic event as the central engine of the FRB.
The localization of FRB sources and the identification of their host galaxies are important for understanding their physical origins, and several host galaxies of FRBs have been identified. The host galaxy of the first repeat source
FRB 121102 is found to be similar to the host galaxies of superluminous supernovae (SLSNe) and long gamma-ray bursts (LGRBs) (Tendulkar et al., 2017; Nicholl et al., 2017; Metzger et al., 2017; Tendulkar et al., 2017; Chatterjee et al., 2017; Marcote et al., 2017; Zhang and Wang, 2019). However, the host environment of another well-localized repeater FRB 20180916B is completely different to FRB 121102 (Tendulkar et al., 2021). The one-off bursts FRB 190523A and FRB 180924B are found to occur in massive galaxies with low star formation rates and have large offsets from the centers of their hosts (Ravi et al., 2019; Bannister et al., 2019). These two FRBs share similar properties with the host environments of short gamma-ray bursts (SGRBs) and binary neutron star (BNS) mergers (Margalit et al., 2019; Gourdji et al., 2020). In addition, Li et al. (2019) also suggested that some FRB host candidates have low star formation and large offsets, and the corresponding FRBs may be driven by NSs that were born in BNS mergers.
Gamma-ray bursts (GRBs) are the most luminous explosions in the universe. Based on the duration distribution, they can be divided into LGRBs and SGRBs with a separation line of about 2 seconds (Kouveliotou et al., 1993; Kumar and Zhang, 2015; Zhang, 2018). For SGRBs, the NS-NS or NS-BH merger is the preferred progenitor model. The joint detection of an SGRB and the gravitational wave event confirms the NS-NS merger origin for at least some SGRBs (Abbott et al., 2017; Goldstein et al., 2017). Although the event rate of SGRBs is much smaller than that of FRBs, we believe that some FRBs may be related to SGRBs for the following reasons: 1. Several theoretical models predict FRB emissions are also in connection with neutron star mergers (Totani, 2013; Wang et al., 2016; Zhang, 2020). 2. The BNS merger may leave behind a massive, stable, rapidly-spinning magnetar that could serve as the central engine of SGRB, which could also power FRBs after an uncertain time delay (Popov and Postnov, 2013; Lyubarsky, 2014; Kulkarni et al., 2014; Gao et al., 2016; Katz, 2016; Yang and Zhang, 2018; Lu and Kumar, 2018; Margalit et al., 2019; Wang et al., 2020; Lin and Totani, 2020; Beloborodov, 2020). 3. For most host galaxies of one-off FRBs, their properties are similar to the host galaxies of SGRBs, which have low star formation rates and large offsets from the centers of the galaxies (Barthelmy et al., 2005; Gehrels et al., 2005; Fox et al., 2005; Bloom et al., 2006; Fong et al., 2013; Bannister et al., 2019; Ravi et al., 2019; Marcote et al., 2020; Wang et al., 2020).
Searches for continuous radio emission and/or FRBs associated with SGRBs have been conducted in many works but have not revealed a reliable association (Dessenne et al., 1996; Bannister et al., 2012; Obenberger et al., 2014; Palaniswamy et al., 2014; Kaplan et al., 2015; Anderson et al., 2018; Rowlinson et al., 2019; Anderson et al., 2021; Rowlinson et al., 2021; Tian et al., 2022). It is valuable to study the connection between SGRBs and FRBs with large FRB and SGRB samples (Curtin et al., 2022). Recently, the Canadian Hydrogen Intensity Mapping Experiment (CHIME) released a large sample of FRBs (The CHIME/FRB Collaboration et al., 2021). In this work, we perform a systematic search for SGRBs that may be associated with FRBs in the SGRB and FRB samples which include all publicly reported FRBs and precisely-localized SGRBs before September 2022, making our work the one considering the most complete samples to date.
## 2 Search for FRBs associated with SGRBs
The first FRB was reported in 2007 (Lorimer et al., 2007). Since then, 807 FRBs have been reported in the literature until September 2022, including 601 one-off bursts and 204 repeat bursts from 22 repeaters1(Petroff and Yaron, 2020). For the SGRB sample, we consider the GRB catalog2 presented by J. Greiner (JG catalog), which compiles GRBs detected by various detectors, e.g., Fermi, Swift, HETE-2, BeppoSAX, BATSE, AGILE, etc. The JG catalog contains thousands of objects and is updated almost every day. Until September 2022, there are more than one thousand GRBs with afterglow detections recorded in the JG catalog. In general, T\({}_{90}\) is used to categorize LGRBs (T\({}_{90}\)\(>\) 2s) and SGRBs (T\({}_{90}\)\(\leq\) 2s), and there are 111 SGRBs with afterglow detections in the JG catalog (we have also checked the _Swift_/XRT website 3 and confirmed that there are no other SGRBs with afterglow detections by _Swift_/XRT). In addition, some GRBs with the T\({}_{90}\) larger than 2s are also classified as SGRBs because there is evidence that they are of merger origin, which include 19 SGRBs (T\({}_{90}\)\(>\) 2s). In total, there are 130 SGRBs with afterglow detections in our sample. In addition, considering the excellent localization capability of _Swift_/BAT, we also use the SGRBs detected by _Swift_/BAT even without an afterglow detection (38 SGRBs). We list our SGRB sample in Table 1. Figure 1 shows the sky distribution of our sample, including 168 SGRBs and 623 FRBs (601 one-off bursts and 22 repeaters).
Footnote 1: [https://www.wis-tns.org](https://www.wis-tns.org)
Footnote 2: [https://www.mpe.mpg.de/](https://www.mpe.mpg.de/)\(\sim\)jcg/grbgen.html/userconsent#
Footnote 3: [https://swift.gsfc.nasa.gov/archive/grb_table.html/](https://swift.gsfc.nasa.gov/archive/grb_table.html/)
We perform a systematic search of our sample based on the following three criteria (Wang et al., 2020): I, the SGRB should positionally overlap with the FRB within the localization error circle; II, the SGRB should occur earlier
than the FRB; III, the redshift of the SGRB should be compatible with the FRB distance derived from its DM. The criterion III is described in more detail below.
The observed total DM can be decomposed as
\[\rm DM_{total,obs}=DM_{MW}+DM_{IGM}+\frac{DM_{host}}{1+z} \tag{1}\]
where \(\rm DM_{MW}\) is the DM contribution from the Milky Way, the \(\rm DM_{IGM}\) is the DM contribution from intergalactic medium (IGM) and \(\rm DM_{host}\) is the DM contribution from the FRB host galaxy. Although for a single FRB the relationship between \(\rm DM_{IGM}\) and redshift is difficult to be determined, a mean value can be estimated through (Zhang, 2018; Deng et al., 2019)
\[\rm\overline{DM}_{IGM}(z)=\frac{3cH_{0}\Omega_{b}f_{IGM}f_{e}}{8\pi Gm_{p}} \int_{0}^{z}\frac{1+z^{{}^{\prime}}}{E(z^{{}^{\prime}})}dz^{{}^{\prime}} \tag{2}\]
where \(E(z^{\prime})=H(z^{\prime})/H_{0}\) with \(H_{0}\) the Hubble constant, \(\Omega_{b}\) is the baryon density, \(f_{IGM}\sim 0.83\) is the fraction of baryons in IGM (Fukugita et al., 1998), \(m_{p}\) is the proton mass, and the number of free electrons per baryon in the universe is \(f_{e}\,\sim 7/8\)(Deng and Zhang, 2014). The \(\rm\Lambda CDM\) cosmological parameters of the Planck results are adopted, i.e., \(H_{0}\)=67.74 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}\) = 0.3089, \(\Omega_{\Lambda}\) = 0.6911, and \(\Omega_{b}\) = 0.0486 (Planck Collaboration et al., 2016). Because Eq. (2) assumes that the matter in the IGM is uniformly distributed, the effects of inhomogeneities should also be considered, leading to an uncertainty \(\sigma_{\rm IGM}\) (\(z\)) of the \(\rm\overline{DM}_{IGM}\) (\(z\))(McQuinn, 2014). Thus, we have to take into account the \(\sigma_{\rm IGM}\)(z) when converting a redshift into \(\rm DM_{IGM}\). This means that for a given redshift, a DM range of \(\rm\overline{DM}_{IGM}\) (\(z\)) \(\pm\)\(\sigma_{\rm IGM}\)(z) is acceptable. The criterion III requires this range covers the \(\rm DM_{IGM}=\rm DM_{total,obs}-\rm DM_{MW}-\rm DM_{host}/(1+z)\). We use the results in McQuinn (2014) to obtain the value of \(\sigma_{\rm IGM}\) and adopt the baryon distribution model that considers the details of the accretion rates of baryons into dark matter halos.
The \(\rm DM_{MW}\) values are provided in the FRB catalog (Petroff and Yaron, 2020) and are derived from two Galactic electron models (Cordes and Lazio, 2002; Yao et al., 2017). However, only a handful of FRBs have measurements of \(\rm DM_{host}\) to date (e.g., FRB 121102 (Tendulkar et al., 2017), FRB 171020 (Mahony et al., 2018), FRB 200120E (Bhardwaj et al., 2021)). Therefore, we adopt \(\rm DM_{obs}-DM_{MW}\) as a proxy for (i.e., an upper limit on) \(\rm DM_{IGM}\), ignoring the DM contribution from the host galaxy. We have tested that assuming a typical \(\rm DM_{host}\) value of 50 pc cm\({}^{-3}\) (Deng et al., 2019) would not change our main conclusions.
Figure 1: Sky distributions in celestial coordinates of the 623 FRBs and 168 SGRBs considered in this work. The black points represent the positions of the FRBs. The blue points represent the locations of SGRBs. The region enclosing GRB 060502B and FRB 190309A is marked by a red circle.
By requiring that \(\rm\overline{DM}_{IGM}(z)\pm\sigma_{\rm IGM}(z)\) covers \(\rm DM_{IGM}=DM_{\rm obs}-DM_{\rm MW}\), we estimate the redshift range compatible with the observed DM for each FRB in our sample. Only one pair of SGRB and FRB satisfies the three criteria (in fact, only this pair satisfies criterion I): GRB 060502B and FRB 190309A. The position of GRB 060502B located by _Swift_/XRT is (RA, Dec) = \((18\,35\,45.74,+52\,37\,52.47)\) with a 90% error radius of 4\({}^{\prime\prime}\).4 (Troja et al., 2006a). The redshift of GRB 060502B is \(z\) = 0.287 (Bloom et al., 2007). FRB 190309A was detected by CHIME with a position of (RA, Dec) = \((278.96\pm 0.23,52.41\pm 0.24)\)4 (The CHIME/FRB Collaboration et al., 2021). The angular separation between the two sources is 0.22\({}^{\circ}\), less than the localization uncertainty of FRB 190309A by CHIME. FRB 190309A was detected 4694 days (\(\sim\)12.8 yr) after the GRB 060502B trigger. The observed total DM is 356.9 pc cm\({}^{-3}\) and the extragalactic contribution \(\rm DM_{E}=DM_{\rm IGM}+DM_{\rm host}/(1+z)\) is 298.3 pc cm\({}^{-3}\) (The CHIME/FRB Collaboration et al., 2021) when using the NE2001 model (Cordes and Lazio, 2002). We estimate its redshift range to be \(z\sim(0.23\sim 0.54)\) (assuming a \(\rm DM_{host}\) of 50 gives \(z\sim(0.20\sim 0.48)\)), which covers the redshift of GRB 060502B.
Footnote 4: [http://www.chime-frb.ca/catalog/FRB20190309A](http://www.chime-frb.ca/catalog/FRB20190309A)
To derive the significance of the association, we calculate the chance probability of the putative GRB 060502B-FRB 190309A association by Monte Carlo (MC) simulation. The CHIME sky coverage is \(2\pi(1-\cos 101^{\circ})=7.48\) sr, where \(101^{\circ}\) is the latitude range of the CHIME observation (RA: \(0-360^{\circ}\), Dec: \(-11^{\circ}\) to \(90^{\circ}\)) (The CHIME/FRB Collaboration et al., 2021). Due to the large sky coverage, the present FRB sample is dominated by the CHIME FRBs, with 503 (483 one-off FRBs and 20 repeaters) out of 623 FRBs in the whole sample. Therefore, the MC simulation can be performed considering only the CHIME FRBs and the SGRBs within the CHIME sky coverage. 103 of the 168 SGRBs in our SGRB sample (\(\approx 61.3\%\)) lie within the CHIME sky coverage. Finally, 503 CHIME FRBs and 103 SGRBs are used in this MC simulation.
We perform the MC simulation as follows: Based on the sky distributions of the observed 103 SGRBs and 503 FRBs, we generate 103 and 503 pseudo-SGRBs and pseudo-FRBs within the CHIME sky coverage. The SGRBs are randomly sampled isotropically, while the FRBs follow the CHIME FRB distribution in the sky, as shown in Figure 2. For the simulated FRBs and SGRBs, we randomly assign each of them a localization error, observation time, and redshift taken from the actual data of the real FRBs and SGRBs5. It is important to note that the localization errors of CHIME FRBs are not uniform across the sky; they are notably better at lower declinations than at higher declinations. Therefore, we divide the sky into 3-degree declination bins and randomly select a real localization error from the CHIME FRBs within the corresponding bin based on the declination of the simulated FRB.
Footnote 5: Only 36 SGRBs have redshift measurements, and we randomly draw one from these 36 redshifts for each pseudo-SGRB.
We perform \(10^{5}\) MC simulations (for each simulation 103 SGRBs and 503 FRBs are sampled) and calculate the chance probabilities as follows. For each simulation, we apply the three criteria mentioned above to select association candidates. We record the number of simulations in which there is at least one pair of pseudo-sources satisfying the criteria, denoted as \(N\). The chance probability is then calculated as \(P=N/10^{5}\). Considering only criterion I, we find a chance probability of 0.27. When criterion II is included, the chance probability is 0.21. With criterion III included, the chance probability is further reduced to 0.05.
## 3 Is GRB 060502B associated with FRB 190309A?
### Host Galaxy
GRB 060502B triggered the _Swift_/BAT on 2 May 2006 (UT) at 17:24:41 (\(T_{0}\)), with \(T_{90}=0.131\) s (Troja et al., 2006b; Sato et al., 2006). The _Swift_/XRT and _Swift_/UVOT began to observe the X-ray and optical afterglow of GRB 060502B at 76 and 100 s after the _Swift_/BAT trigger, respectively (Troja et al., 2006b; Poole and Troja, 2006). Many optical telescopes have carried out follow-up observations, e.g., MASTER optical telescopes (Lipunov et al., 2006), Xinglong 0.8m telescope (Zhai et al., 2006), Tautenburg 1.34m Schmidt telescope (Kann et al., 2006), AROMA optical telescope (Takahashi et al., 2006b,a), MDM 1.3m telescope (Halpern and Mirabal, 2006b,a), Cassini telescope (Meurs et al., 2006), Gemini North telescope (Berger et al., 2006; Price et al., 2006) and MAO 1.5m telescope (Rumyantsev et al., 2006). No optical counterpart was found from several minutes to more than ten hours after the GRB 060502B trigger, and only a faint X-ray afterglow was detected by _Swift_/XRT.
The identification of the host galaxy of GRB 060502B has been studied in some previous works (Bloom et al., 2006a, 2007; Berger et al., 2007; Church et al., 2011, 2012). Bloom et al. (2007) proposed that a bright galaxy (referred to as \(G^{*}\)), situated south of the _Swift_/XRT localization of the GRB with an angular separation of 17.5 arcsec (approximately 73 kpc assuming a redshift of \(z\sim 0.287\)), is the host galaxy of GRB 060502B (chance probability of 0.03). Church et al. (2011) noted that such a large offset is inconsistent with the scenario that GRB 060502B originates from a compact binary merger in the galaxy. They suggest that either the progenitor of GRB 060502B was a binary neutron star system that formed within a globular cluster inside \(G^{*}\), giving the binary a large initial kick, or \(G^{*}\) is not the actual host galaxy of GRB 060502B. Apart from \(G^{*}\), within 1 arcmin of the GRB 060502B position we find no other possible host galaxy candidates with redshift measurements6. Furthermore, the relation between the spectral peak energy (\(E_{p}\)) and the isotropic gamma-ray energy (\(E_{\gamma,{\rm iso}}\)), namely the _Amati_ relation (Amati et al., 2002; Wang et al., 2018), indicates that when adopting the redshift of \(G^{*}\) (\(z=0.287\); Zhang et al. 2009), GRB 060502B lies well within the SGRB _Amati_ relation. Therefore, we also consider the bright galaxy \(G^{*}\) as the host galaxy of GRB 060502B.
Footnote 6: Here, we search for the candidates from the Extragalactic Database (NASA/IPAC Extragalactic Database (NED) 2019) and Sloan Digital Sky Survey [https://www.sdss.org/dr17/](https://www.sdss.org/dr17/).
The properties of the host environment (e.g., the offset from the host center, the star-formation rate (SFR), and the total stellar mass of the galaxy (\(M_{\rm s,tot}\))) may provide important clues for unveiling the origin of FRBs. Although the sample size of FRB host galaxies is still limited, some statistical analyses between FRB hosts and the host galaxies of other transient sources, e.g., SGRBs, core-collapse supernovae (CCSNe), Type Ia supernovae (SNe Ia), LGRBs, SLSNe, have been performed (Safarzadeh et al., 2020; Bhandari et al., 2020; Heintz et al., 2020; Li and Zhang, 2020; Mannings et al., 2021; Bochenek et al., 2021; Bhandari et al., 2022; Law et al., 2023). A majority of these works suggest that the host galaxies of SGRBs and CCSNe are similar to those of FRBs. In particular, Li and Zhang (2020) investigated 9 FRBs' hosts and suggested that the SFRs and the \(M_{\rm s,tot}\) of 8 of them are consistent with SGRBs. Bhandari et al. (2022) presented that the global properties (offset, SFR, \(M_{\rm s,tot}\)) of FRB hosts are indistinguishable from those of the hosts of CCSNe and SGRBs.
To compare the host galaxy properties of GRB 060502B (i.e., \(G^{*}\)) with those of the FRB population, we also compile the offsets, SFRs, and \(M_{\rm s,tot}\) for 30 FRB hosts in Table 2 and Figure 3, including 17 FRBs from Bhandari et al. (2022), 11 FRBs from Law et al. (2023) and two possible hosts from Bhardwaj et al. (2021) (FRB 181030A) and Bhandari et al. (2022) (FRB 181112A). The red lines denote the values for \(G^{*}\). As shown in the figures, although the SFR of \(G^{*}\) falls well within the range of the FRB host galaxy population, the other two properties (offset and \(M_{\rm s,tot}\)) are not consistent with those of FRBs, which does not support \(G^{*}\) being a typical FRB host galaxy.
Figure 2: CHIME does not have a uniform exposure across the sky (The CHIME/FRB Collaboration et al., 2021). This plot shows the probability density distribution of the 503 CHIME FRBs as a function of declination, which is given by the relative number of FRBs in the DEC \(\pm\) 1\({}^{\circ}\) region divided by the corresponding solid angle.
However, as mentioned above, if \(G^{*}\) is considered as the host galaxy of GRB 060502B, the likely formation channel for this GRB is the merger of a BNS system formed in a globular cluster (GC) of \(G^{*}\) (Church et al., 2011). Therefore, if FRB 190309A is associated with GRB 060502B, the FRB should also arise from this channel, which differs from the formation channels of most FRBs. One FRB with a similar channel is FRB 200120E, which could have arisen in a GC and been powered by a magnetar born from the merger of a compact binary system within the GC (Kirsten et al., 2022). Although the merger position of the progenitor of FRB 190309A is not identical to that of FRB 200120E, the sparse sample currently does not allow us to tell whether \(G^{*}\) matches the host properties of this class of FRBs, and thus we cannot rule out the possibility that \(G^{*}\) serves as the host galaxy of FRB 190309A.
### Dispersion Measure
DM\({}_{\rm host}\) can be resolved into three components: DM\({}_{\rm src}\), DM\({}_{\rm ISM}\) and DM\({}_{\rm halo}\). They are the DMs contributed by the source environment, the interstellar medium (ISM), and the halo of the host galaxy, respectively. Here, we assume that GRB 060502B and FRB 190309A are associated, and we want to estimate whether the source environment of GRB 060502B can allow the FRB to escape and whether the observed DM\({}_{\rm host}\) of FRB 190309A can be explained by the host galaxy of GRB 060502B.
Since GRB 060502B is supposed to be from a BNS merger, its DM\({}_{\rm src}\) would be contributed by the ejecta of the BNS merger. The BNS merger ejecta differs from that of CCSNe, with a higher velocity \(v\sim(0.1-0.3)c\) and a lower mass \(M\sim(10^{-4}-10^{-2})M_{\odot}\). After the BNS merger, the DM\({}_{\rm src}\) can be derived as follows (Wang et al., 2020),
\[{\rm DM}_{\rm src}=n_{e}\Delta R\simeq\frac{\eta Y_{e}M}{4\pi m_{p}(vt)^{2}} \simeq 0.17{\rm pc\,cm^{-3}}\times\,\eta\Big{(}\frac{Y_{e}}{0.2}\Big{)}\Big{(} \frac{M}{10^{-3}M_{\odot}}\Big{)}\Big{(}\frac{v}{0.2\,c}\Big{)}^{-2}\Big{(} \frac{t}{1\,{\rm yr}}\Big{)}^{-2} \tag{3}\]
where \(n_{e}\simeq\eta Y_{e}M/[4\pi m_{p}(vt)^{3}]\) is the free electron density, \(\Delta R\sim vt\) is the ejecta thickness, \(v\) is the speed of the ejecta, \(t\) is the elapsed time after the BNS merger, \(\eta\) is the ionization fraction, \(Y_{e}\) is the electron fraction, and \(M\) is the mass of the ejecta. The DM\({}_{\rm src}\) is derived to be 7.9 \(\times\) 10\({}^{-4}\) pc cm\({}^{-3}\) using \(t=12.8\) yrs (the time difference between GRB 060502B and FRB 190309A). This indicates a clean source environment, which is thought to be a prerequisite for FRB escape if the magnetar acts as the central engine to power the FRB (Murase et al., 2016; Metzger et al., 2017; Margalit & Metzger, 2018).
According to the redshift of GRB 060502B (\(z\) = 0.287), the \(\overline{\rm DM}_{\rm IGM}\) can be estimated from equation (2), i.e. \(\overline{\rm DM}_{\rm IGM}\simeq 245.2\) pc cm\({}^{-3}\). We can derive that the DM\({}_{\rm host}\) is
\[{\rm DM}_{\rm host}=(1+z)({\rm DM}_{\rm obs}-{\rm DM}_{\rm MW}-\overline{\rm DM }_{\rm IGM})\simeq 68.3\,{\rm pc\,cm^{-3}} \tag{4}\]
where DM\({}_{\rm MW}\) = 58.6 pc cm\({}^{-3}\) is the contribution from the Milky Way. Clearly, the DM\({}_{\rm src}\) (\(\sim\) 7.9 \(\times\) 10\({}^{-4}\) pc cm\({}^{-3}\)) can be neglected. Meanwhile, due to the large offset between GRB 060502B and \(G^{*}\), the DM\({}_{\rm ISM}\) part should also be neglected. Thus, the DM\({}_{\rm host}\) is mainly contributed by the DM\({}_{\rm halo}\).
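For reference, the arithmetic behind Eqs. (3)-(4) can be reproduced in a few lines (a sketch using only numbers quoted in the text and the fiducial bracket values of Eq. (3); the quoted DM\({}_{\rm src}=7.9\times 10^{-4}\) pc cm\({}^{-3}\) reflects the authors' exact parameter choices, so the fiducial estimate below differs slightly but is equally negligible):

```python
# Quick consistency check of Eqs. (3)-(4) using the numbers quoted in the text.
z, t_yr = 0.287, 12.8
dm_src = 0.17 * t_yr ** -2                       # Eq. (3) with all bracket factors = 1
dm_mw, dm_igm, dm_host = 58.6, 245.2, 68.3       # pc cm^-3
dm_obs = dm_host / (1.0 + z) + dm_mw + dm_igm    # Eq. (4) inverted for the observed DM
print(f"DM_src (fiducial) ~ {dm_src:.1e} pc cm^-3, negligible")
print(f"implied DM_obs of FRB 190309A ~ {dm_obs:.1f} pc cm^-3")
```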
Following Church et al. (2011), we adopt the below profile to model the dark matter halo of \(G^{*}\)(Thomas et al., 2009),
\[\rho(r)=\frac{v_{\rm h}^{2}}{4\pi G}\frac{3r_{\rm h}^{2}+r^{2}}{(r_{\rm h}^{2 }+r^{2})^{2}} \tag{5}\]
Figure 3: Distributions of offsets (left panel), host galaxy SFRs (middle panel), and stellar masses (right panel) for 30 FRBs (light blue region). The dashed red lines represent the values for GRB 060502B.
where \(r_{\rm h}=20.45\,{\rm kpc}\) is the core radius of the halo, \(v_{\rm h}=505.31\,{\rm km\,s^{-1}}\) is the circular velocity at infinity and \(G=6.67\times 10^{-11}\,{\rm m^{3}\,kg^{-1}\,s^{-2}}\) is the gravitational constant. We take the halo mass from Church et al. (2011), i.e. \(6.01\times 10^{12}M_{\odot}\), which is enclosed within a spherical radius of \(R_{\rm max}=105\,{\rm kpc}\). We consider the simplest model for baryons in the halo, which are assumed to trace the underlying dark matter distribution (Prochaska and Zheng, 2019). The baryon density in the halo is thus \(\rho_{\rm b}(r)=f_{\rm b,halo}\,\rho(r)\,\Omega_{\rm b}/\Omega_{\rm m}\), where the cosmic baryon fraction is \(\Omega_{\rm b}/\Omega_{\rm m}\approx 0.158\) and \(f_{\rm b,halo}\) is the fraction of baryons residing in the halo (i.e., excluding the portion in the form of ISM and stars). Here, we consider two values: (1) \(f_{\rm b,halo}=0.75\), which assumes that \(\approx 25\%\) of the baryons exist in the ISM, stars and their remnants (Fukugita et al., 1998); (2) \(f_{\rm b,halo}=0.40\), which is a lower limit for a \(\sim 10^{12}M_{\odot}\) halo found by Hafen et al. (2019).
The line of sight may travel through the \(G^{*}\) halo, and thus the DM contributed by the halo is
\[{\rm DM}(x)=\int_{0}^{x}n_{e}dl. \tag{6}\]
The \(n_{e}\) is the free-electron number density in the halo, which can be estimated by (Prochaska and Zheng, 2019)
\[n_{e}=\mu_{\rm e}\frac{\rho_{\rm b}}{m_{\rm p}\mu_{\rm H}} \tag{7}\]
where \(\mu_{\rm H}=1.3\) is the reduced mass (accounting for helium) and \(\mu_{\rm e}=1.167\) accounts for fully ionized helium and hydrogen. The electron number density distribution is shown in Figure 4. Since we do not know the exact position of the source along the line of sight, the \(x\) in Eq. (6) could be anywhere in \(0\leq x\leq 2\sqrt{R_{\rm max}^{2}-R_{\perp}^{2}}=151\,{\rm kpc}\) (where \(R_{\perp}=73\,{\rm kpc}\) is the impact parameter), and we can only estimate the DM\({}_{\rm halo}\) range corresponding to \(x\) varying from 0 to 151 kpc. Therefore, we obtain a DM\({}_{\rm halo}\) range of \(0-470\) pc cm\({}^{-3}\) (for \(f_{\rm b,halo}=0.75\)) or \(0-250\) pc cm\({}^{-3}\) (for \(f_{\rm b,halo}=0.4\)). Although the DM\({}_{\rm halo}\) result is affected by the profile model of the galaxy halo (one can see Figure 1 in Prochaska and Zheng (2019), which shows the DM\({}_{\rm halo}\) results for different halo profiles; see also Cook et al. (2023) for a newly developed model of the Galactic electron halo), at least for the profile considered in this paper [Eq. (5)], the DM\({}_{\rm host}\) of FRB 190309A (68.3 pc cm\({}^{-3}\)) is compatible with the possibility that \(G^{*}\) is the host galaxy.
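The line-of-sight integral of Eq. (6) through the profile of Eqs. (5) and (7) can be evaluated numerically; the following is a minimal sketch (not the authors' code) using the parameter values quoted above and standard cgs constants, which reproduces the quoted maximum DM\({}_{\rm halo}\) for \(f_{\rm b,halo}=0.75\):

```python
# Numerical evaluation of Eq. (6) for the halo of G*, using Eqs. (5) and (7).
import numpy as np

KPC, PC = 3.086e21, 3.086e18        # cm per kpc / pc
G, M_P = 6.674e-8, 1.673e-24        # cgs gravitational constant, proton mass
r_h, v_h = 20.45 * KPC, 505.31e5    # core radius, circular velocity at infinity
R_perp, R_max = 73.0 * KPC, 105.0 * KPC
f_b_halo, fb_cosmic = 0.75, 0.158   # halo baryon fraction (also try 0.40), Omega_b/Omega_m
mu_e, mu_H = 1.167, 1.3

def n_e(r):
    """Free-electron density, Eq. (7), for baryons tracing the halo profile of Eq. (5)."""
    rho_dm = v_h**2 / (4.0 * np.pi * G) * (3.0 * r_h**2 + r**2) / (r_h**2 + r**2)**2
    return mu_e * f_b_halo * fb_cosmic * rho_dm / (M_P * mu_H)

l_max = 2.0 * np.sqrt(R_max**2 - R_perp**2)       # full chord length, ~151 kpc
l = np.linspace(0.0, l_max, 4000)
r = np.sqrt(R_perp**2 + (l - 0.5 * l_max)**2)     # galactocentric radius along the chord
dm_halo = float(np.sum(n_e(r)) * (l[1] - l[0]) / PC)   # Eq. (6) over the full chord, pc cm^-3
print(f"chord = {l_max / KPC:.0f} kpc, max DM_halo ~ {dm_halo:.0f} pc cm^-3")
```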
### Optical Depth and Time Delay
In addition, the ejecta surrounding the newborn magnetar could affect the observational signature of FRBs. We here consider the free-free optical depth due to the ejecta of the BNS merger. When the ejecta is transparent to the FRB, the free-free optical depth should be less than 1. We can derive the free-free optical depth with (Wang et al., 2020)
\[\tau_{\rm ff}=\alpha_{\rm ff}\Delta R\simeq(0.018T_{\rm eje}^{-3/2}Z^{2}n_{e}n _{i}\nu^{-2}\overline{g}_{\rm ff})\Delta R=2.7\times 10^{-8}\]
\[\times\eta^{2}\Big{(}\frac{Y_{e}}{0.2}\Big{)}^{2}\Big{(}\frac{M}{10^{-3}M_{ \odot}}\Big{)}^{2}\Big{(}\frac{T_{\rm eje}}{10^{4}{\rm K}}\Big{)}^{-3/2}\Big{(} \frac{\nu}{1\,{\rm GHz}}\Big{)}^{-2}\Big{(}\frac{v}{0.2\,{\rm c}}\Big{)}^{-5} \Big{(}\frac{t}{1{\rm yr}}\Big{)}^{-5} \tag{8}\]
Figure 4: The electron number density as a function of halo radius in the halo of \(G^{*}\).
where \(n_{e}\) and \(n_{i}\) are the number densities of electrons and ions, respectively (\(n_{e}\sim n_{i}\) and \(Z\sim 1\) are assumed for ejecta with a fully ionized, hydrogen-dominated composition), \(T_{\rm eje}\) is the ejecta temperature, and \(\overline{g}_{\rm ff}\sim 1\) is the Gaunt factor. The calculated result suggests that the FRB could already be detected a few weeks after the BNS merger. Therefore, the optical depth allows FRB 190309A to escape the source environment of GRB 060502B.
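Setting \(\tau_{\rm ff}=1\) in Eq. (8) gives the transparency time directly; a short check with the fiducial bracket values and the 468 MHz observing frequency of FRB 190309A (a sketch; the actual number depends on the adopted ejecta parameters):

```python
# Transparency time from Eq. (8): solve tau_ff(t) = 1 with all bracket factors = 1
# except the observing frequency of FRB 190309A (468 MHz).
nu_GHz = 0.468
tau_at_1yr = 2.7e-8 * nu_GHz ** -2     # tau_ff evaluated at t = 1 yr
t_transparent_yr = tau_at_1yr ** 0.2   # since tau_ff scales as t^-5
print(f"ejecta becomes transparent ~{t_transparent_yr * 365.25:.0f} days after the merger")
```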
FRBs emitted from a magnetar have been proposed to be due to magnetospheric activities. When the magnetar magnetosphere is triggered by crust fracturing, the magnetic energy in the crust is released and converted into particle energy and radiation, and an FRB is emitted (Kumar and Bosnjak, 2020; Wadiasingh and Timokhin, 2019; Dehman et al., 2020; Lu et al., 2020). In this picture, as long as the magnetar exists it is possible to produce FRBs through crust fracturing, and the time of the crust fracturing is extremely uncertain during the lifetime of the magnetar. Here, we estimate the timescale for the existence of the magnetar (i.e., the timescale of magnetic field decay). The magnetic field in the crust decays via Ohmic dissipation and its evolution can be approximated by \(dB/dt\simeq-AB^{2}\) (Yang and Zhang, 2021) with \(A=10^{-18}\,\rm G^{-1}\,yr^{-1}\) (Colpi et al., 2000). Then the typical timescale of magnetic field decay is (Yang and Zhang, 2021)
\[\tau_{B}=\frac{1}{AB_{0}}=10^{4}\rm yr\Big{(}\frac{B_{0}}{10^{14}\rm G}\Big{)} ^{-1} \tag{9}\]
where \(B_{0}\) is the initial magnetic field. One can see that the timescale of magnetic field decay (and thus the time delay of the FRB) could be \(10^{4}\) years for \(B_{0}\) = \(10^{14}\) G. For example, FRB 200428 may be released by the magnetospheric activities of a Galactic magnetar with an age of approximately \(10^{4}\) yrs (Bochenek et al., 2020; Wang et al., 2021). Therefore, the relatively large time span (\(\sim 12.8\) yr) between GRB 060502B and FRB 190309A is also compatible with the model.
### Energy Budget
Although the central engine of FRBs is generally considered to be a magnetar, the exact process for the generation of FRBs differs in different models. For instance, FRBs could be generated by synchrotron maser in relativistic shocks driven by the magnetar flares (Lyubarsky, 2014; Metzger et al., 2019; Beloborodov, 2020), by the curvature radiation in the magnetosphere (Lu et al., 2020), by the magnetic reconnection of external magnetosphere (Lyubarsky, 2020), or by the inverse Compton scattering in the magnetosphere (Zhang, 2022). Here, rather than going into the details of any of the above models, we consider only a conservative estimate of the magnetic field of the magnetar, since for all of these models FRBs are essentially driven by the magnetic energy of the magnetar.
Within this picture, we try to calculate whether the magnetar could be used as the central engine of FRB 190309A. FRB 190309A has a duration of \(\Delta t\sim 1.97\) ms, a flux of \(S_{\nu,p}\sim 0.39\) Jy and a fluence of \(f_{\nu}\sim 0.72\) Jy ms at \(\nu_{c}\sim 468\) MHz. Assuming that FRB 190309A is associated with GRB 060502B, one can derive the luminosity distance as \(D_{\rm L}\sim 1.52\) Gpc according to the redshift of GRB 060502B (\(z\sim 0.287\)). The luminosity and isotropic energy of FRB 190309A can be calculated as \(L_{\rm p}\backsimeq 4\pi D_{\rm L}^{2}S_{\nu,p}\nu_{c}\simeq 5.1\times 10^{41}\) erg s\({}^{-1}\) and \(E_{\rm FRB}\backsimeq 4\pi D_{\rm L}^{2}f_{\nu}\nu_{c}/(1+z)\simeq 7.3\times\)\(10^{38}\) erg. If this energy is provided by the magnetic energy of the magnetar, one can place a constraint on the strength of the surface polar cap magnetic field of the underlying magnetar.
The emission radius can be approximately estimated as \(r_{e}\sim c\Delta t\backsimeq 5.91\times 10^{7}\) cm, which is consistent with the predicted emission radius of the magnetar that produced the FRB, i.e., a few \(\times 10R_{\rm NS}\sim 10^{7}\) cm (Lyutikov, 2021). The magnetic field strength at \(r_{e}\) should satisfy
\[\frac{B_{e}^{2}}{8\pi}\left(\frac{4\pi}{3}r_{e}^{3}\right)\geq E_{\rm FRB}, \tag{10}\]
where \(B_{e}=B_{\rm p}\) (\(r_{e}/R)^{-3}\)(Wang et al., 2020) and \(B_{\rm p}\), \(R(\sim 10^{6}\) cm) are the magnetic field strength of the surface polar cap and typical radius of the magnetar, respectively. Therefore, the observation of FRB 190309A demands that the magnetic field strength of the surface polar cap is
\[B_{\rm p}\geq\left(\frac{6E_{\rm FRB}r_{e}^{3}}{R^{6}}\right)^{1/2}\simeq 3\times 10^{13}\,\rm G, \tag{11}\]
which is indeed consistent with a magnetar central engine.
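As a numerical cross-check of the estimates above, the following sketch reproduces \(L_{\rm p}\), \(E_{\rm FRB}\), \(r_{e}\), and the lower bound on \(B_{\rm p}\) from the quoted burst properties (a simple calculation assuming \(D_{\rm L}=1.52\) Gpc and \(R=10^{6}\) cm, as in the text):

```python
# Cross-check of the energy-budget numbers for FRB 190309A (a sketch, cgs units).
import numpy as np

JY, GPC, C = 1.0e-23, 3.086e27, 3.0e10      # Jy in cgs, cm per Gpc, speed of light
z, D_L, nu_c = 0.287, 1.52 * GPC, 468.0e6
S_peak = 0.39 * JY                           # peak flux density
fluence = 0.72 * JY * 1.0e-3                 # 0.72 Jy ms
L_p = 4.0 * np.pi * D_L**2 * S_peak * nu_c
E_frb = 4.0 * np.pi * D_L**2 * fluence * nu_c / (1.0 + z)
r_e = C * 1.97e-3                            # emission radius from the burst duration
R_ns = 1.0e6                                 # assumed neutron-star radius
B_p_min = np.sqrt(6.0 * E_frb * r_e**3 / R_ns**6)   # Eqs. (10)-(11)
print(f"L_p ~ {L_p:.1e} erg/s, E_FRB ~ {E_frb:.1e} erg")
print(f"r_e ~ {r_e:.2e} cm, B_p >= {B_p_min:.1e} G")
```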
## 4 Summary and Discussion

In this work, we systematically search for possible associations between SGRBs and FRBs based on a sample of 623 FRBs (601 one-off bursts and 22 repeaters) and 168 SGRBs. We find that FRB 190309A is spatially coincident with GRB 060502B. Moreover, GRB 060502B occurred earlier than FRB 190309A, and its redshift is consistent with the range of the distance derived from the DM of FRB 190309A. Considering the observational information such as spatial location, time of occurrence, and redshift, we obtain a chance probability of the association of \(\sim\) 0.05.
Since the statistical significance is not high enough (\(<3\sigma\)) to claim a reliable association between GRB 060502B and FRB 190309A, we further investigate whether there is any other evidence to support a physical association between them. We find that the (candidate) host galaxy of GRB 060502B, \(G^{*}\), has an SFR similar to the hosts of the FRB population, but for the other two host properties we investigated (i.e., the offset from the host center and the total stellar mass of the galaxy), the values of \(G^{*}\) do not coincide with the distributions of 19 FRBs. However, considering that the underlying magnetar of FRB 190309A may have a formation channel different from that of these FRBs, this may not be enough to exclude the association between FRB 190309A and GRB 060502B. In addition, adopting the redshift of GRB 060502B, we estimate that FRB 190309A has DM\({}_{\rm host}\simeq 68.3\) pc cm\({}^{-3}\), which can plausibly be contributed by the (candidate) host galaxy \(G^{*}\) of GRB 060502B. We also derive the free-free optical depth around the source and find that it allows the FRB to be detectable. Finally, adopting the redshift of GRB 060502B, we obtain an isotropic energy of \(E_{\rm FRB}\sim 7.3\times 10^{38}\) erg for FRB 190309A; accordingly, the required surface magnetic field to power FRB 190309A is \(B_{\rm p}\geq 3\times 10^{13}\) G, which is also consistent with the typical magnetic field of SGRB magnetars. These results indicate that a physical association between GRB 060502B and FRB 190309A is feasible.
Overall, in this paper we do not find a statistically reliable SGRB counterpart associated with any FRB; the one possible FRB-SGRB association, between GRB 060502B and FRB 190309A, has a relatively large chance probability (\(p\sim 0.05\)). Even so, since this is at present the only pair of FRB and SGRB that are spatially coincident, it is still worthy of attention. We therefore examine in detail the possibility of their physical association from the aspects of the host galaxy, the DM, the energy budget, etc., and find that none of these can exclude their association. For this reason, we suggest that the GRB 060502B/FRB 190309A pair remains a promising case of an FRB-SGRB association that is worth further study.
## Acknowledgements
We acknowledge the use of public data from the _Swift_, CHIME, GCN, TNS and frbhosts data archive. This work is supported by the National Natural Science Foundation of China (Nos. U1938201, 12133003, U1731239, 12203013), the Guangxi Science Foundation (grants 2018GXNSFGA281007, 2017AD22006, 2021AC19263).
|
2310.01557 | SmartPlay: A Benchmark for LLMs as Intelligent Agents | Recent large language models (LLMs) have demonstrated great potential toward
intelligent agents and next-gen automation, but there currently lacks a
systematic benchmark for evaluating LLMs' abilities as agents. We introduce
SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs
as agents. SmartPlay consists of 6 different games, including
Rock-Paper-Scissors, Tower of Hanoi, Minecraft. Each game features a unique
setting, providing up to 20 evaluation settings and infinite environment
variations. Each game in SmartPlay uniquely challenges a subset of 9 important
capabilities of an intelligent LLM agent, including reasoning with object
dependencies, planning ahead, spatial reasoning, learning from history, and
understanding randomness. The distinction between the set of capabilities each
game test allows us to analyze each capability separately. SmartPlay serves not
only as a rigorous testing ground for evaluating the overall performance of LLM
agents but also as a road-map for identifying gaps in current methodologies. We
release our benchmark at github.com/Microsoft/SmartPlay | Yue Wu, Xuan Tang, Tom M. Mitchell, Yuanzhi Li | 2023-10-02T18:52:11Z | http://arxiv.org/abs/2310.01557v5 | # SmartPlay : A Benchmark for LLMs as Intelli-gent Agents
###### Abstract
Recent large language models (LLMs) have demonstrated great potential toward intelligent agents and next-gen automation, but there currently lacks a systematic benchmark for evaluating LLMs' abilities as agents. We introduce SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs as agents. SmartPlay consists of 6 different games, including Rock-Paper-Scissors, Tower of Hanoi, Minecraft. Each game features a unique setting, providing up to 20 evaluation settings and infinite environment variations. Each game in SmartPlay uniquely challenges a subset of 9 important capabilities of an intelligent LLM agent, including reasoning with object dependencies, planning ahead, spatial reasoning, learning from history, and understanding randomness. The distinction between the set of capabilities each game tests allows us to analyze each capability separately. SmartPlay serves not only as a rigorous testing ground for evaluating the overall performance of LLM agents but also as a road-map for identifying gaps in current methodologies. We release our benchmark at github.com/microsoft/SmartPlay.
## 1 Introduction
Creating intelligent agents (Woolridge & Jennings, 1995) that _perceive_ their environment and perform autonomous _actions_ has been one of the core objectives of A.I. (Laird et al., 1987; Russell, 2010). Recently, large language models (LLMs) (Smith et al., 2022; Chowdhery et al., 2022; OpenAI, 2023; Manyika; Driess et al., 2023; Touvron et al., 2023) have made remarkable progress in various tasks (Bubeck et al., 2023). Some language models demonstrate exceptional planning (Ahn et al., 2022; Wu et al., 2023b), reasoning (Wu et al., 2023a; Shinn et al., 2023), and problem-solving (Madaan et al., 2023; Kim et al., 2023) abilities,
Figure 1: SmartPlay provides a unified and expandable API with text observations and guidance to perform turn by turn LLM inference on Two-armed Bandits, Rock Paper Scissors, Messenger (Hanjie et al., 2021), Crafter (Hafner, 2021), and Minecraft (Fan et al., 2022) creative navigation tasks.
enabling the potential as generalist agents for virtual-reality (Park et al., 2023) or real-world problem-solving.
Such potential has attracted strong interest in applications where LLM systems actively invoke tools and APIs to complete a wide range of task goals (Significant-Gravitas; Yoheinakajima; Reworkd; Wang et al., 2023; Qin et al., 2023), and actively interact and make changes in an environment to achieve specific results (Wang et al., 2023; Wu et al., 2023). LLMs as agents could be seen as an important step toward next-gen automation.
Despite great public attention, the capabilities of LLMs as agents have not been systematically studied, partly due to the lack of a standardized LLM benchmark for agent-environment interaction. Current LLM benchmarks have been designed for static knowledge and reasoning (Hendrycks et al., 2020; Liang et al., 2022; Srivastava et al., 2022; Zhong et al., 2023), or helpful and harmless conversations (Bai et al., 2022; Zheng et al., 2023; Dubois et al., 2023), overlooking applications to intelligent agents.
We identify 4 key challenges important for general intelligent LLM agents but not captured in previous benchmarks. First, lots of real-world tasks require an agent to do long-horizon **planning and execution**. Second, many events are probabilistic and an intelligent agent may be expected to **understand the odds**. Third, we live in a 3D world, and many real-world tasks require an agent's **spatial reasoning**. Fourth, when encountered with unseen situations, an intelligent agent should be able to **learn from interactions or mistakes**.
On the other hand, games have long been identified as go-to benchmarks for intelligent generalist agents (Pell, 2011; Genesereth et al., 2005; Whiteson et al., 2010; Schaul et al., 2011; Bellemare et al., 2013; Cote et al., 2019; Hafner, 2021; Guss et al., 2021; Fan et al., 2022). At the core of game design (Koster, 2013), successful games often involve "problem-solving", "calculation of odds", "spatial reasoning", "changing difficulties", and "well-defined and quantifiable outcome", therefore offering a perfect complement to existing LLM benchmarks. Finally, some game environments are procedurally generated and game states grow exponentially, making games more robust against evaluation dataset contamination as observed in recent works (Touvron et al., 2023). Experimentally, we observe LLMs struggle to memorize intermediate states of a simple 3-disk Tower of Hanoi game.
Taking a unique agent perspective in benchmarking LLMs, we introduce SmartPlay, a benchmark from 6 distinct games augmented with language descriptors for visual observation (Figure 1), offering up to 20 different settings and infinite environment variations. Each game presents unique challenges that span multiple dimensions of intelligent agents, as detailed in Table 3. The games range in complexity, from requiring simple one-step reasoning and rule-following in Bandits, to intricate long-term planning, multi-hop dependencies, and learning from interactions in Crafter (Hafner, 2021) and Hanoi. SmartPlay engages LLM agents in both deterministic and stochastic settings, demanding skills from basic text understanding to 3D spatial reasoning.
Games in SmartPlay have been built with well-defined objectives and evaluation metrics: completion rate, reward, score. Therefore, SmartPlay provides a fully automated pipeline to conduct standardized evaluation for LLMs. We use SmartPlay to compare the agent performance of recent LLMs, and identify several research gaps for applying LLMs as agents. We believe that SmartPlay sets a goal that is reachable in a short time-frame yet formidable enough to require new breakthroughs.
## 2 Games in SmartPlay
### Research Challenges
The SmartPlay benchmark encapsulates a diverse set of challenges that evaluate various AI capabilities, as itemized in Table 2. For instance, Bandits primarily focuses on simple randomness, requiring minimal text understanding and rule-following. On the other hand, Rock Paper Scissors uniquely puts an emphasis on randomness and multiple game rules. Hanoi presents an advanced setting for object dependency reasoning, strategic planning, and handling mistakes. Messenger challenges 2D spatial reasoning, reading syntactic
variations and conducting multi-hop reasoning. Meanwhile, Minecraft offers a unique challenge in 3D spatial reasoning and generalization within a randomized world. We hope the SmartPlay benchmark would serve as a tool for identifying these nuanced gaps and directing future research.
While each game poses its unique challenges, the SmartPlay benchmark also evaluates an agent's capability to integrate these skills. For example, Crafter stands as the most comprehensive testbed, combining long texts, multiple interactions, concurrent objectives, and error handling into a single environment. Crafter highlights the need for future research to focus not just on isolated skills, but also on combining these skills into a unified, adaptive agent.
### Two Armed Bandits
The two armed bandit benchmark is inspired by popular implementations1 of bandit problems.
Footnote 1: github.com/JKCooper2/gym-bandits
The LLM agent is provided with two slot machines with hidden pre-defined reward probabilities \(p_{1},p_{2}\). For slot machine \(i\), the rewards for the two possible outcomes are: \(r_{i}\) for the pay-off event
Figure 2: We identify a set of 9 important capabilities for an intelligent agent. We identify different degrees of challenge for each capability as shown on the left. Each game in SmartPlay challenges a unique set of capabilities to different degrees, as shown in the spider charts. Numerical values for the spider plots are given in Table 3.
and \(-r_{i}\) for the no-pay-off event. The goal of the game is to find the arm with the better return and maximize the reward over the course of 50 rounds.
An agent must keep track of win/losses from its past roll-out and balance exploration across the two slot machines vs. exploitation of the more rewarding one. Overall, the challenges include: 1) long context understanding, 2) understanding randomness, 3) learning from interactions.
The human written manual informs the LLM of the number of slot machines (two) and the game objective.
To prevent game exploitation caused by biased actions, we randomize the score and probabilities for each action by shuffling the order of the paired list: \([(p_{1},r_{1}),(p_{2},r_{2})]\).
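A minimal stand-in for this setting is easy to write down; the following sketch (not the SmartPlay implementation, with arbitrary example probabilities and payoffs) captures the hidden \((p_{i},r_{i})\) pairs, the per-episode shuffling, and the 50-round horizon:

```python
# Minimal two-armed bandit in the spirit of the description above (illustrative only).
import random

class TwoArmedBandit:
    def __init__(self, arms=((0.8, 1.0), (0.4, 1.0)), horizon=50, seed=None):
        self.rng = random.Random(seed)
        self.arms = list(arms)      # hidden (p_i, r_i) pairs, example values
        self.horizon = horizon

    def reset(self):
        self.rng.shuffle(self.arms)     # shuffle to prevent positional bias
        self.t = 0
        return "You are facing 2 slot machines. Choose machine 0 or 1."

    def step(self, action):
        p, r = self.arms[action]
        reward = r if self.rng.random() < p else -r
        self.t += 1
        obs = f"Round {self.t}: machine {action} returned reward {reward:+g}."
        return obs, reward, self.t >= self.horizon, {}

env = TwoArmedBandit(seed=0)
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done, _ = env.step(0)   # an LLM agent would choose the arm here
    total += reward
print("total reward:", total)
```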
### Rock Paper Scissors
This benchmark follows the exact same game rules as the famous zero-sum game Rock Paper Scissors2.
Footnote 2: wikipedia.org/wiki/Rock_paper_scissors
The LLM agent plays against a hand-coded opponent that follows a hidden pre-defined strategy with probabilities \(p_{1},p_{2},p_{3}\) for rock, paper, and scissors respectively. The scores for winning under each action are pre-defined and revealed to the LLM as \(s_{1},s_{2},s_{3}\).
An agent must keep track of win/losses from its past roll-outs to analyze the opponent behavior, and then exploit the opponent to maximize payoff. Overall, the challenges include: 1) long context understanding, 2) understanding randomness, 3) learning from interactions, and 4) instruction following and rule parsing.
The human written manual provides instruction on the possible actions and how the win/draw/lose of each round is calculated.
To prevent game exploitation caused by biased actions, we randomize the score and probabilities for each action by shuffling the order of the paired list: \([(p_{1},s_{1}),(p_{2},s_{2}),(p_{3},s_{3})]\).
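The opponent model and payoff structure can likewise be sketched in a few lines (again not the SmartPlay implementation; the payoff convention for a losing round is an assumption made for this illustration):

```python
# Compact model of the Rock Paper Scissors opponent and payoffs described above.
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(agent_action, opponent_probs, scores, rng):
    """One round against the hidden mixed strategy; returns (payoff, opponent_action)."""
    opp = rng.choices(ACTIONS, weights=opponent_probs)[0]
    if agent_action == opp:
        return 0, opp
    if BEATS[agent_action] == opp:
        return scores[ACTIONS.index(agent_action)], opp
    return -scores[ACTIONS.index(opp)], opp           # assumed loss convention

rng = random.Random(0)
total = sum(play_round("paper", (0.6, 0.2, 0.2), (1, 1, 1), rng)[0] for _ in range(50))
print("payoff of always playing paper against a rock-heavy opponent:", total)
```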
### Tower of Hanoi
The Tower of Hanoi3 is a classic puzzle game that challenges the player to move a stack of disks from one rod to another, using a third rod as an auxiliary. The game has two rules: only one disk can be moved at a time, and a larger disk cannot be placed on top of a smaller one.
Footnote 3: github.com/RobertTLarge/gym-hanoi/tree/master
The goal of the game is to move all the disks from the first rod to the last one in the minimum number of moves, and the game can be solved using a recursive algorithm that follows these steps:
1. Move n - 1 disks from the source rod to the auxiliary rod, using the destination rod as an intermediate.
2. Move the largest disk from the source rod to the destination rod.
3. Move n - 1 disks from the auxiliary rod to the destination rod, using the source rod as an intermediate.
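The recursion above translates directly into code; a short reference sketch (independent of the benchmark environment itself):

```python
# Recursive Tower of Hanoi solution following the three steps listed above.
def hanoi(n, source, auxiliary, destination, moves=None):
    """Optimal move sequence for transferring n disks from source to destination."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, destination, auxiliary, moves)   # step 1
    moves.append((source, destination))                   # step 2: move the largest disk
    hanoi(n - 1, auxiliary, source, destination, moves)   # step 3
    return moves

print(hanoi(3, "first rod", "second rod", "third rod"))   # 2^3 - 1 = 7 moves
```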
The Tower of Hanoi requires the agent to think strategically and plan ahead, and puts strict requirements on the LLM agents' ability to understand and follow the rules of the game. The game can become more challenging if the agent makes a mistake. Sometimes, an agent may have to undo several moves to correct an error. Overall, the challenges include: 1) planning, 2) reasoning with dependencies, 3) recovery from mistakes.
The human written manual contains a description of the game set-up and allowed actions. In addition, we also include an example illustration of the starting and goal configuration, alongside an example of allowed/disallowed moves.
### Messenger
MESSENGER (Hanjie et al., 2021) features multiple game variants with procedurally generated game dynamics and accompanying text manuals. The overall game mechanics of MESSENGER involve obtaining a message and delivering it to a goal. The benchmark is shipped with 3 levels of difficulty (referred to as stages in Hanjie et al. (2021)).
To succeed in MESSENGER, an agent must first relate entities and dynamics of the environment to their reference synonyms in the manual, identify message and goal objects, and navigate to bring the message to the goal while avoiding the enemy. The manual, by design, is challenging to understand even for human readers. Level 1 primarily challenges the agent's 1) natural language understanding and 2) generalization. Level 2 includes additional challenge on the agent's 3) reasoning with dependencies and 4) 2D spatial reasoning. Level 3 increases difficulty by adding distraction objects.
The original manuals provided by Hanjie et al. (2021) contain descriptions of the entities and world dynamics obtained through crowd-sourced human writers. We augment the manual with a specification of the game objective, and an "advice" for the LLM agent to first identify goal objects and then approach its objective. The "advice" helps guide the LLM in approaching the particularly challenging game manual.
### Crafter
The Crafter environment (Hafner, 2021) is a procedurally generated, open-world survival game designed to test RL algorithms. Inspired by Minecraft, it features a grid-world with top-down observation and a discrete action space of 17. The game includes 22 achievements in a tech-tree of depth 7 and provides information on the player's health, food, water, rest levels, and inventory. Crafter captures many of Minecraft's key research challenges, offering a more streamlined and faster environment for conducting experiments and gathering results.
To succeed in Crafter, an LLM agent has to first understand and master a variety of reusable skills composed of 17 actions. The agent needs to learn to navigate through up to 5 2D terrains (biomes), avoiding obstacles and dangerous creatures. The agent also needs to collect different resources and craft more advanced weapons/tools to unlock more skills and achievements, while at the same time balancing crafting goals with survival goals like maintaining health, thirst, food, and rest (Hafner, 2021). Overall, the challenges include: 1) 2D spatial reasoning, 2) Changing environment dynamics, 3) Long context understanding, 4) Planning ahead, 5) Generalization, 6) Correcting from mistakes.
We provide the "context" string from Wu et al. (2023c) as the manual, generated by parsing the LaTeX source-code of (Hafner, 2021). The "context" string has been shown to greatly improve performance of GPT-4 and text-davinci-003 on Crafter (Wu et al., 2023c). Interestingly, the "context" string does not capture all information necessary to succeed in the game, i.e., it requires 2 woods to craft the crafting table, and 8 stones to craft the furnace. The agent has to 7) learn from interaction.
### Minecraft
Minecraft is one of the most popular games in history4. The game world is virtually infinite and procedurally generated. The game observation is composed of rough 3D objects representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. Minecraft has been widely studied as a benchmark for intelligent multi-tasking agents (Guss et al., 2021; Fan et al., 2022; Hafner et al., 2023; Yuan et al., 2023; Wang et al., 2023b, a). However, due to the fact that most current LLMs do not have vision capabilities, we simplify the Minecraft benchmark (Fan et al., 2022) and only consider a small set of creative tasks where the primary objective is to find specific biomes, so an LLM could control a hand-coded agent to perform navigation in the 3D world.
Footnote 4: wikipedia.org/wiki/Minecraft
To succeed in the creative "find" tasks, an LLM agent has to have enough domain knowledge about different biomes in Minecraft, be able to correlate visual observations (text descriptions of the visual world) with that domain knowledge, and navigate in a 3D environment. Overall, the challenges include: 1) planning ahead, 2) domain knowledge, 3) 3D spatial reasoning, 4) generalization.
For the human written instruction manual, we inform the agent that its goal is to find a specific biome \(g\) in Minecraft, and offer an advice on how to interpret the visual descriptor output for Minecraft.
## 3 Using SmartPlay
### Environment Interface and Evaluation Protocol
For ease of use and wide compatibility, SmartPlay follows a unified OpenAI Gym interface (Brockman et al., 2016) for all games, with text-based observations, text-based manuals with content as described in Table 1, text-based history describing past actions and observations of length "history length", and flat categorical actions. Due to the randomness in some games, we recommend running each game multiple times and reporting the average metrics.
Input, manual, action space, rollout length, and trial numbers for each game are specified in Table 1. These settings are fixed and should not be modified. However, future research may require longer history length or more trials for some games. These parameters can be adjusted to suit specific needs, but the changes should be explicitly stated. We provide some recommended values (also used in our experiments) for these parameters in Table 1.
### Example Input
We attach an example input of the game Messenger (Hanjie et al., 2021) as produced by the SmartPlay API. For completeness, we provide example inputs for each game in Appendix B. Note that all directions in SmartPlay are described in "east, south, west, north, above, below"
Instruction Manual:
In the game, MESSENGER, each entity can take on one of three roles: an enemy, message, or goal. The agent's objective is to bring the message to the goal while avoiding the enemies. If the agent encounters an enemy at any point in the game, or the goal without first obtaining the message, it loses the game and obtains a reward of -1.
the dangerous enemy can be found next to the plane, which can not be moved. you are being approached by a restricted document that is a robot. the whale is the main objective.
To solve a game, you may find it helpful to list the objects that you see. Then for each object, match it with an entity description, and identify whether it is good or bad to interact with the object. The name specifications of in-game objects may not be exact matches. Please try identifying with synonyms.
Observation Example:
You took action Move South.
You (agent) don't have the message.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Env & Input & Manual & History & Rollout & Action Space & Trials \\ \hline Bandits & Text & Background & 50 & 50 & 2 & 20 \\ \hline RockPaperScissors & Text & Background,Rules & 50 & 50 & 3 & 20 \\ \hline Hanoi & Text & Background,Rules,Examples & 30 & 30 & 6 & 10 \\ \hline Messenger & Visual description & Background,Rules,Advice & 2 & 4\(\sim\)128 & 5 & 100 \\ \hline Crafter & Visual description & Background,Rules,Advice & 5 & 10k & 17 & 10 \\ \hline Minecraft & Visual description & Objective & 2 & 200 & 4 & 20 \\ \hline \end{tabular}
\end{table}
Table 1: Specifications for each game in SmartPlay. In addition to the table, the manual input contains a list of available actions for all games. Input, manual, action space, and rollout length should not be modified. History length and trial numbers could be increased to suit future needs.
You see: - airplane? steps to your south - fish 13 steps to your south-east - robot 5 steps to your south-east
In the actual gameplay, the SmartPlay API also includes a list of actions for the LLM agent to pick from.
### Evaluation Metrics
We define three metrics: reward, completion rate, score. To ensure compatibility with prior works, **reward** aligns with the score/reward definition in games originally designed for RL (i.e., Bandits, Rock Paper Scissors, Messenger, Crafter (Hanjie et al., 2021; Hafner, 2021)). **Completion rate** measures the rate of successful completion for games with quantifiable objectives (i.e., Hanoi, Messenger, Minecraft). Finally, we introduce **score** for every game in the benchmark to provide a summary of performance. For Bandits and Rock Paper Scissors, the score is defined as the number of times the LLM action matches the environment's optimal action; for Hanoi, the score is defined as the number of disks successfully moved to the goal peg; for Messenger, the score is the same as the reward (Hanjie et al., 2021) of each round of the game; for Crafter, the score is defined as the number of unlocked achievements at every step, summed across the whole game; for Minecraft, the score is defined as the indicator of whether the "find" objective for the game has been completed.
## 4 Experimental Results
Using the SmartPlay API, we follow Wu et al. (2023c) and directly prompt an LLM: "What is the next action to take, let's think step by step.", with manual, history, and current observation as context. We then query the LLM: "Choose the best executable action from the list of all actions. Write the exact chosen action." for an answer directly mapped to one of the environment actions.
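In code, this protocol amounts to a two-query loop per step; the sketch below is a schematic re-implementation for illustration (not the SmartPlay source), where `query_llm` is a placeholder for an actual chat-completion call and `DummyEnv` is a trivial stand-in environment following the classic Gym `reset`/`step` signature:

```python
# Schematic two-query prompting loop; query_llm here answers randomly as a placeholder.
import random

def query_llm(prompt):
    last_line = prompt.strip().splitlines()[-1]
    if "Choose the best executable action" in last_line:
        return random.choice(["Move North", "Move South"])
    return "Let me think step by step..."

def run_episode(env, manual, action_names, history_len=2):
    history, obs, done, total = [], env.reset(), False, 0.0
    while not done:
        context = manual + "\n" + "\n".join(history[-history_len:]) + "\n" + obs
        thoughts = query_llm(context + "\nWhat is the next action to take, let's think step by step.")
        answer = query_llm(context + "\n" + thoughts + "\nChoose the best executable action "
                           "from the list of all actions. Write the exact chosen action.")
        # map the free-form answer onto the flat categorical action space
        action = next((i for i, name in enumerate(action_names) if name.lower() in answer.lower()), 0)
        history.append(obs + f"\nYou took action {action_names[action]}.")
        obs, reward, done, _ = env.step(action)
        total += reward
    return total

class DummyEnv:
    def reset(self):
        self.t = 0
        return "You see: goal 3 steps to your north."
    def step(self, action):
        self.t += 1
        return "You see: goal nearby.", 1.0, self.t >= 4, {}

print("episode reward:", run_episode(DummyEnv(), "Reach the goal.", ["Move North", "Move South"]))
```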
### Quantitative Analysis
To reduce the cost of queries, we pick 7 settings that require minimal experimentation but provide comprehensive coverage of important agent capabilities. We experiment with 9 recent popular open-source and proprietary LLMs and report the average score in Table 2.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline LLM & Bandit & RPS & Hanoi & MessengerL1 & MessengerL2 & Crafter & Minecraft \\ \hline Human Baseline & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline GPT-4-0613 & 1.00 & 0.91 & 0.83 & 0.90 & 0.93 & 0.26 & 0.61 \\ \hline GPT-4-0314 & 0.97 & 0.98 & 0.90 & 0.87 & 0.97 & 0.32 & 0.59 \\ \hline text-davinci-003 & 1.04 & 0.40 & 0.50 & 0.62 & 0.46 & 0.07 & 0.45 \\ \hline Claude & 0.72 & 0.47 & 0.67 & 0.44 & 0.60 & 0.05 & 0.50 \\ \hline Bard & 0.86 & 0.30 & 0.67 & 0.61 & 0.40 & 0.04 & 0.54 \\ \hline llama-2-13b & 0.50 & 0.35 & 0.37 & 0.12 & 0.13 & 0.04 & 0.61 \\ \hline llama-13b & 0.68 & 0.50 & 0.33 & 0.16 & 0.06 & 0.04 & 0.50 \\ \hline vicuna-13b & 0.64 & 0.17 & 0.07 & 0.00 & 0.12 & 0.02 & 0.43 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of performance of different LLMs in terms of average score on BanditTwoArmedHighLowFixed-v0, RockPaperScissorBasic-v0, HanoiDisk-v0, MessengerL1-v0, MessengerL2-v0, Crafter-v0, MinedjoCreative0-v0. All scores are normalized relative to human performance (unnormalized version in Table 4). GPT-4 variants out-perform other LLMs by significant margins, but still greatly under-perform human baselines. We observe significant performance gaps between SOTA LLMs and the human baseline on Hanoi, Crafter, and Minecraft. Hanoi and Crafter challenge planning and reasoning with object dependencies, and Minecraft challenges 3D spatial reasoning.
Overall, GPT-4 variants significantly outperform other proprietary models, which in turn outperform open-source models by significant margins.
**There is still significant room for improvement for LLM as agents:** Despite the impressive performance of GPT-4 variants, there is still a significant gap between GPT-4 and human baseline performance on more challenging benchmarks, with a 10% gap on 3DiskHanoi, 40% on Minecraft creative tasks, and 70% on Crafter.
**Other proprietary LLMs struggle to keep up with GPT-4:** We observe a more than 20% gap between GPT-4 and other proprietary models like Claude, Bard, and text-davinci-003 across all games except Minecraft. Furthermore, on comprehensive benchmarks like Crafter, GPT-4 variants achieve 3 times higher scores than other proprietary models.
**Open-source LLMs have a long way to go:** Open-source LLMs achieve less than half the performance of GPT-4 variants on simple Bandit and Rock-Paper-Scissors tasks, and 1/8 the performance on more challenging tasks. The fine-tuned Vicuna-13b model achieves much worse performance than the base LLAMA-13b.
**3D Spatial reasoning remains a challenge for LLMs:** The Minecraft benchmark appears equally challenging to all LLMs due to its unique requirement for 3D spatial reasoning. All LLMs behave similarly in Minecraft creative tasks, with the best model at 60% of human baseline performance.
To offer additional insights into the individual agent capabilities of LLMs as identified in Figure 2, we compute, for each capability \(c\), the capability score \(p_{LLM}^{c}\) of an LLM as the average of the human-normalized score \(s_{g}\) over each game \(g\), weighted by the degree \(d_{c}^{g}\) to which game \(g\) presents challenge \(c\): \(p_{LLM}^{c}=\frac{\sum_{g}d_{c}^{g}s_{g}}{\sum_{g}d_{c}^{g}}\). We plot the capability scores in 3 groups in Figure 3: GPT-4 variants, other proprietary models, and open-source models.
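The aggregation is just a weighted average; a toy example is sketched below (the challenge degrees are placeholder values for a single capability rather than the actual Table 3 entries, and the scores are taken from the GPT-4-0613 row of Table 2):

```python
# Weighted-average capability score p_LLM^c = sum_g d_c^g * s_g / sum_g d_c^g.
def capability_score(degrees, scores):
    return sum(degrees[g] * scores[g] for g in degrees) / sum(degrees.values())

planning_degrees = {"Hanoi": 3, "Crafter": 3, "MessengerL2": 2, "Minecraft": 2}  # assumed degrees
gpt4_scores = {"Hanoi": 0.83, "Crafter": 0.26, "MessengerL2": 0.93, "Minecraft": 0.61}
print(round(capability_score(planning_degrees, gpt4_scores), 2))
```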
The two GPT-4 variants perform similarly overall, with GPT-4-0613 doing slightly worse on planning and reasoning. We also identify that GPT-4 variants score lower on learning from interactions, error/mistake handling, and spatial reasoning.
Claude demonstrates overall better performance than Bard, especially in planning, reasoning, instruction following. Compared to the other two proprietary models, text-davinci-003 appears biased toward learning from interaction and randomness, and is particularly weaker at instruction following, planning and reasoning.
LLAMA-2-13b and LLAMA-1-13b perform similarly at a high level, with LLAMA-2-13b performing better at planning, reasoning, and error handling, but worse in learning from randomness and interactions. Vicuna-13b loses a lot of reasoning, planning, long text understanding, and error/mistake handling capabilities after fine-tuning.
### Qualitative Analysis
**Learning from interactions:** In Bandits and Rock Paper Scissors, proprietary LLMs demonstrate promising potential for learning from history and interactions. We observe the agents
Figure 3: **Left:** comparing the two GPT-4 variants with Human Baseline performance as reference. **Middle:** comparing text-davinci-003, Claude, and Bard. **Right:** comparing open-source llama-2-13b, llama-13b, vicuna-13b models.
first following an exploratory strategy and then exploiting the biased opponent based on past observations. In Crafter, GPT-4 variants consistently attempt to build the crafting table with 1 wood and, after failing, recover by building it with 2 woods.
**Data/environment contamination:** For the Tower of Hanoi, it is expected that the LLMs have been trained on the exact same problem. Surprisingly, although all LLMs are able to provide the solution at the starting configuration where all disks are on the first rod (some may even write out the recurrence for the solution), most LLMs could not solve the problem and get confused quickly after a few moves, when the disks are distributed over all three rods. We suspect that this is because the intermediate states do not appear often in the LLMs' training sets. This observation supports our belief that games could be more robust to dataset contamination.
**Spatial Reasoning:** We observe that LLMs often have a bad sense of spatial locations and struggle with navigating to new locations. For example, in Minecraft, we often observe LLMs taking moves that are contradictory over time, e.g., a series of "move north" followed by a series of "move south", undoing much of their own exploration effort.
## 5 Related Works
### LLM Evaluation
The task of evaluating LLM performance has become increasingly challenging given the rapid progression of LLMs. Generalist benchmarks usually employ a wide range of tasks and languages to test general knowledge and reasoning (Hendrycks et al., 2020; Liang et al., 2022; Srivastava et al., 2022; Zhong et al., 2023), where small language models are approaching the performance of state-of-the-art large language models Li et al. (2023); Gunasekar et al. (2023); Eldan and Li (2023). However, those benchmarks struggle to cover interaction styles like instruction following Ziegler et al. (2019) or conversations Bai et al. (2022). The go-to approach for evaluating LLMs for conversation is pairwise model comparison, which compares the outputs of the LLM and a reference LLM to produce a ranking (Zheng et al., 2023b). The ranking was originally performed by humans, but can be automated with a significantly more powerful LLM (Chiang and Lee, 2023; Zheng et al., 2023a; Dubois et al., 2023). However, such evaluation techniques depend on an expert model or human who can reliably compare the performance of different LLMs, which limits the application to SOTA LLMs like Claude-2 or GPT-4. Moreover, existing benchmarks fail to capture key characteristics of intelligent agents like understanding of randomness, spatial reasoning, and error handling.
### Using Games to Evaluate Generalist Agents
The idea of using games to evaluate the performance of agents has a long history in A.I. Pell (2011); Schaul et al. (2011); Whiteson et al. (2011) presented early ideas and motivation for using games to measure the general capabilities of an agent, and discussed challenges in measuring A.I. agent performance. A series of popular benchmarks (Brockman et al., 2016; Vinyals et al., 2017; Tunyasuvunakool et al., 2020) were created including Atari (Bellemare et al., 2013) and DeepMind lab (Beattie et al., 2016). As the capabilities of A.I. agents improve, researchers developed open-ended generalist games (Savva et al., 2019; Abramson et al., 2020; Hafner, 2021; Srivastava et al., 2022b) like NetHack (Kuttler et al., 2020) or Minecraft (Guss et al., 2021; Fan et al., 2022).
SmartPlay takes a suite of benchmarks (Brockman et al., 2016; Hafner, 2021; Fan et al., 2022) developed over different times to best represent a broad range of difficulties and skills.
### Creating/converting to Text Games
Text games (Cote et al., 2018; Kuttler et al., 2020; Zhong et al., 2019; Hanjie et al., 2021) are interactive simulations where the game state and action space are in natural language, often used to benchmark skills like planning, exploration, and memory. SmartPlay features
a text game (Messenger) with procedural game rule generation (Hanjie et al., 2021) to test the generalization of the LLM agents at language understanding and planning.
To capture real-world challenges like spatial-reasoning, we study converting 2D/3D games into text-games. Shridhar et al. (2020) demonstrated the possibility of converting a 3D embodied indoor environment (Shridhar et al., 2020) into a TextWorld (Cote et al., 2018) game by "listing" all the objects in text. However, such conversion relies on low-level controllers and teleportation, trivializing the environments for current LLMs (Micheli and Fleuret, 2021; Wu et al., 2023). Therefore, we follow Wu et al. (2023) to offer a list of objects/observations with directional relationship to the agent: "to your south-east." Such description allows LLMs to make meaningful progress without low-level controllers (Wu et al., 2023).
## 6 Conclusion
In this work, we introduce SmartPlay, both a challenging benchmark and a methodology for evaluating LLMs' performance as agents. Our initial release of SmartPlay consists of Two-armed Bandits, Rock Paper Scissors, Messenger (Hanjie et al., 2021), Crafter (Hafner, 2021), and Minecraft (Fan et al., 2022) creative navigation tasks. SmartPlay benchmarks not only basic abilities like instruction following and in-context reasoning, but also evaluates capabilities like planning, understanding of randomness, 2D/3D spatial reasoning, and error handling, which are often underrepresented in existing LLM benchmarks. To achieve next-gen automation, we believe that language models should go beyond speaking fluent language (Eldan and Li, 2023), and become more intelligent agents that could interact with the world and human users. We hope that SmartPlay would catalyze research on building more capable and reliable LLM agents.
Finally, SmartPlay offers guidelines for easily adding games to the benchmarking suite. SmartPlay will be continuously improved to provide up-to-date challenges for next-gen LLMs.
|
2305.05587 | Predictive Control of Linear Discrete-Time Markovian Jump Systems by
Learning Recurrent Patterns | Incorporating pattern-learning for prediction (PLP) in many discrete-time or
discrete-event systems allows for computation-efficient controller design by
memorizing patterns to schedule control policies based on their future
occurrences. In this paper, we demonstrate the effect of PLP by designing a
controller architecture for a class of linear Markovian jump systems (MJS)
where the aforementioned ``patterns'' correspond to finite-length sequences of
modes. In our analysis of recurrent patterns, we use martingale theory to
derive closed-form solutions to quantities pertaining to the occurrence of
patterns: 1) the expected minimum occurrence time of any pattern from some
predefined collection, 2) the probability of a pattern being the first to occur
among the collection. Our method is applicable to real-world dynamics because
we make two extensions to common assumptions in prior pattern-occurrence
literature. First, the distribution of the mode process is unknown, and second,
the true realization of the mode process is not observable. As demonstration,
we consider fault-tolerant control of a dynamic topology-switching network, and
empirically compare PLP to two controllers without PLP: a baseline based on the
novel System Level Synthesis (SLS) approach and a topology-robust extension of
the SLS baseline. We show that PLP is able to reject disturbances as
effectively as the topology-robust controller at reduced computation time and
control effort. We discuss several important tradeoffs, such as the size of the
pattern collection and the system scale versus the accuracy of the mode
predictions, which show how different PLP implementations affect stabilization
and runtime performance. | SooJean Han, Soon-Jo Chung, John C. Doyle | 2023-05-09T16:20:20Z | http://arxiv.org/abs/2305.05587v1 | # Predictive Control of Linear Discrete-Time Markovian Jump Systems by Learning Recurrent Patterns
###### Abstract
We consider a linear discrete-time Markovian jump system (MJS) with unknown mode-switching dynamics, and make concrete the broad notion that controller synthesis can be made more efficient by reducing computation time and redundancy via Pattern-Learning for Prediction (PLP), which learns patterns in the underlying mode process, stores them into memory, and predicts their future occurrences.
+
Footnote †: This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1745301 and the Aerospace Corporation.
* The PLP component of the controller architecture leverages martingale methods from prior literature, but with two important extensions that make it more suitable for real-world MJS applications: 1) the distribution of the mode process is unknown, and 2) the realization of the mode process over time is not observable.
* We apply our proposed architecture to fault-tolerant control of a network with dynamic topology, and perform an extensive numerical study which compares the performance of the PLP controller against a baseline and a topology-robust extension of the baseline. Our study also provides insights into important tradeoffs that emphasize the impact of PLP, e.g., the size of the pattern collection and the system scale versus the accuracy of the mode predictions. A controller with PLP is able to match the control effort of the baseline, maintain a disturbance-rejection error similar to the topology-robust controller, and achieve runtime faster than either.
# Predictive Control of Linear Discrete-Time Markovian Jump Systems by Learning Recurrent Patterns
SooJean Han
Soon-Jo Chung
John C. Doyle
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, 91125, CA, USA
###### Abstract
Incorporating _pattern-learning for prediction (PLP)_ in many discrete-time or discrete-event systems allows for computation-efficient controller design by memorizing patterns to schedule control policies based on their future occurrences. In this paper, we demonstrate the effect of PLP by designing a controller architecture for a class of linear Markovian jump systems (MJS) where the aforementioned "patterns" correspond to finite-length sequences of modes. In our analysis of recurrent patterns, we use martingale theory to derive closed-form solutions to quantities pertaining to the occurrence of patterns: 1) the expected minimum occurrence time of any pattern from some predefined collection, 2) the probability of a pattern being the first to occur among the collection. Our method is applicable to real-world dynamics because we make two extensions to common assumptions in prior pattern-occurrence literature. First, the distribution of the mode process is unknown, and second, the true realization of the mode process is not observable. As demonstration, we consider fault-tolerant control of a dynamic topology-switching network, and empirically compare PLP to two controllers without PLP: a baseline based on the novel System Level Synthesis (SLS) approach and a topology-robust extension of the SLS baseline. We show that PLP is able to reject disturbances as effectively as the topology-robust controller at reduced computation time and control effort. We discuss several important tradeoffs, such as the size of the pattern collection and the system scale versus the accuracy of the mode predictions, which show how different PLP implementations affect stabilization and runtime performance.
keywords: Analytic design, Pattern learning, Statistical approaches, Control for switching systems, Fault tolerant +
Footnote †: journal: Automatica
## 1 Introduction
Model-based controller synthesis methods can be developed for stochastic systems if a theoretical characterization of their stochastic process distribution exists. In the literature, this concept is most notable for Gaussian white noise systems (Doyle, 1978; Reif et al., 1999; Theodorou et al., 2010) or MJS (Xiong et al., 2005; Shi and Li, 2015). Our prior work Han and Chung (2022) suggested the possibility of expanding such methods to Poisson shot noise perturbations. For many discrete-time or discrete-event systems, we can take advantage of the fact that the underlying stochastic process is a sequence of random variables which occurs as repeated patterns of interest. For example, in fault-tolerance control or manufacturing process applications, a pattern of interest may be a specific sequence of modes which corresponds to a critical system fault (Cho and Lim, 1998; Hanmer, 2013). Another example can be found in queuing-based systems such as vehicle intersection networks (Boon and van Leeuwaarden, 2016; van Leeuwaarden, 2006), where repetition arises naturally when counting entities in the queue over time.
Learning pattern repetitions in the underlying stochastic process of many discrete-time or discrete-event stochastic systems allows for at least two ways of more efficient controller design. First, we may store past sample paths of the stochastic process into memory so that if a certain pattern occurs multiple times, we do not need to recompute the corresponding control policy at every occurrence. Second, we may predict the expected occurrence times of patterns in the future and schedule to apply the corresponding control policies at the predicted times. This idea is present in many applications. For example, a collision-avoidance trajectory over a future horizon of time can be computed for a moving vehicle based on repeated experiences of obstacle behavior, instead of relying only on instantaneous measurements of each obstacle's position (Richards and How, 2006; Mesquita and Hespanha, 2012; Shim et al., 2012). In the class of discrete-event systems, labeled transition representations are invoked to solve fault diagnosis and prediction problems because they enable easier identification of repeated patterns over time (Jeron et al., 2008, 2006).
### Related Work
_Reducing Repetitive Computation_: Making control more efficient in terms of computation time by taking advantage of any repetition in the system behavior is a fairly common concept in the engineering community. For example, Chen and Liu (2017) proposes repetitive learning control for a class of nonlinear systems tracking reference signals that are periodic. Zheng et al. (2021) discusses a method to approximate linear Gaussian systems using a hidden Markov model (HMM),
then trains it by exploiting its periodic structures. Some notable machine learning approaches for control, i.e. long short-term memory networks (LSTMs) and imitation learning (Verma et al., 2018), are also designed to reduce redundant learning. In fact, the broad class of meta-learning algorithms refers to algorithms which not only focus on learning the subject matter (e.g. classification tasks), but also on learning the learning procedure itself (O'Connell et al., 2022). For problems that can be solved using deep reinforcement learning methods, _experience replay_(Fedus et al., 2020) manages to improve sample and data efficiency by storing the last few experiences into memory and "replaying" them. A related approach is called _episodic control_(Lengyel and Dayan, 2007; Blundell et al., 2016; Pritzel et al., 2017), which incorporates _episodic memory_(Botvinick et al., 2019) into traditional learning techniques with the goal of speeding up training by recalling specific instances of highly rewarding experiences. Towards this end, numerous episodic control approaches have been proposed, including model-free episodic control Blundell et al. (2016) and neural episodic control Pritzel et al. (2017). In our paper Han et al. (2022), we consider vehicle traffic congestion control over urban networks of signalized intersections where the controller architecture leverages an extension of episodic control which uses equivalence classes to limit the growth of the memory table.
_Computing Pattern-Occurrence Quantities_: Repetition in stochastic processes can be addressed in theory by solving "pattern-occurrence problems", which characterize one or more specific sequences of values as "patterns" and then solve for quantities such as the expected time until their next observation in the stochastic process. Scan statistics (Pozdnyakov and Steele, 2014) is a popular tool founded on martingale theory, and is often used to characterize the distribution of pattern occurrences in applications such as fault-tolerance and anomaly-detection. For example, Guerriero et al. (2009) uses a scan statistics approach for distributed target-sensing using stationary sensors and a moving agent under simplified assumptions on the distribution of the sensors' positions. Formulas for predicting the occurrence of patterns have been derived when the patterns emerge from an i.i.d. sequence (see, e.g., Li (1980), Gerber and Li (1981), and Pozdnyakov and Kulldorff (2006)) and when the patterns are generated from scalar Markov chains (Glaz et al., 2006; Pozdnyakov, 2008).
_Controlling Uncertain Systems_: One notable drawback to current pattern-occurrence methods (Pozdnyakov and Kulldorff, 2006; Glaz et al., 2006; Pozdnyakov, 2008) is their reliance on the assumptions that we are able to precisely observe the stochastic process and that its distribution is known. In fact, there is an abundance of research in system identification and data-driven control, e.g., Dean et al. (2019) and Ho et al. (2021), because these assumptions often do not hold in real-world applications. Ho et al. (2021) considers robust and adaptive control for nonlinear systems with large model uncertainties by using a nested convex body chasing approach to optimally choose an approximate model around which the control law is designed. Many of these algorithms involve a natural multi-step procedure where the original uncertain dynamics and constraints are mapped down to an approximate model, which is then used for planning and control. For example, Nakka et al. (2021) first develops a surrogate optimization problem with chance constraints by leveraging polynomial chaos expansion before generating approximate solution trajectories via sequential convex programming.
_Predictions for Structured Control_: Using the memorized previous patterns and state/control trajectories, some algorithms in the literature have also invoked predictions to reduce redundant computation. _Model predictive control (MPC)_(Garcia et al., 1989; A. Cuzzola et al., 2002) is one of the most popular methodologies that demonstrates this, and both short-term and long-term predictions for online control have been proven to be beneficial even in the face of either purely stochastic or adversarial disturbances (Chen et al., 2015). In Yu et al. (2020), this is demonstrated explicitly by applying greedy conventional MPC to the linear quadratic tracking problem, and proving near-optimality in the dynamic regret performance metric. Nagabandi et al. (2018) provides an architecture which combines learning with MPC for robot link manipulation tasks; MPC is used for control law design based on the dynamics of the robotic arm approximated through learning and additional data is used only if the performance of the current model falls short of the desired goal, making the entire procedure efficient in time and data consumption. MPC has also been developed for specific classes of systems; in particular, Park and Kwon (2002) considers MPC for discrete-time MJS when the dynamics are linear and uncertain. The benefit of predictions is especially notable when there is spatial or temporal structure to the problem. _Graph neural networks (GNNs)_Battaglia et al. (2018) are an example of a learning-based approach which encodes the topology of the graph for tasks such as graph classification and representation learning Kipf and Welling (2017). Recently, extensions of GNNs are also being used for congestion control problems in computer networks Rusek et al. (2020) and vehicle traffic forecasting Li et al. (2018); Cui et al. (2020); both applications deal with large-scale networks for which exploitable spatial and temporal repetitions are abundant.
### Contributions
This paper aims to demonstrate the effectiveness and benefit of _Pattern-Learning for Prediction (PLP)_ on controller synthesis for a class of linear MJS whose underlying mode-switching dynamics are unknown. In this context, "patterns" are recurrent finite-length sequences of modes that arise in the MJS; PLP uses these patterns to make control design more efficient by memorizing certain patterns to prevent the re-computation of the control laws associated with them, then scheduling control laws for patterns that may occur in the future. Our architecture for the class of uncertain linear discrete-time MJS consists of three components. First, _Mode Process Identification (ID)_ uses state and control sequences to learn the unknown statistics of the mode process; here, these are the transition probability matrix (TPM) and the mode at the current time. Second, _PLP_ uses the estimated TPM and current mode to compute quantities pertaining to the future occurrence of patterns. Third, _Control Law Design_ performs the appropriate optimization to compute the control law associated with each pattern when it first occurs.
We develop and integrate the PLP component in an otherwise straightforward architecture which leverages well-researched techniques in system identification and predictive control. In our analysis of recurring patterns, we use martingale theory to derive mathematical expressions for two quantities pertaining to the prediction of patterns: the expected minimum occurrence time of any pattern from some (user-defined) collection of patterns, and the probability of a pattern being the first to occur among the collection at the expected time. Our method operates on two key extensions of prior pattern-occurrence literature (e.g., Glaz et al. (2006), Pozdnyakov and Steele (2014)) which makes it applicable to real-world dynamics: the distribution of the mode process is unknown, and the mode process over time is not observable (e.g., the past and current modes the system has been in is unknown). To our knowledge, our proposed architecture is the first to apply a martingale method to the learning-based control of a stochastic system.
We provide an extensive comparison study that demonstrates the effects of PLP on a version of the proposed three-part architecture applied to the control of a network with dynamic topology, where the modes correspond to the different possible topology variations. For the purposes of this application, the controller architecture integrates two additional algorithms from existing literature. First, MPC is used to schedule future control policies to be applied at the occurrence times specified by PLP. Second, the novel system level synthesis (SLS) approach Wang et al. (2018); Anderson et al. (2019) formulates the actual optimization problem to be solved; we especially use the data-driven formulation (Xue and Matni, 2021; Alonso et al., 2022) because of the uncertainties in the system. The comparison is performed against two controllers based on SLS: a baseline SLS controller and an extension of SLS that was explicitly designed for topology robustness (Han, 2020). Our results offer insights into several important tradeoffs among four performance metrics which determine how PLP affects a controller's performance in stabilizing the system. Compared to the baseline controller, we show that a PLP controller is able to achieve better disturbance-rejection at significantly reduced computation time and redundancy. Furthermore, because Pattern-Learning can be viewed as an additional mode estimation algorithm for a suitable collection of patterns, it enables the estimated mode to match the true mode more often than without Pattern-Learning, boosting system identification performance. We show that PLP can reject disturbances as well as the topology-robust controller while consuming less computation time and control effort, then discuss the role of system scale on PLP design criteria such as the choice of pattern collection.
### Organization
The rest of the paper is organized as follows. In Sec. 2, we introduce the relevant notations, assumptions, and set up the uncertain linear discrete-time MJS considered throughout the entire paper. Sec. 3 provides a coherent overview of the three components that make up our proposed controller architecture. The subsequent sections go into further detail about the concrete choice of algorithms used to implement each component: Mode Process Identification (ID) in Sec. 4, PLP in Sec. 5, and (Predictive) Control Law Design in Sec. 6. We implement our controller architecture on a topology-changing network in Sec. 7, and compare its performance against a couple of baseline controllers without PLP. We conclude the paper in Sec. 8.
## 2 Setup and Preliminaries
We consider linear Markovian jump systems (MJS) of the following form:
\[\mathbf{x}[t+1]=A(\xi_{N[t]})\mathbf{x}[t]+B\mathbf{u}[t]+\mathbf{w}[t] \tag{1}\]
Here, \(\mathbf{x}[t]\!\in\!\mathbb{R}^{n_{s}}\) is the state, \(A(\xi_{N[t]})\!\in\!\mathbb{R}^{n_{s}\!\times\!n_{s}}\) is the dynamics matrix which changes according to the phase variable \(\xi_{N[t]}\), \(\mathbf{u}[t]\!\in\!\mathbb{R}^{n_{u}}\) is the control input. The external noise process \(\mathbf{w}[t]\!\in\!\mathbb{R}^{n_{s}}\) is unobservable and all we know about it is its upper norm bound \(\|\mathbf{w}[t]\|_{\infty}\leq\overline{w}\). For each \(t\!\in\!\mathbb{N}\), \(N[t]\) is the number of _modes_ (i.e., number of phase switches, or jumps arising from the underlying Markov chain) that have been observed by time
\begin{table}
\begin{tabular}{|c|c|} \hline Sym. & Definition \\ \hline \hline \(\Delta T\) & Timescale of mode w.r.t. system (Assum. 1) \\ \hline \(\hat{\varphi}_{n}^{(t)}\) & Est. current mode at time \(t\), \(n\triangleq N[t]\) (Sec. 3.1) \\ \hline \(\hat{P}^{(t)}[m_{1},m_{2}]\) & Est. TPM entry for \(m_{1},m_{2}\in\mathcal{X}\) (Sec. 3.1) \\ \hline \(\mathcal{C}[t]\) & Set of consistent modes at time \(t\) (6) \\ \hline \(\Psi[t]\) & Time-varying pattern collection (Defs. 2, 4) \\ \hline \(\boldsymbol{\psi}_{k}\) & A pattern from \(\Psi\), enum. \(k\in\{1,\cdots,K\}\) (Def. 2) \\ \hline \(L\) & Future horizon, pattern length (Def. 4) \\ \hline \(\mathcal{U}\) & Control law table in memory (Prop. 2) \\ \hline \(\hat{\tau}_{n}^{(t)}\) & Min. occurrence time of \(\Psi[t]\) (Def. 6, Rmk. 6) \\ \hline \(\hat{q}_{k}^{(t)}\) & First occurrence prob. of \(\boldsymbol{\psi}_{k}\!\in\!\Psi[t]\) (Def. 6, Rmk. 6) \\ \hline \(\Gamma\) & Augmented pattern collection (7) \\ \hline \(\boldsymbol{\gamma}_{\ell}\) & Augmented pattern, enum. \(\ell\!\leq\!|\mathcal{X}|^{2}|\Psi|\) (Def. 8) \\ \hline \(\mathcal{S}_{I}^{(0)}\) & Set of Case 0 initial-ending strings (Def. 9) \\ \hline \(\mathcal{S}_{I}^{(1)}\) & Set of Case 1 initial-ending strings (Def. 9) \\ \hline \(\mathcal{S}_{L}\) & Set of later-ending strings (Def. 9) \\ \hline \(\mathcal{S}\) & \(=\mathcal{S}_{I}\cup\mathcal{S}_{L}\), where \(\mathcal{S}_{I}=\mathcal{S}_{I}^{(0)}\cup\mathcal{S}_{I}^{(1)}\) (Def. 9) \\ \hline \(K_{I}^{(c)}\) & \(=|\mathcal{S}_{I}^{(c)}|\), \(c\in\{0,1\}\), cardinality (Def. 9) \\ \hline \(K_{L}\) & \(=|\mathcal{S}_{L}|\) cardinality (Def. 9) \\ \hline \(\boldsymbol{\beta}_{s}\) & Ending string in \(\mathcal{S}\), enum. \(s\in\{1,\cdots,K_{I}+K_{L}\}\) \\ \hline \(\mathbb{P}(\boldsymbol{\beta}_{s})\) & Prob. that \(\boldsymbol{\beta}_{s}\) terminates \(\{\xi_{n}\}\) (Def. 11) \\ \hline \(c_{\ell}\) & Initial reward of each type-\(\ell\) agent (Def. 10) \\ \hline \(R_{\tau}^{(\ell)}\) & Type-\(\ell\) cumu. net reward (Def. 13) \\ \hline \(R_{\tau}\) & Cumu. net reward (Def. 13) \\ \hline \(W_{s\ell}\) & Gain matrix entry \((s,\ell)\): total gain earned by type-\(\ell\) agents via ending string \(\boldsymbol{\beta}_{s}\) (Def. 12) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of some of the notations used in the controller architecture, listed in pairs of symbols (‘Sym.’) and definitions. Many of these notations are used to develop the Pattern-Learning component (Sec. 5).
\(t\). We say that the current _mode-index_ at time \(t\!\in\!\mathbb{N}\) is \(n\!\in\!\mathbb{N}\) if \(N[t]=n\), and the transition from mode \(\xi_{n-1}\) to \(\xi_{n}\) occurs at time \(T_{n}\!\triangleq\!\min\{s\!\in\!\mathbb{N}\,|\,N[s]=n\}\). The discrete mode process \(\{\xi_{n}\}_{n=1}^{\infty}\) takes values from the set \(\mathcal{X}\!\triangleq\!\{1,\cdots,M\}\), where \(M\!\in\!\mathbb{N}\), and is defined such that \(\xi_{n}\!:\!\Omega\to\mathcal{X}\) on probability space \((\Omega,\mathcal{F},\mathbb{P})\) with filtration \(\{\mathcal{F}_{n}\}_{n=1}^{\infty}\), \(\mathcal{F}_{n}\!\triangleq\!\sigma(\xi_{0},\xi_{1},\cdots,\xi_{n})\). We assume \(B\) is a known constant matrix.
A summary of some of the most important notations used throughout the paper is provided in Table 1. Throughout this paper, the letter \(\xi\) is specifically reserved to denote random variable modes. We distinguish \(\{\xi_{n}\}\) from the sequence of deterministic values \(\{\varphi_{n}\}\) which it takes, i.e., \(\xi_{n}\!=\!\varphi_{n}\) for all past mode-indices \(n\!\in\!\mathbb{N}\). Mode sequences denoted using other Greek letters are deterministic unless explicitly stated otherwise. We henceforth denote all sequences of the form \(\{\cdot\}_{n=1}^{\infty}\) using the shorthand notation \(\{\cdot\}\), e.g., \(\{\xi_{n}\}_{n=1}^{\infty}\!\equiv\!\{\xi_{n}\}\) and \(\{\mathcal{F}_{n}\}_{n=1}^{\infty}\!\equiv\!\{\mathcal{F}_{n}\}\), and denote \(\mathbf{x}\!\left[s\!:\!t\right]=\!\left[\mathbf{x}\!\left[s\right],\cdots,\mathbf{x}\!\left[t\right]\right]\) for any \(s\!<\!t\), likewise for \(\mathbf{u}\!\left[s\!:\!t\right]\), \(\mathbf{w}\!\left[s\!:\!t\right]\). For any two \(n_{1},n_{2}\!\in\!\mathbb{N}\) such that \(n_{1}\!<\!n_{2}\), we denote random vectors of mode sequences \(\xi_{n_{1}:n_{2}}\!\triangleq\!(\xi_{n_{1}},\xi_{n_{1}+1},\cdots,\xi_{n_{2}})\), and likewise \(\varphi_{n_{1}:n_{2}}\). We denote the concatenation of \(\mathbf{\alpha}\!\triangleq\!(\alpha_{1},\cdots,\alpha_{a})\) and \(\mathbf{\beta}\!\triangleq\!(\beta_{1},\cdots,\beta_{b})\) as \(\mathbf{\alpha}\circ\mathbf{\beta}\!\triangleq\!(\alpha_{1},\cdots,\alpha_{a},\beta_{1},\cdots,\beta_{b})\), where \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) are placeholders for either deterministic or random mode sequences.
**Assumption 1**.: The mode process \(\{\xi_{n}\}\) operates on a timescale which is \(\Delta T\!\in\!\mathbb{N}\) times longer than the timescale of the system (1), i.e. if \(N[t]=n\), then \(N[t+a\Delta T]=n+a\) for any \(a\!\in\!\mathbb{N}\). This means \(T_{n}-T_{n-1}=\Delta T\) for all \(n\!\in\!\mathbb{N}\). In certain applications, \(\Delta T\) can be interpreted as the minimum time needed between switching modes, and for simplicity we assume that its value is known. Consequently, we assume that \(N[t]\) and the sequence of transition times \(\{T_{n}\}\) are also known.
The mode process \(\{\xi_{n}\}\) is generated from an irreducible Markov chain over the state-space \(\mathcal{X}\) with transition probability matrix (TPM) denoted by \(P\!\in\!\mathbb{R}^{M\times M}\) and initial probability vector \(\mathbf{p}_{0}\!\triangleq\![\mathbf{p}_{0}(1),\cdots,\mathbf{p}_{0}(M)]^{ \top}\!\in\![0,1]^{M}\). We represent the entries of the TPM using brackets, so that \(P[m_{1},m_{2}]\) denotes the probability of the mode switching from \(m_{1}\) to \(m_{2}\), for any \(m_{1},m_{2}\!\in\!\mathcal{X}\). Suppose the probability distribution of \(\xi_{n}\) is given by \(\mathbf{p}_{n}\!\in\![0,1]^{M}\) at mode-index \(n\!\in\!\mathbb{N}\). Then the mode process dynamics are updated in the usual Markov chain way \(\mathbf{p}_{n+1}^{\top}=\mathbf{p}_{n}^{\top}P\). This implies that given \(\xi_{n}\!=\!\varphi_{n}\!\in\!\mathcal{X}\), we have \(\xi_{n+1}=m\) with probability \(P[\varphi_{n},m]\) for any \(m\in\mathcal{X}\).
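To make this setup concrete, the following Python sketch simulates dynamics of the form (1) under the timescale separation of Assumption 1; all numerical values (dimensions, matrices, the noise bound) are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem data for illustration only.
n_s, n_u, M, dT = 2, 1, 3, 5                    # state/control dims, #modes, Delta T
A = [np.eye(n_s) + 0.1 * rng.standard_normal((n_s, n_s)) for _ in range(M)]
B = np.array([[0.0], [1.0]])
P = np.array([[0.0, 0.5, 0.5],                   # TPM: rows sum to 1, zero diagonal
              [0.7, 0.0, 0.3],
              [0.4, 0.6, 0.0]])
w_bar = 0.05                                     # noise bound ||w[t]||_inf <= w_bar

def simulate(T, x0, mode0, policy):
    """Roll out (1); the mode jumps every dT steps according to the TPM (Assumption 1)."""
    x, mode = np.array(x0, dtype=float), mode0
    xs, modes = [x.copy()], [mode]
    for t in range(T):
        if t > 0 and t % dT == 0:                # mode switch times T_n = n * dT
            mode = int(rng.choice(M, p=P[mode]))
        u = policy(t, x)
        w = rng.uniform(-w_bar, w_bar, size=n_s) # any disturbance within the bound
        x = A[mode] @ x + (B @ u).ravel() + w
        xs.append(x.copy()); modes.append(mode)
    return np.array(xs), modes

xs, modes = simulate(40, np.ones(n_s), 0, policy=lambda t, x: np.zeros(n_u))
```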
**Assumption 2**.: To demonstrate PLP and focus on the mode process, we take the simpler setting where the state \(\mathbf{x}[t]\) is fully-observable; we thus design state-feedback control policies. Following the setup of bounded model errors in robust control theory, we assume the noise bound \(\overline{w}\) is known or otherwise attainable from small-gain theorems (Zhou et al., 1996) or techniques based on structured singular values (Doyle, 1982). For the mode process, in addition to knowing the values of \(\Delta T\), \(N[t]\), and \(\{T_{n}\}\) (see Assumption 1), we consider the following settings. The true realizations \(\{\varphi_{n}\}\) of the mode process \(\{\xi_{n}\}\) are unknown over time, but the set \(\mathcal{X}\) of values that it takes and the initial mode \(\xi_{0}\!=\!\varphi_{0}\) are known. The sparsity structure of the TPM \(P\) is known, but the values of the nonzero entries are unknown.
## 3 Outline of the Controller Architecture
The controller architecture we propose is visualized in Fig. 1. It consists of three main parts: 1) Mode Process Identification (ID), 2) Pattern-Learning for Prediction (PLP) on the mode process, and 3) Control Law Design for the system dynamics. In this section, we provide a brief description of each part-including an introduction of the main notations used-to provide a coherent view of the architecture (Fig. 1) as a whole. The details of each individual part and the choice of algorithms used to implement them are discussed in the subsequent sections: Mode Process ID in Sec. 4, PLP in Sec. 5, and Control Law Design in Sec. 6. We emphasize that our choice of algorithm to implement each component is unique to the uncertain linear discrete-time MJS setup described in Sec. 2 and that alternative implementations can be made for other dynamics. For example, in our paper Han et al. (2022), the controller architecture was designed for the specific application of vehicle traffic congestion control over urban networks of signalized intersections, in which the problem is set up as a Markov decision process.
### Mode Process Identification Overview
For each time \(t\!\in\!\mathbb{N}\) and corresponding mode-index \(n\!\triangleq\!N[t]\), the system maintains the following estimated statistics about the mode process \(\{\xi_{n}\}\) and system dynamics (1): an estimate \(\hat{P}^{(t)}\) of the true TPM \(P\), and an estimate \(\hat{\varphi}_{n}^{(t)}\) of the current mode \(\varphi_{n}\). The first part of our architecture, _Mode Process Identification (ID)_, is responsible for learning these unknown statistics of the mode process. Due to this uncertainty in the dynamics, we use hats and (\(t\)) superscripts to emphasize that these quantities are estimates which change over time; as we will see in Sec. 4, this is because modes are estimated based on state and control trajectories \(\mathbf{x}[0\!:\!t]\), \(\mathbf{u}[0\!:\!t]\).
### Setup of Pattern-Learning for Prediction
Once \(\hat{P}^{(t)}\) and \(\hat{\varphi}_{n}^{(t)}\) are obtained from Mode Process ID (Sec. 3.1) for each \(t\!\in\!\mathbb{N}\) and \(n\!\triangleq\!N[t]\), _Pattern-Learning for Prediction (PLP)_
Figure 1: A flow diagram representation of the proposed controller architecture specifically for linear MJS dynamics of the form (1). Circles represent inputs to the algorithm; user-defined inputs are colored blue and unknown/unobservable parameters are colored gray. The architecture consists of three main parts (violet boxes): 1) Mode Process ID (Sec. 3.1; Sec. 4), 2) Pattern-Learning for Prediction (Sec. 3.2; Sec. 5), 3) and Control Law Design (Sec. 3.3; Sec. 6).
in Fig. 1 computes additional statistics about the mode process (called the "pattern-occurrence quantities") that facilitate the creation of _predictions_, which will be used in the Control Law Design component.
**Definition 1** (Prediction Horizon).: Define the constant \(L\in\mathbb{N}\) to be the _prediction horizon_ on the mode process, i.e., the length of the sequences of modes.
In this paper, "patterns" refer to length-\(L\) sequences of modes in the mode process underlying the system (1), formalized in the following definition.
**Definition 2** (Patterns).: Let \(L\in\mathbb{N}\) be the prediction horizon from Definition 1. Define the set \(\Psi\triangleq\{\psi_{1},\cdots,\psi_{K}\}\) to be a _collection of patterns_, where each \(\psi_{k}\triangleq(\psi_{k,1},\cdots,\psi_{k,L})\) is a mode sequence with length \(L\) and elements \(\psi_{k,j}\in\mathcal{X}\). Each \(\psi_{k}\) is referred to as a _(mode) pattern_ if we are interested in observing its occurrence in the mode process \(\{\xi_{n}\}\) over time (e.g., because it models a system fault).
It is possible for the patterns in \(\Psi\) to have different lengths, e.g., \(\psi_{k}\triangleq(\psi_{k,1},\cdots,\psi_{k,d_{k}})\) for any \(d_{k}\in\mathbb{N}\). However, in the context of predicting the future modes of an MJS like (1), it is probabilistically more likely to observe patterns with shorter lengths; for balance, we keep each pattern the same length \(L\).
**Definition 3**.: A pattern or an arbitrary sequence of modes \((\alpha_{1},\cdots,\alpha_{a})\) with length \(a\in\mathbb{N}\) is _feasible with respect to \(\hat{P}^{(t)}\)_ if it can be generated by the Markov chain with TPM \(\hat{P}^{(t)}\), i.e., \(\hat{P}^{(t)}[\alpha_{i},\alpha_{i+1}]>0\) for all \(i\in\{1,\cdots,a-1\}\).
Because the statistics of the mode process are estimates instead of true values, it becomes necessary to consider a pattern collection \(\Psi\) (from Definition 2) which varies with time.
**Definition 4** (Time-Varying Collection).: We construct the collection of patterns \(\Psi[t]\), with time-varying cardinality \(K[t]\), to be a subset of feasible length-\(L\) future sequences of modes given the estimated current mode \(\hat{\varphi}^{(t)}_{n}\):
\[\Psi[t]\triangleq\{\psi^{(t)}_{1},\cdots,\psi^{(t)}_{K[t]}\}\subseteq\{\text{feasible}\;(\alpha_{1},\cdots,\alpha_{L})\,|\,\hat{P}^{(t)}[\hat{\varphi}^{(t)}_{n},\alpha_{1}]>0,\;\alpha_{i}\in\mathcal{X}\} \tag{2}\]
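For illustration, a minimal sketch that enumerates the right-hand side of (2), i.e., all feasible length-\(L\) continuations of the estimated current mode; `P_hat` and `phi_hat` are assumed to come from Mode Process ID, and \(\Psi[t]\) may be taken as any subset of the returned list.

```python
from itertools import product

def feasible_patterns(P_hat, phi_hat, L, modes):
    """All length-L mode sequences (alpha_1, ..., alpha_L) that are feasible
    continuations of the estimated current mode phi_hat, per (2)."""
    out = []
    for alpha in product(modes, repeat=L):
        path = (phi_hat,) + alpha
        if all(P_hat[a, b] > 0 for a, b in zip(path, path[1:])):
            out.append(alpha)
    return out

# e.g. Psi_t = feasible_patterns(P_hat, phi_hat=0, L=3, modes=range(P_hat.shape[0]))
```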
**Definition 5** (Pattern-Occurrence Times).: Denote \(n\triangleq N[t]\in\mathbb{N}\) to be the current mode-index at current time \(t\in\mathbb{N}\), and suppose the estimated current mode is \(\xi_{n}=\hat{\varphi}^{(t)}_{n}\). Then for each of the patterns in the collection \(\Psi\) from Definition 2, define the following stopping times for each \(k\in\{1,\cdots,K[t]\}\):
\[\hat{\tau}^{(t)}_{k|n}\triangleq\min\{i\in\mathbb{N}\,|\,\xi_{n}=\hat{\varphi}^{(t)}_{n},\;\xi_{n+i-L+1:n+i}=\psi^{(t)}_{k}\} \tag{3}\]
**Definition 6** (Time and Probability of First Occurrence).: Under the setup of Definition 5, suppose \(\xi_{n+\hat{\tau}^{(t)}_{k|n}-L+1:n+\hat{\tau}^{(t)}_{k|n}}=\psi^{(t)}_{k}\). Then define the following for the collection \(\Psi\):

\[\hat{\tau}^{(t)}_{n}\triangleq\min_{k\in\{1,\cdots,K[t]\}}\hat{\tau}^{(t)}_{k|n},\qquad\hat{q}^{(t)}_{k}\triangleq\mathbb{P}(\hat{\tau}^{(t)}_{n}=\hat{\tau}^{(t)}_{k|n}) \tag{4}\]
**Problem 1** (Pattern-Occurrence Quantities).: To generate predictions from the mode process, we are interested in characterizing the following _pattern-occurrence quantities_ described in Definition 6.
* the estimate \(\mathbb{E}\{\hat{\tau}^{(t)}_{n}\}\) of the _mean minimum occurrence time_, which counts the number of mode-indices to observe the occurrence of any pattern from \(\Psi[t]\), given the estimated current mode \(\hat{\varphi}^{(t)}_{n}\).
* the estimated _first-occurrence probabilities_\(\{\hat{q}^{(t)}_{k}\}_{k=1}^{K[t]}\), where \(\hat{q}^{(t)}_{k}\in[0,1]\) is the probability that pattern \(\psi_{k}\in\Psi[t]\) is the first to be observed among all of \(\Psi[t]\).
Again, we keep the hat and superscript \((t)\) on the \(\tau\) and \(q_{k}\) quantities to emphasize that they depend on \(\hat{P}^{(t)}\) and \(\hat{\varphi}^{(t)}_{n}\) from Sec. 3.1, which may change over time.
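Before the closed-form results of Sec. 5, the two quantities in Problem 1 can also be estimated by brute-force Monte Carlo simulation of the estimated chain; a sketch assuming `P_hat`, the estimated current mode, and a pattern collection are given.

```python
import numpy as np

def mc_pattern_occurrence(P_hat, phi_hat, patterns, n_runs=5000, seed=1):
    """Monte Carlo estimates of E[tau_n] and the first-occurrence probabilities q_k."""
    rng = np.random.default_rng(seed)
    M, L = P_hat.shape[0], len(patterns[0])
    taus, firsts = [], np.zeros(len(patterns))
    for _ in range(n_runs):
        window, mode, i = [], phi_hat, 0
        while True:
            i += 1
            mode = int(rng.choice(M, p=P_hat[mode]))
            window = (window + [mode])[-L:]                  # last L observed modes
            hits = [k for k, p in enumerate(patterns) if tuple(window) == tuple(p)]
            if hits:
                taus.append(i)
                firsts[rng.choice(hits)] += 1                # ties broken at random
                break
    return float(np.mean(taus)), firsts / n_runs
```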
### Predictive Control Law Design: General Formulation
Let \(g:\mathbb{N}\times\mathcal{X}\times\mathbb{R}^{n_{s}}\rightarrow\mathbb{R}^{n_{u}}\) be a generic function representing the mode-dependent state-feedback control law designed by the Control Law Design component in Fig. 1. The Control Law Design component uses the expected occurrence time \(\mathbb{E}\{\hat{\tau}^{(t)}_{n}\}\) and probabilities \(\{\hat{q}_{k}\}_{k=1}^{K[t]}\) computed from PLP (Sec. 3.2) to store the control policies of previously observed patterns and to schedule control policies in advance. This procedure is described more carefully in the following two propositions.
**Proposition 1** (Scheduling Future Control Inputs).: Suppose we are given the estimated pattern-occurrence quantities \(\mathbb{E}\{\hat{\tau}^{(t)}_{n}\}\) and \(\{\hat{q}^{(t)}_{k}\}_{k}\) from PLP. Let \(\tau\equiv\mathbb{E}\{\hat{\tau}^{(t)}_{n}\}\) be the shorthand notation (with a temporary abuse of notation) for the estimated expected minimum occurrence time for the specific pattern collection \(\Psi[t]\) given estimated current mode \(\hat{\varphi}^{(t)}_{n}\). To schedule a control law in advance, we simply choose the pattern \(\psi^{(t)}_{k}\in\Psi[t]\) corresponding to the largest occurrence probability \(\hat{q}^{(t)}_{k}\). Then, until mode-index \(\tau\), the future sequence of control inputs \(\mathbf{u}[t:T_{n+\tau+1}-1]\) is
\[\mathbf{u}[s] =g(s,\psi^{(t)}_{k,1},\mathbf{x}[s]),\ s\in[t:T_{n+1}-1] \tag{5}\] \[\vdots\] \[\mathbf{u}[s] =g(s,\psi^{(t)}_{k,L},\mathbf{x}[s]),\ s\in[T_{n+\tau}:T_{n+\tau+1 }-1]\]
Aside from operating on a longer timescale (mode process instead of system dynamics), Proposition 1 is similar in principle to standard model predictive control (MPC): only the first control law in the sequence (5), corresponding to the first mode \(\psi^{(t)}_{k,1}\), is applied at the next mode-index \(n+\lfloor\tau\rfloor\).
**Proposition 2** (Storing Past Control Inputs in Memory).: Define \(\mathcal{U}\) to be a table which maps mode patterns \(\psi^{(t)}_{k}\) to control policies \(\{g(t,\psi^{(t)}_{k,1},\cdot),\cdots,g(t,\psi^{(t)}_{k,L},\cdot)\}\) and the accumulated state and control trajectories over each occurrence time. When \(\psi^{(t)}_{k}\in\Psi[t]\) is first observed, a new entry \(\mathcal{U}[\psi^{(t)}_{k}](t)\), defined by (5) for the specific \(\psi^{(t)}_{k}\), is created. For anticipated future occurrences of \(\psi^{(t)}_{k}\), the system schedules control inputs using \(\mathcal{U}[\psi^{(t)}_{k}](t)\) in the form of (5). The entry for \(\psi^{(t)}_{k}\) is then updated at every occurrence time after its first.
Our controller architecture extends traditional uncertain system controllers (which borrow techniques from system identification and predictive control) via the incorporation of PLP. We now provide in-depth discussions around each component in the
following Sec. 4, 5, and 6, especially the concrete algorithms we choose to implement each component for the topology-switching network application to be demonstrated in Sec. 7. We again emphasize that our choices were made specifically for the MJS setup in Sec. 2 and that other implementations of the controller architecture are possible. For example, our paper Han et al. (2022) describes a version for the problem of vehicle traffic congestion control, which explicitly includes a memory component to reduce the size of the table \(\mathcal{U}\).
## 4 Mode Process Identification
The Mode Process ID component estimates the current mode \(\hat{\varphi}^{(t)}_{N[t]}\) and the TPM \(\hat{P}^{(t)}\). First, \(\hat{\varphi}^{(t)}_{N[t]}\) is estimated using the _consistent set narrowing_ approach, which is a variation of nested convex body chasing used for model approximation in Ho et al. (2021). Second, \(\hat{P}^{(t)}\) is estimated using empirical counts based on \(\hat{\varphi}^{(t)}_{N[t]}\) and on estimates of the previous modes \(\{\hat{\varphi}^{(s)}_{N[s]}\}_{s=0}^{t-1}\).
### Consistent Set Narrowing
Because the distribution of the external noise process \(\mathbf{w}[t]\) is unknown other than its norm bound, we employ consistent set narrowing, which checks the set of modes that are 'consistent' with the state/control trajectories. This method was employed in (4) of Han (2020) and is similar to the more general nested convex body chasing approach described in Ho et al. (2021), which was used for model approximation and selection for designing robust controls.
Denote the current mode-index as \(n\triangleq N[t]\in\mathbb{N}\). By Assumption 1, there are at most \(\Delta T-1\) state and control values, \(\mathbf{x}[T_{n}:t]\) and \(\mathbf{u}[T_{n}:t]\), associated with a single mode \(\varphi_{n}\).
**Definition 7** (Consistent Sets).: Over time, we construct a sequence of _consistent sets_ \(\{\mathcal{C}[t]\}_{t\in\mathbb{N}}\) in the following way. For each \(n\in\mathbb{N}\), we initially set \(\mathcal{C}[T_{n}]\triangleq\mathcal{X}\) because no observations about the current mode \(\varphi_{n}\) have been made yet. Then for each \(t\in(T_{n},T_{n+1})\), if \(\mathcal{C}[t-1]\neq\emptyset\), a new consistent set is formed by retaining all modes \(m\in\mathcal{C}[t-1]\) from the previous consistent set \(\mathcal{C}[t-1]\) for which each one-step tuple of state and control \((\mathbf{x}[r],\mathbf{x}[r+1],\mathbf{u}[r])\) satisfies the norm-boundedness condition of the noise \(\mathbf{w}[r]\):

\[\mathcal{C}[t]=\left\{m\in\mathcal{C}[t-1]\;\middle|\;\left\|\mathbf{x}[r+1]-A(m)\mathbf{x}[r]-B\mathbf{u}[r]\right\|_{\infty}\leq\overline{w},\;\forall r\in\{T_{n},\cdots,t-1\}\right\} \tag{6}\]
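A minimal sketch of the narrowing step in (6), assuming the candidate matrices \(A(m)\), the input matrix \(B\), and the bound \(\overline{w}\) from Sec. 2 are available; the tie-breaking rule used when more than one mode remains is not specified here.

```python
import numpy as np

def narrow_consistent_set(C_prev, x_r, u_r, x_next, A_list, B, w_bar):
    """Keep the modes m in C[t-1] whose one-step residual respects ||w||_inf <= w_bar."""
    return {m for m in C_prev
            if np.max(np.abs(x_next - A_list[m] @ x_r - (B @ u_r).ravel())) <= w_bar}

# Usage over one mode interval [T_n, t):
# C = set(range(M))                                # C[T_n] = X
# for r in range(T_n, t):
#     C = narrow_consistent_set(C, xs[r], us[r], xs[r + 1], A, B, w_bar)
# phi_hat = next(iter(C)) if len(C) == 1 else ...  # otherwise apply a tie-breaking rule
```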
### Empirical Estimation of the TPM
For any \(n\in\mathbb{N}\), the estimate of \(\varphi_{n}\) is most accurate when the maximum possible amount of data from the system has been obtained to create the estimate, i.e., among all \(t\in[T_{n},T_{n+1})\), the value of \(\hat{\varphi}_{n}^{(t)}\) is most accurate at time \(t=T_{n+1}-1\). For general \(T_{N[t]}<t<T_{N[t]+1}\), \(\hat{P}^{(t)}\) is estimated based on \(\hat{\varphi}_{N[t]}^{(t)}\) and only the most accurate estimates of the previous modes \(\{\hat{\varphi}_{n^{\prime}}^{(T_{N[t]}-1)}\}_{n^{\prime}=0}^{N[t]-1}\). Thus, in the TPM estimation procedure, there is only one estimate associated with each true mode \(\varphi_{n}\). For simplicity of notation in this section only, we fix \(n\triangleq N[t]\) and denote the shorthand \(\hat{\varphi}_{n^{\prime}}\equiv\hat{\varphi}_{n^{\prime}}^{(T_{n}-1)}\) for \(n^{\prime}<n\) and \(\hat{\varphi}_{n}\equiv\hat{\varphi}_{n}^{(t)}\).
If \(t=T_{n}\) for some \(n\in\mathbb{N}\), estimating \(\hat{P}^{(t)}\) given \(\{\hat{\varphi}_{n^{\prime}}\}_{n^{\prime}=1}^{n}\) is straightforward. By Assumption 2, it is known which entries of the TPM are nonzero. Thus, we initialize \(\hat{P}^{(t)}\) to be an \(M\times M\) matrix with a \(1\) in the nonzero entries; when normalized, this corresponds to a stochastic matrix which has uniform distribution over the feasible transitions (e.g., \(1/3\) probability each for a row with three nonzero entries) but for estimation purposes, we keep the estimate of the TPM unnormalized until the end of the simulation duration. For each consecutive pair of transitions \((\hat{\varphi}_{n^{\prime}},\hat{\varphi}_{n^{\prime}+1})\) for \(n^{\prime}\in[0,\cdots,n-1)\), we take \(\hat{P}^{(t)}[\hat{\varphi}_{n^{\prime}},\hat{\varphi}_{n^{\prime}+1}]=\hat{P} ^{(t)}[\hat{\varphi}_{n^{\prime}},\hat{\varphi}_{n^{\prime}+1}]+1\).
If \(T_{n}<t<T_{n+1}\) for some \(n\in\mathbb{N}\), we have two separate subcases. If \(\hat{\varphi}_{n}^{(t-1)}=\hat{\varphi}_{n}^{(t)}\), then we simply follow the approach above and compute \(\hat{P}^{(t)}\) using the sequence \(\{\hat{\varphi}_{n^{\prime}}\}_{n^{\prime}=1}^{n}\). Otherwise, if \(\hat{\varphi}_{n}^{(t-1)}\neq\hat{\varphi}_{n}^{(t)}\), then we again follow the approach above and compute \(\hat{P}^{(t)}\), but using the sequence \(\{\hat{\varphi}_{n^{\prime}}\}_{n^{\prime}=1}^{n-1}\) instead. To incorporate the mode estimate at current mode-index \(n\), we first need to reset the TPM estimate of the last transition via \(\hat{P}^{(t)}[\hat{\varphi}_{n-1},\hat{\varphi}_{n}^{(t-1)}]=\hat{P}^{(t)}[\hat{\varphi}_{n-1},\hat{\varphi}_{n}^{(t-1)}]-1\); then we update as usual \(\hat{P}^{(t)}[\hat{\varphi}_{n-1},\hat{\varphi}_{n}^{(t)}]=\hat{P}^{(t)}[\hat{\varphi}_{n-1},\hat{\varphi}_{n}^{(t)}]+1\). Once the mode sequence estimates have been processed until current time \(t\), we update \(\hat{P}^{(t)}\) such that each row is normalized to sum to \(1\).
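The counting scheme above reduces to a few lines; a sketch assuming the (most accurate) mode estimates and the known sparsity mask of \(P\) are available, with row normalization deferred as in the text.

```python
import numpy as np

def tpm_counts(mode_estimates, sparsity_mask):
    """Unnormalized TPM estimate: start from the known sparsity pattern (1 in each
    feasible entry) and add one count per estimated consecutive mode transition."""
    counts = sparsity_mask.astype(float).copy()
    for a, b in zip(mode_estimates, mode_estimates[1:]):
        counts[a, b] += 1.0
    return counts

def normalize_rows(counts):
    """Row-normalize the counts into a stochastic matrix."""
    return counts / counts.sum(axis=1, keepdims=True)

# If the current-mode estimate changes between t-1 and t, decrement the count of the
# superseded transition before incrementing the new one, as described in the text above.
```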
**Remark 2**.: The need for including Mode Process ID in the controller architecture of Fig. 1 is closely related to the notion of _mode observability_, which has been studied extensively in the literature (Vidal et al., 2002; Alessandri et al., 2005; Baglietto et al., 2007; Schuurmans and Patrinos, 2021). One common setup is that the measurements come from a (linear) noisy measurement equation such that \(\mathbf{y}[t]\neq\mathbf{x}[t]\), and mode observability conditions are derived from the imperfect observations \(\mathbf{y}[t]\) of the state \(\mathbf{x}[t]\). Also, the mode process is assumed to operate on the same timescale as the system dynamics. Compared to these methods, the algorithms we chose for implementing Mode Process ID hinge upon assumptions that simplify the mode observability problem. For example, in Assumption 2, the state \(\mathbf{x}[t]\) is observable and in Assumption 1, we fix the mode switching times to be constant and deterministic rather than stochastic.
We again emphasize that this is because the focus of our paper is on the impact of PLP on control design rather than mode observability, and we aimed to set up a simple scenario to show that our approach can be used when the system has uncertainties. Thus, not all of our assumptions are limiting; for example, compared to our approach, Vidal et al. (2002) explicitly imposes that the external noise processes \(\{\mathbf{w}[t]\}_{t},\{\mathbf{v}[t]\}_{t}\) are Gaussian white and neither Vidal et al. (2002) nor Alessandri et al. (2005) consider the impact of control.
**Remark 3**.: We qualitatively discuss some conditions for mode observability in our specific implementation of Mode Process ID. First, the modes \(\{A(1),\cdots,A(M)\}\) cannot be too "similar" to each other with respect to a certain metric \(d\), (e.g., if \(d(A(m_{1}),A(m_{2}))<\epsilon\) for some threshold \(\epsilon>0\) and two distinct modes \(m_{1}\neq m_{2}\) and \(m_{1},m_{2}\in\mathcal{X}\)). Second, when \(\Delta T\) is too short, the consistent set may not converge to a single mode even if \(d(A(m_{1}),A(m_{2}))\geq\epsilon\) for all pairs \((m_{1},m_{2})\in\mathcal{X}\) such that \(m_{1}\neq m_{2}\). Rigorous derivation of these conditions for our specific use case are deferred to future work. This includes designing \(d\) and \(\epsilon\) for the consistent set narrowing approach, and deriving conditions on \(\Delta T\) and the set \(\{A(1),\cdots,A(M)\}\) for guaranteed convergence towards a singleton consistent set. Although these conditions are contingent upon our simplifying assumptions, they are expected to be similar to those derived in the aforementioned literature. For the purposes of our simulations in Sec. 7, \(\Delta T\) and the different modes are empirically selected.
## 5 Pattern-Learning for Prediction
The Pattern-Learning component is implemented by using martingale theory to derive closed-form expressions for the pattern-occurrence quantities from Problem 1, two important statistics that will aid with prediction on the mode process. With martingales, the resulting formulas admit a clearer mathematical interpretation. In scan statistics, martingales also allow for a more accurate test of experiment results than hypothesis testing (Guerriero et al., 2009).
### Construction Based on Game Interpretation
**Remark 4**.: We simplify the notation and remove the hats and the superscripts of (\(t\)) in the estimated quantities throughout Sec. 5 only. That is, for each \(n\) and \(t\) satisfying \(N[t]=n\), we denote \(\varphi_{n}\equiv\hat{\varphi}_{n}^{(t)}\), \(P\equiv\hat{P}^{(t)}\), \(\tau_{k|n}\equiv\hat{\tau}_{k|n}^{(t)}\), \(\tau_{n}\equiv\hat{\tau}_{n}^{(t)}\), and \(q_{k}\equiv\hat{q}_{k}^{(t)}\). Furthermore, we also remove the bracket \([t]\) in the pattern collection \(\Psi[t]\) (see Definition 4), and use the notation \(\Psi\) instead. However, we emphasize the understanding that the computation done at time \(t\) uses the original estimates and the time-varying pattern collection.
Note that there are constraints on the degrees of freedom of possible Markov chain sample path trajectories. Thus, we take inspiration from Pozdnyakov (2008) and consider the occurrence of feasible augmented patterns that extend each pattern by up to two extra modes.
**Definition 8** (Augmented Pattern).: Suppose we are given a collection of patterns \(\Psi\) (from Definition 4). An _augmented pattern_\(\gamma\) corresponding to a pattern \(\psi_{k}\in\Psi\) is defined by prefixing two modes \(m_{1},m_{2}\in\mathcal{X}\) such that the resulting sequence is feasible in the sense of Definition 3. We define the _augmented collection_
\[\Gamma\triangleq\{\text{feasible}\;(m_{1},m_{2})\circ\psi_{k}\;|\;m_{1},m_{2} \in\mathcal{X};\psi_{k}\in\Psi\} \tag{7}\]
to be the collection of augmented patterns, and we define \(K_{L}\in\mathbb{N}\) to be its cardinality. We enumerate each augmented
pattern \(\mathbf{\gamma}_{\ell}\) in the augmented collection \(\Gamma\) using subscript \(\ell\in\{1,\cdots,K_{L}\}\). Here, \(\circ\) denotes the concatenation operation and each augmented pattern has length \(L+2\).
It is easier to solve Problem 1 by conditioning on observing specific types of ending strings, formally defined below.
**Definition 9** (Ending Strings).: Given the collection of patterns \(\Psi\) and current mode-index \(n\in\mathbb{N}\), suppose we let the mode sequence \(\{\xi_{n},\xi_{n+1},\cdots\}\) run until one of the patterns from \(\Psi\) has been observed. Then an _ending string_ associated with pattern \(\mathbf{\psi}_{k}\in\Psi\) terminates the mode process at mode-index \(\tau_{n}>n\) if \(\xi_{\tau_{n}-L+1:\tau_{n}}=\mathbf{\psi}_{k}\). We characterize two primary types of ending strings:
* An _initial-ending string_\(\mathbf{\beta}\) occurs when part of an augmented pattern is observed immediately after the current mode. We classify initial-ending strings into two further subcases:
* A _Case \(0\) initial-ending string_ \(\mathbf{\beta}\triangleq\mathbf{\psi}_{k}\) occurs when \(\xi_{n+1:n+L}=\mathbf{\psi}_{k}\). Define \(\mathcal{S}_{I}^{(0)}\) to be the set of Case \(0\) initial-ending strings with cardinality \(K_{I}^{(0)}\in\mathbb{N}\).
* Let \(m_{1}\in\mathcal{X}\) be such that the above ending strings are feasible. A _Case \(1\) initial-ending string_ \(\mathbf{\beta}\triangleq(m_{1})\circ\mathbf{\psi}_{k}\) occurs when \(\xi_{n+1:n+L+1}=(m_{1})\circ\mathbf{\psi}_{k}\). Define \(\mathcal{S}_{I}^{(1)}\) to be the set of Case \(1\) initial-ending strings with cardinality \(K_{I}^{(1)}\in\mathbb{N}\).
* Let \(m_{1},m_{2}\in\mathcal{X}\) be such that the above ending strings are feasible, and let \(*\) be a placeholder for any feasible sequence of modes (see Definition 3) including the empty string. A _later-ending string_ \((*,m_{1},m_{2})\circ\mathbf{\psi}_{k}\) occurs when an augmented pattern is observed long after the current mode, i.e., when \(\tau_{n}>n+L+1\) and \(\xi_{\tau_{n}-L-1:\tau_{n}}=(m_{1},m_{2})\circ\mathbf{\psi}_{k}\). Define \(\mathcal{S}_{L}\triangleq\{(*)\circ\mathbf{\gamma}_{\ell}\mid\mathbf{\gamma}_{\ell}\in\Gamma\}\) to be the set of later-ending strings, with the same cardinality \(K_{L}\) as \(\Gamma\).
Define \(\mathcal{S}_{I}\triangleq\mathcal{S}_{I}^{(0)}\cup\mathcal{S}_{I}^{(1)}\) with cardinality \(K_{I}=K_{I}^{(0)}+K_{I}^{(1)}\), and let the _set of ending strings_ be \(\mathcal{S}=\mathcal{S}_{I}\cup\mathcal{S}_{L}\). We enumerate each ending string \(\mathbf{\beta}_{s}\) in \(\mathcal{S}\) using the subscript \(s\in\{1,\cdots,K_{I}+K_{L}\}\).
**Example 1** (Ending Strings Construction).: We provide intuition behind the notation described by Definition 9. Let \(M=4\), i.e., \(\mathcal{X}=\{1,2,3,4\}\), and let the (estimated) TPM \(P\) be such that \(P[m_{1},m_{2}]>0\) for all \(m_{1},m_{2}\in\mathcal{X}\) except when \(m_{1}=m_{2}\) and when \((m_{1},m_{2})\in\{(3,2),(2,3),(3,4),(4,3)\}\). The pattern collection consists of \(K=3\) patterns \(\Psi=\{\mathbf{\psi}_{1},\mathbf{\psi}_{2},\mathbf{\psi}_{3}\}\) of length \(L=3\), with \(\mathbf{\psi}_{1}=(213)\), \(\mathbf{\psi}_{2}=(412)\), and \(\mathbf{\psi}_{3}=(314)\). The augmented pattern collection is defined as \(\Gamma\triangleq\cup_{i=1}^{3}\Gamma_{i}\) with \(\Gamma_{1}=\{\mathbf{\alpha}\circ\mathbf{\psi}_{1}|\mathbf{\alpha}\in\{(14),(21),(24),(31),(41)\}\}\), \(\Gamma_{2}=\{\mathbf{\alpha}\circ\mathbf{\psi}_{2}|\mathbf{\alpha}\in\{(12),(21),(31),(41),(42)\}\}\), \(\Gamma_{3}=\{\mathbf{\alpha}\circ\mathbf{\psi}_{3}|\mathbf{\alpha}\in\{(21),(31),(41)\}\}\). The number of later-ending strings is \(K_{L}=13\). Suppose the (estimated) current mode is \(\varphi_{n}=2\). The set of feasible Case \(0\) initial-ending strings is \(\mathcal{S}_{I}^{(0)}=\{\mathbf{\psi}_{2}\}\) since \(P[2,4]>0\). For Case \(1\) initial-ending strings, \(\mathcal{S}_{I}^{(1)}=\{(1)\circ\mathbf{\psi}_{1},(4)\circ\mathbf{\psi}_{1},(1)\circ\mathbf{\psi}_{2},(1)\circ\mathbf{\psi}_{3}\}\). Thus, \(K_{I}^{(0)}=1\) and \(K_{I}^{(1)}=4\).
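The construction in Example 1 can be reproduced mechanically; the sketch below enumerates the augmented collection and the initial-ending strings for the stated TPM sparsity (modes 1–4 relabeled as 0–3) and recovers the counts \(K_{L}=13\), \(K_{I}^{(0)}=1\), \(K_{I}^{(1)}=4\).

```python
from itertools import product

X = range(4)                                    # modes {1,2,3,4} relabeled as 0..3
blocked = {(0, 0), (1, 1), (2, 2), (3, 3),      # diagonal entries are zero
           (1, 2), (2, 1), (2, 3), (3, 2)}      # relabeled (2,3),(3,2),(3,4),(4,3)

def feasible(seq):
    return all((a, b) not in blocked for a, b in zip(seq, seq[1:]))

Psi = [(1, 0, 2), (3, 0, 1), (2, 0, 3)]         # psi_1=(213), psi_2=(412), psi_3=(314)
Gamma = [(m1, m2) + psi for psi in Psi
         for m1, m2 in product(X, repeat=2) if feasible((m1, m2) + psi)]
phi = 1                                         # current mode 2 relabeled as 1
S_I0 = [psi for psi in Psi if feasible((phi,) + psi)]
S_I1 = [(m,) + psi for psi in Psi for m in X if feasible((phi, m) + psi)]
print(len(Gamma), len(S_I0), len(S_I1))         # prints: 13 1 4
```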
**Definition 10** (Agents).: Let \(\Gamma\) be the augmented pattern collection associated with original collection \(\Psi\) (see Definition 8). We introduce the notion of an _agent_, which observes the mode process \(\{\xi_{n}\}\) and accumulates _rewards_ at each mode-index with the goal of observing a pattern from \(\Gamma\) (vicariously observing a pattern from \(\Psi\)). We refer to a _type-\(\ell\) agent_ as an agent which accumulates rewards by specifically observing the occurrence of \(\mathbf{\gamma}_{\ell}\in\Gamma\) in \(\{\xi_{n}\}\). At each mode-index \(n\in\mathbb{N}\), \(K_{L}\) new agents, one for each type \(\ell\), \(\ell\in\{1,\cdots,K_{L}\}\), are introduced to the mode process; we refer to a type-\(\ell\) agent which is introduced at mode-index \(n\) as _type-\(\ell\) agent \(n\)_. A type-\(\ell\) agent \(n\) observes (estimated) mode realizations in the future sequence \(\{\xi_{n+1},\xi_{n+2},\cdots\}\) and accumulates rewards at a rate which is inversely proportional to the probability of the transitions it aims to observe, starting with some arbitrary _initial reward_ \(c_{\ell}\in\mathbb{R}\). Writing \(\mathbf{\gamma}_{\ell}\triangleq(m_{1},m_{2})\circ\mathbf{\psi}_{k}\): if \(\varphi_{n}=m_{1}\), type-\(\ell\) agent \(n\) aims to observe the event \(\{\xi_{n+1:n+L+1}=(m_{2})\circ\mathbf{\psi}_{k}\}\). Otherwise, if \(\varphi_{n}\neq m_{1}\), type-\(\ell\) agent \(n\) aims to observe the event \(\{\xi_{n+1:n+L}=\mathbf{\psi}_{k}\}\).
**Remark 5**.: It becomes necessary to distinguish the occurrence time of a pattern \(\mathbf{\psi}_{k}\) from that of an augmented pattern \(\mathbf{\gamma}_{\ell}\triangleq(m_{1},m_{2})\circ\mathbf{\psi}_{k}\). We define \(\tau_{\ell|n}^{a}\) and \(\tau_{n}^{a}\) to be the versions of (3) and (4) for \(\mathbf{\gamma}_{\ell}\in\Gamma\).
**Remark 6**.: Due to the stationarity of \(\{\xi_{n}\}\), the distributions of \(\tau_{k|n_{1}}-n_{1}\) and \(\tau_{k|n_{2}}-n_{2}\) are equivalent for each \(k\in\{1,\cdots,K\}\), and any mode-indices \(n_{1},n_{2}\in\mathbb{N}\), such that \(\varphi_{n_{1}}=\varphi_{n_{2}}\). Likewise, the distributions of \(\tau_{n_{1}}-n_{1}\) and \(\tau_{n_{2}}-n_{2}\) are equivalent. For notation simplicity in the following presentation, we remove the subscript \(n\in\mathbb{N}\) in all variables, and use the above stationarity property to shift mode-indices to \(n=0\) in variables such that the current mode is given by \(\varphi_{0}\) instead of \(\varphi_{n}\). Furthermore, we apply the shorthand notation to Definitions 5 and 6 such that \(\tau_{k}\equiv\tau_{k|0}\) and \(\tau\equiv\tau_{0}\); the notation for the augmented patterns (Remark 5) follows similarly as \(\tau_{\ell}^{a}\equiv\tau_{\ell|0}^{a}\) and \(\tau^{a}\equiv\tau_{0}^{a}\).
Figure 4: A visualization of the ending strings and agent-reward construction using the setup of Example 1. The red box marks the current mode-index \(n\in\mathbb{N}\), and each of the three sequences demonstrate the three different types of ending strings which terminate the mode process in the sense of Definition 9. The grey rectangles hide future modes which have not occurred because of termination. For the last case where \(\mathbf{\gamma}_{13}\) terminates the mode process as a later-ending string, type-13 agents at mode-indices \(1,2,\cdots,\tau_{3}-5\) are shown. By the reward construction of Definition 10, type-13 agent \(\tau_{3}-5\) is the only agent who receives a nonzero reward.
**Definition 11** (Ending String Probabilities).: Define \(\mathbb{P}(\mathbf{\beta}_{s})\) to be the probability that an ending string \(\mathbf{\beta}_{s}\in\mathcal{S}\) terminates the mode process \(\{\xi_{n}\}\) in the sense of Definition 9. For initial-ending strings \(\mathbf{\beta}_{s}\in\mathcal{S}_{I}\), explicitly denoted as \((\beta_{1},\cdots,\beta_{b_{s}})\) with length \(b_{s}\in\mathbb{N}\), we get \(\mathbb{P}(\mathbf{\beta}_{s})=P[\varphi_{0},\beta_{1}]\prod_{j=1}^{b_{s}-1}P[\beta_{j},\beta_{j+1}]\). We demonstrate how to compute \(\mathbb{P}(\mathbf{\beta}_{s})\) for later-ending strings \(\mathbf{\beta}_{s}\in\mathcal{S}_{L}\) in the following Sec. 5.2, as part of solving Problem 1.
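For an initial-ending string, the probability above is simply a product of entries of the (estimated) TPM along the string; a small sketch.

```python
import numpy as np

def prob_initial_ending(P_hat, phi0, beta):
    """P(beta) = P[phi0, beta_1] * prod_j P[beta_j, beta_{j+1}] for an initial-ending string."""
    path = (phi0,) + tuple(beta)
    return float(np.prod([P_hat[a, b] for a, b in zip(path, path[1:])]))
```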
**Definition 12** (Gain Matrix).: Let \(\mathbf{\beta}_{s}\in\mathcal{S}\) be an ending string which is explicitly denoted as \(\mathbf{\beta}_{s}\triangleq(\beta_{1},\cdots,\beta_{b_{s}})\in\mathcal{S}\) with length \(b_{s}\in\mathbb{N}\). Further let augmented pattern \(\mathbf{\gamma}_{\ell}\in\Gamma\) be associated with original pattern \(\mathbf{\psi}_{k}\in\Psi\), i.e., \(\mathbf{\gamma}_{\ell}\triangleq(m_{1},m_{2})\circ\mathbf{\psi}_{k}\) for some \(m_{1},m_{2}\in\mathcal{X}\). Then the _total gain_ \(W_{s\ell}\) accumulated over all type-\(\ell\) agents from observing (partial) occurrences of \(\mathbf{\gamma}_{\ell}\) in \(\mathbf{\beta}_{s}\) is given by \(W_{s\ell}\triangleq\sum_{i=1}^{\min(b_{s}-1,L+1)}D_{i}^{(1)}(\mathbf{\beta}_{s},\mathbf{\gamma}_{\ell})+\sum_{i=1}^{\min(b_{s}-1,L)}D_{i}^{(2)}(\mathbf{\beta}_{s},\mathbf{\gamma}_{\ell})\), with \(D_{i}^{(1)}\) and \(D_{i}^{(2)}\) defined based on the reward strategy from Definition 10. First,

\[D_{i}^{(1)}(\mathbf{\beta}_{s},\mathbf{\gamma}_{\ell})\triangleq\left(P[m_{1},m_{2}]P[m_{2},\psi_{k,1}]\prod_{j=2}^{i-1}P[\psi_{k,j-1},\psi_{k,j}]\right)^{-1}\]

if \(\beta_{b_{s}-i}=m_{1}\), \(\beta_{b_{s}-i+1}=m_{2}\), and \(\beta_{b_{s}-i+j}=\psi_{k,j-1}\) for all \(j\in\{2,\cdots,i\}\); else, \(D_{i}^{(1)}(\mathbf{\beta}_{s},\mathbf{\gamma}_{\ell})=0\). Second,

\[D_{i}^{(2)}(\mathbf{\beta}_{s},\mathbf{\gamma}_{\ell})\triangleq\left(P[\beta_{b_{s}-i},\psi_{k,1}]\prod_{j=2}^{i}P[\psi_{k,j-1},\psi_{k,j}]\right)^{-1}\]

if \(\beta_{b_{s}-i}\neq m_{2}\) and \(\beta_{b_{s}-i+j}=\psi_{k,j}\) for all \(j\in\{1,\cdots,i\}\); else, \(D_{i}^{(2)}(\mathbf{\beta}_{s},\mathbf{\gamma}_{\ell})=0\). A _gain matrix_ \(W\in\mathbb{R}^{(K_{I}+K_{L})\times K_{L}}\) is constructed with entries \(W_{s\ell}\) for each pair of \(\mathbf{\beta}_{s}\in\mathcal{S}\) and \(\mathbf{\gamma}_{\ell}\in\Gamma\).
**Definition 13** (Cumulative Net Reward).: The expected _type-\(\ell\) cumulative net reward_ over all type-\(\ell\) agents by mode-index \(\tau\) is defined \(\mathbb{E}[R_{\tau}^{(\ell)}]\triangleq c_{\ell}\left([\mathbb{P}(\mathbf{\beta}_{1}),\cdots,\mathbb{P}(\mathbf{\beta}_{K_{I}+K_{L}})]W_{\cdot\ell}-\mathbb{E}[\tau]\right)\), where the \(\mathbb{P}(\mathbf{\beta}_{s})\) are the probabilities from Definition 11 and \(W_{\cdot\ell}\) denotes the \(\ell\)th column of the gain matrix (see Definition 12). Correspondingly, the _cumulative net reward_ over all agents by mode-index \(\overline{n}\) is defined as \(R_{\overline{n}}\triangleq\sum_{\ell=1}^{K_{L}}R_{\overline{n}}^{(\ell)}\), and

\[\mathbb{E}[R_{\tau}]\!=\![\mathbb{P}(\mathbf{\beta}_{1})\cdots\mathbb{P}(\mathbf{\beta}_{K_{I}+K_{L}})]W\mathbf{c}\!-\!\left(\sum_{\ell=1}^{K_{L}}c_{\ell}\right)\mathbb{E}[\tau] \tag{8}\]

where \(\mathbf{c}\triangleq[c_{1},\cdots,c_{K_{L}}]^{\top}\) are the initial rewards (Definition 10).
### Solving the Pattern-Occurrence Problem
We are now ready to use our construction to present our main results, which address the questions in Problem 1.
**Theorem 1** (Expected Time of Occurrence).: Denote \(\tau\) as in Remark 6 with (estimated) current mode \(\varphi_{0}\) for the collection \(\Psi\) from Definition 2 and corresponding augmented collection \(\Gamma\). Then
\[\mathbb{E}[\tau]=\frac{1}{\sum\limits_{\ell=1}^{K_{L}}c_{\ell}^{*}}\!\left[\left(1-\sum\limits_{s=1}^{K_{I}}\mathbb{P}(\mathbf{\beta}_{s})\right)+\sum\limits_{s=1}^{K_{I}}\mathbb{P}(\mathbf{\beta}_{s})\sum\limits_{\ell=1}^{K_{L}}W_{s\ell}c_{\ell}^{*}\right] \tag{9}\]
where \(\mathbf{\gamma}_{\ell}\in\Gamma\), \(\mathbf{\beta}_{s}\in\mathcal{S}\), \(\mathbb{P}(\mathbf{\beta}_{s})\) is from Definition 11, \(W\) is from Definition 12, and \(\mathbf{c}^{*}\in\mathbb{R}^{K_{L}}\) is the vector of initial rewards (see Definition 10) chosen such that \(\sum_{\ell=1}^{K_{L}}W_{s\ell}c_{\ell}^{*}=1\) for all \(s\in\{K_{I}+1,\cdots,K_{I}+K_{L}\}\).
Proof.: Because the Markov chain is irreducible and finite-state, \(\mathbb{E}[\tau_{\ell}^{a}]<\infty\) for each \(\tau_{\ell}^{a}\) defined in Remark 5. Note that \(\tau_{k}=\min_{\mathbf{\gamma}_{\ell}\in\Gamma_{k}}\tau_{\ell}^{a}\), where \(\Gamma_{k}\) is the subset of \(\Gamma\) containing augmented patterns \(\mathbf{\gamma}\triangleq(m_{1},m_{2})\circ\mathbf{\psi}_{k}\) corresponding to original pattern \(\mathbf{\psi}_{k}\in\Psi\). We have that \(\tau\triangleq\min_{k}\tau_{k}\), and by Definition 6, we also have \(\mathbb{E}[\tau]<\infty\). By the construction of the gain matrix \(W\) and the fact that linear combinations of martingales are martingales, both \([R_{n\wedge\tau_{\ell}^{a}}^{(\ell)}]_{n\in\mathbb{N}}\) and \([R_{n\wedge\tau}]_{n\in\mathbb{N}}\) are martingales. This implies that \(\mathbb{E}[R_{\tau_{\ell}^{a}}^{(\ell)}]<\infty\) since \(\mathbb{E}[\tau_{\ell}^{a}]<\infty\). Furthermore, \(\mathbb{E}[R_{\tau}]<\infty\) because \(\tau\leq\tau_{\ell}^{a}\) for all \(\ell\). Define the set \(\Omega_{n}^{(\ell)}\triangleq\{\omega\in\Omega\,|\,n<\tau_{\ell}^{a}\}\). By Doob's martingale convergence theorem and the triangle inequality, \(\lim_{n\to\infty}\int_{\Omega_{n}^{(\ell)}}|R_{n\wedge\tau_{\ell}^{a}}^{(\ell)}|\,d\mathbb{P}=0\), so the optional stopping theorem applies and \(\mathbb{E}[R_{\tau}]=0\). Setting (8) to zero, choosing the initial rewards \(\mathbf{c}^{*}\) so that \(\sum_{\ell=1}^{K_{L}}W_{s\ell}c_{\ell}^{*}=1\) for every later-ending string \(s\in\{K_{I}+1,\cdots,K_{I}+K_{L}\}\), and using \(\sum_{s=1}^{K_{I}+K_{L}}\mathbb{P}(\mathbf{\beta}_{s})=1\), rearranging for \(\mathbb{E}[\tau]\) yields (9).
of Definition 9. Clearly, \(\mathbb{P}(\mathbf{\psi}_{k}\,|\,\mathbf{\beta}_{s})=1\) if \(\beta_{b_{s}-L+1:b_{s}}=\mathbf{\psi}_{k}\) holds, otherwise it is 0. We thus obtain the desired equation.
**Remark 7**: _In order to fit the closed-form expressions of Theorems 1 and 2 into the original architecture described throughout Sec. 3, we unsimplify the notation from Remark 4 and Remark 6 for general time \(t\!\in\!\mathbb{N}\) and corresponding mode-index \(n\!\triangleq\!N[t]\). This yields the original time-dependent pattern-occurrence quantities desired in Problem 1. Namely, with estimated current mode \(\hat{\varphi}_{n}^{(t)}\) and TPM \(\hat{P}^{(t)}\), the estimated expected minimum occurrence time \(\mathbb{E}[\hat{\tau}_{n}^{(t)}]\) is the \(\mathbb{E}[\tau]\) computed from Theorem 1, while the estimated first occurrence probabilities \(\{\hat{q}_{k}^{(t)}\}\) are the \(\{q_{k}\}\) computed from Theorem 2._
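Assuming the gain matrix \(W\) (Definition 12) and the ending-string probabilities have been assembled as arrays, evaluating (9) amounts to solving a small linear system for \(\mathbf{c}^{*}\) over the later-ending rows and substituting; a sketch.

```python
import numpy as np

def expected_occurrence_time(W, p_beta, K_I):
    """Evaluate (9): pick c* so that the later-ending rows of W satisfy W c* = 1,
    then combine with the initial-ending string probabilities p_beta[:K_I]."""
    W_later = W[K_I:, :]                              # K_L x K_L block of later-ending rows
    c_star, *_ = np.linalg.lstsq(W_later, np.ones(W_later.shape[0]), rcond=None)
    p_init = p_beta[:K_I]
    gains_init = W[:K_I, :] @ c_star                  # sum_l W_{s,l} c*_l for initial strings
    return ((1.0 - p_init.sum()) + p_init @ gains_init) / c_star.sum()
```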
## 6 Control Law Design
In this section, we tie the pattern-occurrence quantities developed in Sec. 5 into our choice of implementation for the Control Law Design component. One well-known control method that explicitly incorporates predictions is _model predictive control (MPC)_, and so we use principles similar to MPC to schedule control policies in advance (see Proposition 1). For the purposes of our dynamic topology network case study in Sec. 7, we also discuss non-predictive Control Law Design using the novel system level synthesis approach (Wang et al., 2018; Anderson et al., 2019), including a topology-robust version (Han, 2020) and a data-driven version (Xue and Matni, 2021; Alonso et al., 2022).
### Incorporating Predictions
We implement a table \(\mathcal{U}\) that maps patterns of interest to the optimal control sequences we designed for them in our experiment so far (see Proposition 2); this also includes explicit state and control trajectories. This implementation was inspired by _episodic memory_(Lengyel and Dayan, 2007) which can be added to learning-based control methods (e.g., reinforcement learning) to recall specific experiences and their rewards (Blundell et al., 2016). Our table \(\mathcal{U}\) is implemented according to Proposition 2 and its entries are updated in two ways: 1) the control law is updated in an entry for an existing pattern, or 2) a new entry is created for a newly-observed pattern \(\psi\) at time \(t\), where \(\psi\!\in\!\Psi[t+1]\) but \(\psi\!\notin\!\Psi[t]\). We describe the control law synthesis and update procedures in the following Sec. 6.2.
For the prediction component, we specifically recall model predictive control (MPC). Standard MPC for discrete-time linear dynamics seeks to predict a future sequence of controls \(\{\mathbf{u}[t],\mathbf{u}[t+1],\cdots,\mathbf{u}[t+H]\}\) which minimizes some cost functional at each timestep \(t\!\in\!\mathbb{N}\), for some prediction horizon \(H\!\in\!\mathbb{N}\). Once the first control input \(\mathbf{u}[t]\) is applied to the system, the procedure is repeated at the next time \(t+1\). Although intuitive, incorporating both short-term and long-term predictions into online control has been proven to be beneficial, even when the system to be controlled is perturbed by either random or adversarial disturbances (Chen et al., 2015); in Yu et al. (2020), this is demonstrated explicitly with the linear quadratic regulator. For concreteness, we are inspired by the methods of Park and Kwon (2002) and Lu et al. (2013), which discuss MPC for MJS, and we extend their approaches to our setting from Sec. 2.
We remark that \(H\), like prediction horizon \(L\) for the mode process, is a user-chosen hyperparameter; one reasonable choice could be to make it time-varying and set it equal to \(\Delta T-(t-T_{N[t]})\) at each \(t\). Given the estimated current mode \(m\!\triangleq\!\hat{\phi}_{N[t]}^{(t)}\), the cost function we seek to optimize is the following mode-dependent quadratic cost function:
\[J(t,m) \triangleq\sum_{s=t}^{H}(\mathbf{x}[s]^{\top}Q(m)\mathbf{x}[s]+ \mathbf{u}[s]^{\top}R(m)\mathbf{u}[s])\] \[\qquad\quad+\mathbf{x}[H]^{\top}Q_{f}(m)\mathbf{x}[H] \tag{11}\]
The main distinction is that the prediction part of MPC is done on the estimated mode process instead of the system dynamics. Let \(t\!\in\!\mathbb{N}\) and \(n\!\triangleq\!N[t]\), and suppose the consistent set narrowing approach of Sec. 3.1 estimates the current mode to be \(\hat{\phi}_{n}^{(t)}\). Again, by Assumption 1, there are at most \(\Delta T-1\) state and control observations \(\mathbf{x}[T_{n}\!:\!t]\) and \(\mathbf{u}[T_{n}\!:\!t]\) associated with each mode \(\varphi_{n}\). Thus, for the control input \(\mathbf{u}[t]=K(t,\hat{\phi}_{n}^{(t)})\mathbf{x}[t]\) at time \(t\), the gain \(K(t,\hat{\phi}_{n}^{(t)})\in\mathbb{R}^{n_{u}\times n_{x}}\) associated specifically with mode \(\hat{\phi}_{n}^{(t)}\) can be designed using standard linear optimal control tools such as LQR minimization.
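As a concrete illustration of this last step, the sketch below computes finite-horizon LQR gains for a single estimated mode via the standard backward Riccati recursion; the matrices play the roles of \(A(m)\), \(B\), \(Q(m)\), \(R(m)\), and \(Q_{f}(m)\) in (11), and the numerical values are hypothetical.

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, H):
    """Finite-horizon discrete-time LQR for one mode.

    Returns gains K[0..H-1] such that u[s] = -K[s] x[s] minimizes
    sum_s (x'Qx + u'Ru) + x[H]'Qf x[H] subject to x[s+1] = A x[s] + B u[s].
    (The text writes u = K x; absorb the sign into K if preferred.)
    """
    P = Qf
    K = [None] * H
    for s in reversed(range(H)):
        K[s] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K[s]
    return K

# Hypothetical 2-state, 1-input example for the currently estimated mode.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = lqr_gains(A, B, np.eye(2), np.eye(1), np.eye(2), H=10)
u0 = -K[0] @ np.array([1.0, 0.0])   # control applied at the current step
```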
### System Level Synthesis
For the purposes of this paper (especially for our case study in Sec. 7), we employ the novel _system level synthesis_ (SLS) (Wang et al., 2018; Anderson et al., 2019) approach for distributed disturbance-rejection in linear discrete-time network systems with static topologies \(\mathcal{G}\!\triangleq\!(\mathcal{V},\mathcal{E})\), expressed as
\[\mathbf{x}[t+1]=A\mathbf{x}[t]+B\mathbf{u}[t]+\mathbf{w}[t] \tag{12}\]
The standard state-feedback control law for systems of this form is given by \(\mathbf{u}[t]=K\mathbf{x}[t]\) and in \(z\)-transform expression, the resulting closed-loop system is given by \(\mathbf{x}=(zI-A-BK)^{-1}\mathbf{w}\). However, for large-scale systems (i.e., large-dimensional matrices \(A\) and \(B\)), optimizing over the transfer function \((zI-A-BK)^{-1}\) by solving for \(K\) is difficult. Thus, a key feature of SLS is that it reparametrizes the control problem: instead of designing just the open-loop feedback gain \(K\), SLS designs for the entire closed-loop system via response maps \(\mathbf{\Phi}\!\triangleq\!\{\mathbf{\Phi}_{x},\mathbf{\Phi}_{u}\}\) such that \(\mathbf{x}[0\!:\!t]\!=\!\mathbf{\Phi}_{x}\mathbf{w}[0\!:\!t]\) and \(\mathbf{u}[0\!:\!t]\!=\!\mathbf{\Phi}_{u}\mathbf{w}[0\!:\!t]\), where \(\mathbf{w}[t]\) is an additive external disturbance.
**Lemma 1**: _For the linear, discrete-time static dynamics (12), the following are true. First, the affine subspace described by_
\[\begin{bmatrix}I-Z\hat{A}&-Z\hat{B}\end{bmatrix}\begin{bmatrix}\mathbf{\Phi}_{ x}\\ \mathbf{\Phi}_{u}\end{bmatrix}=I \tag{13}\]
_parametrizes all possible system responses \(\mathbf{\Phi}\), where \(\hat{A}\triangleq\text{blkdiag}(A,\cdots,A,\mathbf{0})\in\mathbb{R}^{Hn_{x}\times Hn_{x}}\), \(\hat{B}\) is defined similarly, \(Z\) is the block-downshift operator, \(n_{x}\!\in\!\mathbb{N}\) is the state dimension, and \(H\!\in\!\mathbb{N}\) is a chosen finite horizon over which control is performed. Second, for any \(\mathbf{\Phi}\) which satisfies the condition in (13), the feedback gain \(K\!\triangleq\!\mathbf{\Phi}_{u}\mathbf{\Phi}_{x}^{-1}\) achieves the desired internally-stabilizing system response._
The state-feedback controller is then implemented with:
\[\hat{\mathbf{x}}[t] =\sum_{s=2}^{H}\Phi_{x}[s]\hat{\mathbf{w}}[t+1-s],\ \hat{\mathbf{w}}[t]= \mathbf{x}[t]-\hat{\mathbf{x}}[t]\] \[\mathbf{u}[t] =\sum_{s=1}^{H}\Phi_{u}[s]\hat{\mathbf{w}}[t+1-s] \tag{14}\]
where \(\hat{\mathbf{w}}\) is the controller's _internal state_ and \(\hat{\mathbf{x}}\) is the controller's estimate of the state. This form also makes SLS more suitable for distributed and localized control law design in large-scale linear systems, and so \(\mathbf{\Phi}\) is often implemented as \(\mathbf{\Phi}^{(i)}\triangleq\{\Phi_{x}^{(i)}[s],\Phi_{u}^{(i)}[s]\}\) for each node \(i\!\in\!\mathcal{V}\) and its local subsystem \(\mathcal{L}_{i,h}\). Here, \(s\!\in\!\{1,\cdots,H\}\) is the index of the _spectral component_, \(\mathcal{L}_{i,h}\) is the set of all \(j\!\in\!\mathcal{V}\) which are within \(h\!\in\!\mathbb{N}\) edges of \(i\), and \(h\) is some number of _hops_. Both the time horizon \(H\) and the number of neighboring hops \(h\) are parameters chosen by design based on properties such as the scale and topology of \(\mathcal{G}\).
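A minimal sketch of how the response maps are applied online via (14) is given below. The spectral components are assumed to have been designed already (e.g., by solving (13)), we assume they are stored in dictionaries keyed by the spectral index \(s=1,\cdots,H\) with \(\Phi_{x}[1]=I\), and the function name is our own.

```python
import numpy as np

def sls_controller_step(x_t, w_hat_hist, Phi_x, Phi_u):
    """One step of the SLS state-feedback controller in (14).

    x_t         : current state x[t], shape (n_x,).
    w_hat_hist  : list of past internal states [w_hat[t-1], w_hat[t-2], ...], newest first.
    Phi_x, Phi_u: dicts mapping spectral index s = 1..H to matrices, with Phi_x[1] = I.
    Returns (u_t, updated w_hat_hist).
    """
    H = max(Phi_x)
    # x_hat[t] = sum_{s=2}^{H} Phi_x[s] w_hat[t+1-s]
    x_hat = sum(Phi_x[s] @ w_hat_hist[s - 2]
                for s in range(2, H + 1) if s - 2 < len(w_hat_hist))
    x_hat = np.zeros_like(x_t) if np.isscalar(x_hat) else x_hat
    w_hat_t = x_t - x_hat                    # internal state w_hat[t]
    w_hat_hist = [w_hat_t] + w_hat_hist      # prepend the newest internal state
    # u[t] = sum_{s=1}^{H} Phi_u[s] w_hat[t+1-s]
    u_t = sum(Phi_u[s] @ w_hat_hist[s - 1]
              for s in range(1, H + 1) if s - 1 < len(w_hat_hist))
    return u_t, w_hat_hist[:H]               # keep only the H most recent internal states
```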
We can also extend SLS to account for dynamic topologies \(\mathcal{G}(m)\triangleq(\mathcal{V},\mathcal{E}(m))\) for \(m\!\in\!\mathbb{N}\) representing the index of the topology; this was done in Han (2020). Let \(\mathbf{\Phi}_{m}^{(i)}\!\triangleq\!\{\Phi_{x,m}^{(i)}[s],\Phi_{u,m}^{(i)}[s]\}\) define the \(i\)th local response map \(\mathbf{\Phi}^{(i)}\) which is created specifically for topology \(m\!\in\!\{1,\cdots,M\}\). As we demonstrate for our case study in Sec. 7, the mode in our original dynamics (1) corresponds to the index of the current topology the system is in. _Topology-robust SLS_ essentially attempts to design a single \(\{\Phi_{x},\Phi_{u}\}\) response that can simultaneously stabilize multiple topologies (i.e., distinct \(A\) matrices). Conditions for _simultaneous stabilization_ for a collection of discrete-time LTI systems have been studied extensively in past literature: some results (e.g., Blondel et al. (1993)) express the condition by ensuring that the closed-loop transfer function between every possible plant-controller pair does not have any pole-zero cancellations, while others (e.g., Cao et al. (1999)) derive conditions based on the algebraic Riccati equation. To keep our discussion focused, we do not state these conditions here (see Remark 3).
To be able to use PLP with the SLS approach, we require a formulation of SLS which is driven by data. Towards that end, we leverage _data-driven SLS_(Xue and Matni, 2021; Alonso et al., 2022), which extends traditional SLS using a characterization based on Willems' fundamental lemma (Willems and Polderman, 1997), which parametrizes state and input trajectories based on past trajectories under the conditions of persistence of excitation. Define the Hankel matrix
\[\hat{H}_{r}(\mathbf{x}[0:H])\triangleq\begin{bmatrix}\mathbf{x}[0]&\mathbf{x }[1]&\cdots&\mathbf{x}[H-r]\\ \mathbf{x}[1]&\mathbf{x}[2]&\cdots&\mathbf{x}[H-r+1]\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{x}[r-1]&\mathbf{x}[r]&\cdots&\mathbf{x}[H-1]\end{bmatrix}\]
for finite time horizon \(H\) and some \(r\!\in\!\mathbb{N}\). We say the finite-horizon state trajectory \(\mathbf{x}[0:H]\) is _persistently-exciting_ of order \(r\) if \(\hat{H}_{r}(\mathbf{x}[0:H])\) is full rank. In the data-driven formulation of SLS, the achievable subspace described by (13) can be equivalently written as the set
\[\left\{\begin{bmatrix}\hat{H}_{H}(\mathbf{x}[0:H])\\ \hat{H}_{H}(\mathbf{u}[0:H])\end{bmatrix}G\ \text{s.t.}\ \hat{H}_{1}(\mathbf{x}[0:H])G=I\right\} \tag{15}\]
Now, let \(n\!\in\!\mathbb{N}\) and \(n^{\prime}\!\in\!\mathbb{N}\), \(n^{\prime}\!>\!n\), be such that at times \(T_{n}\) and \(T_{n^{\prime}}\), the system (1) has switched to the same mode \(m\!\in\!\mathcal{X}\). For our PLP approach, the state/control trajectories \(\{\mathbf{x}[T_{n-1}:T_{n}-1],\mathbf{u}[T_{n-1}:T_{n}-1]\}\) and \(\{\mathbf{x}[T_{n^{\prime}-1}:T_{n^{\prime}}-1],\mathbf{u}[T_{n^{\prime}-1}:T_{n^{\prime}}-1]\}\) can be collectively used to design the optimal control law for mode \(m\), i.e., we use horizon \(T_{n-1}\!:T_{n}-1\) in place of \([0:H]\) in (15). To implement memory, we store (in \(\mathcal{U}\)) previous trajectories of the system corresponding to the same mode, and continue to append to it as the simulation progresses. To apply Proposition 1, SLS is run more than once to compute a new \(\mathbf{\Phi}\) for every new estimated mode \(m\triangleq\hat{\varphi}_{n}^{(t)}\), hence the dependence of \(\mathbf{\Phi}_{m}^{(i,t)}\) on time \(t\!\in\!\mathbb{N}\). By Proposition 2, the \(\mathbf{\Phi}_{m}^{(i,t)}\) are stored and updated over time in the table \(\mathcal{U}\), then used to compute \(\mathbf{u}[t]\) via (14).
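The bookkeeping behind (15) only requires forming Hankel matrices from logged trajectories and checking persistency of excitation. A minimal sketch with our own helper names is given below; the random trajectory is a stand-in for the logged data \(\mathbf{x}[T_{n-1}\!:\!T_{n}-1]\).

```python
import numpy as np

def block_hankel(traj, r):
    """Block-Hankel matrix of depth r from a trajectory whose rows are x[0], x[1], ..."""
    Hlen, n = traj.shape
    cols = Hlen - r + 1
    rows = [np.column_stack([traj[i + j] for j in range(cols)]) for i in range(r)]
    return np.vstack(rows)                      # block (i, j) equals x[i + j]

def persistently_exciting(traj, r):
    """True if the depth-r block-Hankel matrix has full (row) rank."""
    Hr = block_hankel(traj, r)
    return np.linalg.matrix_rank(Hr) == Hr.shape[0]

# Hypothetical logged states; a Gaussian trajectory is generically persistently exciting.
rng = np.random.default_rng(0)
x_traj = rng.standard_normal((50, 4))
assert persistently_exciting(x_traj, r=10)
# One particular G satisfying H_1(x) G = I (when the rank condition holds) is the pseudoinverse;
# stacking the state and input Hankel matrices and multiplying by G then parametrizes (15).
G = np.linalg.pinv(block_hankel(x_traj, 1))
```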
## 7 Case Study: Topology-Switching Network
Controlling networks that undergo parametric and/or topological changes (e.g., due to faults or connectivity changes of mobile agents) is an important and widely-studied problem in large-scale networked systems. In the recent literature, an adaptive, consensus-based control scheme for complex networks with time-varying, switching network topology was discussed in Chung et al. (2013). Distributed target-detection and tracking using a dynamic sensor network was studied in Bandyopadhyay and Chung (2018), while Saboori and Khorasani (2015) described fault-tolerance against actuator failures in a multiagent system connected by a switching topology network.
For the purposes of this paper, we demonstrate the proposed controller architecture to the following extension of (1), which switches among a finite number of different topologies \(\mathcal{G}(m)\triangleq(\mathcal{V},\mathcal{E}(m)),m\in\{1,\cdots,M\}\), \(M\!\in\!\mathbb{N}\).
\[\mathbf{x}_{i}[t+1] =A_{ii}(\xi_{N[t]})\mathbf{x}_{i}[t]\] \[\quad+\sum_{j\in N_{i}(\xi_{N[t]})}A_{ij}(\xi_{N[t]})\mathbf{x}_ {j}[t]+B_{i}\mathbf{u}[t]+\mathbf{w}_{i}[t] \tag{16}\]
Here, \(n_{s}\!\triangleq\!|\mathcal{V}|\), \(i\!\in\!\{1,\cdots,n_{s}\}\), the neighboring nodes of subsystem \(i\) are \(\mathcal{N}_{i}(m)\!\triangleq\!\{j\!\in\!\mathcal{V}\!:(i,j)\!\in\!\mathcal{E}(m)\}\), and \(A(m)\triangleq\!\left[A_{ij}(m)\right]\!\in\!\mathbb{R}^{n_{s}\times n_{s}}\) for each topology \(m\!\in\!\{1,\cdots,M\}\). The assumptions from Sec. 2 still hold, and the mode process \(\{\xi_{n}\}\) is the index of the current topology at time \(t\!\in\!\mathbb{N}\), with \(N[t]\) being the number of topology changes made by time \(t\).
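For concreteness, one step of (16) can be simulated as sketched below; the per-node matrices, the neighbor lists, and the function name are illustrative placeholders supplied by the user.

```python
import numpy as np

def network_step(x, u, A_m, B_blocks, w, neighbors_m):
    """One step of the topology-switching dynamics (16) for the current topology index m.

    x           : list of per-node state vectors x_i[t].
    u           : shared input u[t].
    A_m         : dict (i, j) -> A_ij(m), including the diagonal blocks (i, i).
    B_blocks    : list of per-node input matrices B_i.
    w           : list of per-node disturbances w_i[t].
    neighbors_m : dict i -> list of neighbors N_i(m) under topology m.
    """
    x_next = []
    for i in range(len(x)):
        xi = A_m[(i, i)] @ x[i] + B_blocks[i] @ u + w[i]
        for j in neighbors_m[i]:
            xi = xi + A_m[(i, j)] @ x[j]
        x_next.append(xi)
    return x_next
```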
### Experiment Setup
The overall control objective is to minimize the mode-dependent quadratic cost function (11) subject to constraints imposed by various implementations of SLS from Sec. 6.2. Namely, we consider three versions of the controller architecture Fig. 1; a visual distinction among the three is shown in Fig. 5.
* **Baseline** [_First row of Fig. 5_]: here, Fig. 1 is implemented only using Mode Process ID; both PLP and MPC are not used. The Control Law Design component is implemented with the basic SLS approach from Sec. 6.2. We minimize the cost (11) subject to the achievability constraint described by (13) and the locality constraint described with the sets
\(\{\mathcal{L}_{i,h}\}_{i\in\mathcal{V}}\). Because the topology changes over time and basic SLS is not designed for time-varying topologies, this requires the optimization to be solved multiple times.
* **Topology-Robust** [_Second row of Fig. 5_]: we have the same architecture as above, but SLS is replaced with the method of Han (2020), an extension of SLS to network dynamics under time-varying topological changes. A single common control law \(\boldsymbol{\Phi}^{(i,t)}\) is designed for all consistent modes in \(\mathbb{C}[t]\), and this common law is used until time \(t^{*}>t\) when \(|\mathbb{C}[t^{*}]|=1\), after which standard SLS is used.
* **PLP** [_Third row of Fig. 5_]: we combine the original architecture proposed by Fig. 1 with the extended SLS approach described in Sec. 6.2. We minimize the cost (11) subject to the data-driven achievability constraint described by (15) and the locality constraint described with the sets \(\{\mathcal{L}_{i,h}\}_{i\in\mathcal{V}}\). Given pattern collection \(\Psi[t]\) at time \(t\in\mathbb{N}\) and mode-index \(n\triangleq N[t]\), if \(\boldsymbol{\psi}\triangleq(\psi_{1},\cdots,\psi_{L})\in\Psi[t]\) is expected to occur at mode-index \(n+\mathbb{E}[\hat{\tau}_{n}^{(t)}]\in\mathbb{N}\), the control law for node \(i\in\mathcal{V}\) is scheduled to be \(\boldsymbol{\Phi}_{m}^{(i,s)}\), where \(m=\psi_{1}\) and \(s\in[T_{n+\mathbb{E}[\hat{\tau}_{n}^{(t)}]},t^{*})\), where \(T_{n}\) is defined in Assumption 1 and \(t^{*}\) is the first time after \(T_{n+\mathbb{E}[\hat{\tau}_{n}^{(t)}]}\) when \(|\mathbb{C}[t^{*}]|=1\). For times \(s\in[t,\,T_{n+\mathbb{E}[\hat{\tau}_{n}^{(t)}]})\) where a prediction is not available, we revert to the baseline controller.
The three architectures are each tested on two specific network systems of the form given in (16). For both systems, the specific \(A\) and \(B\) matrices in (16) are the linearized discrete-time power grid dynamics given in Sec. 5 of Han (2020), which we do not repeat here for the sake of brevity.
* **(Small-Scale) Hexagon System**: the network system (16) is a small hexagonal arrangement of nodes; its possible topologies and the underlying topology-transition Markov chain are shown in Fig. 6. When PLP is included, the collection \(\Psi[t]\) accumulates every feasible mode sequence of length \(L\), so every ending string in \(\mathcal{S}\) is an initial-ending string, \(\mathbb{E}[\hat{\tau}_{n}^{(t)}]=L\) for each \(t\in\mathbb{N}\), \(n\triangleq N[t]\), and determining \(\operatorname*{argmax}_{k}[\hat{q}_{k}^{(t)}]\) reduces to a maximum likelihood problem.
* **(Large-Scale) Rectangular Grid System**: the network system (16) consists of a \(10\times 10\) rectangular grid arrangement of \(n_{s}=100\) nodes and \(M=20\) topologies (see Fig. 7). The true TPM is a \(M\times M\) stochastic matrix with no self-transitions. When PLP is included, the collection \(\Psi[t]\) is constructed with strict subset in (2), which means the formulas from Theorems 1 and 2 must be used to solve Problem 1.
Even though the Control Law Design component of all three architectures is localized and distributed by the nature of SLS, we initially assume Mode Process ID and PLP are centralized. This is reasonable under Assumption 1, which imposes that communications among subsystems are much faster compared to the switching of the topologies. This is often the case in fault-tolerance for large-scale network applications such as the power grid and the internet, where faults are expected to occur rarely. Furthermore, in Sec. 7.3, we introduce the implementation of localized, distributed Mode Process ID and PLP. For simplification of terminology in this section only, we overload the terminology "PLP" to refer to both the controller with PLP (third row of Fig. 5) and a component of the controller architecture in Fig. 1 that leverages other algorithms, with the understanding that PLP truly refers to the latter.
### Tradeoff Comparison Results
Each simulation is run by applying one of the three controller architectures to one of the two network systems. We run a total of 20 Monte-Carlo experiment trials and each trial
Figure 5: The time-varying control law for each of the three versions of the controller architecture, designed based on the estimated mode \(\hat{x}_{n}^{(t)}\) and the consistent set \(\mathbb{C}[t]\). Each horizontal bar represents a time duration of length \(\Delta T\). The baseline uses the previous law until the consistent set converges to a singleton set (white sub-bars). Topology-Robust is able to control multiple modes simultaneously, so it uses a robust law (red sub-bars) until the convergence. PLP (future horizon \(L=3\)) uses the law corresponding to the predicted next mode (blue sub-bars) until convergence; note that when the mode in the converged consistent set is equivalent to the predicted next mode, the control policy need not be changed.
Figure 6: [Left] The different possible topologies of the Hexagon System. [Right] The underlying Markov chain for topology transitions.
Figure 7: The different possible topologies of the \(10\times 10\) Rectangular Grid System.
is run for \(T_{\text{sim}}\!=\!400\) timesteps with \(\Delta T\!=\!10\). The PLP architecture also uses a future horizon of \(L\!=\!3\). A sample trajectory of the states and control versus time for all three architectures is shown in Fig. 8 for the hexagon system; we reduce the time horizon to 80 timesteps for this figure only so that there is better clarity in distinguishing the lines. Because the objective is to reject external disturbances, the state values waver around the zero line. Moreover, under Topology-Robust, the state has the smallest oscillations around zero (green), followed by PLP (red), and finally the baseline (blue). A sample evolution of the consistent set narrowing approach applied for Mode Process ID is also shown in Fig. 9 for the baseline and PLP architectures; again, we plot for a shorter horizon of time (120 timesteps) for easier visibility. The PLP architecture manages to successfully narrow the consistent set down to a singleton within the \(\Delta T\) time interval more often than the baseline, and consequently also manages to track the true mode more precisely.
The comparisons among the different scenarios are performed by evaluating one of the following four performance metrics. First, to measure the control effort, an LQR-like cost (17a) is averaged over the simulation time \(T_{\text{sim}}\). Second, to measure the disturbance-rejection performance, we consider the time-average error norm (17b). Third, we measure the proportion (17c) of the simulation duration in which the matching control law is used to control the current topology. Here, if the true mode is given by \(\varphi_{n}\) at time \(t\), we say that the _matching control law_ \(\{\mathbf{\Phi}_{m}^{(i,t)}:i\in\mathcal{V}\}\) is used if \(m\!\triangleq\!\hat{\varphi}_{n}^{(t)}\!=\!\varphi_{n}\). Fourth, the total runtime is recorded.
\[\frac{1}{T_{\text{sim}}}\sum_{t=1}^{T_{\text{sim}}}\mathbf{x}[t]^ {\top}I_{n_{x}}\mathbf{x}[t]+\mathbf{u}[t]^{\top}I_{n_{u}}\mathbf{u}[t] \tag{17a}\] \[\frac{1}{T_{\text{sim}}}\sum_{t=1}^{T_{\text{sim}}}\|\mathbf{x}[t ]\|_{2}\] (17b) \[\frac{1}{T_{\text{sim}}}\sum_{t=1}^{T_{\text{sim}}}\mathds{1}\{ \hat{\varphi}_{n}^{(t)}=\varphi_{n}\} \tag{17c}\]
where \(I_{n_{x}}\) and \(I_{n_{u}}\) are identity matrices with appropriate dimensions.
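These time-averaged metrics can be evaluated directly from logged trajectories; the sketch below assumes the states, controls, and the true and estimated mode sequences have been stacked into arrays, with the variable names being ours.

```python
import numpy as np

def performance_metrics(X, U, true_modes, est_modes):
    """Time-averaged metrics (17a)-(17c).

    X, U       : arrays of shape (T_sim, n_x) and (T_sim, n_u).
    true_modes : length-T_sim sequence of the true mode at each time step.
    est_modes  : length-T_sim sequence of the estimated mode at each time step.
    """
    lqr_cost = np.mean(np.sum(X**2, axis=1) + np.sum(U**2, axis=1))   # (17a), identity weights
    err_norm = np.mean(np.linalg.norm(X, axis=1))                      # (17b)
    prop_match = np.mean(np.asarray(true_modes) == np.asarray(est_modes))  # (17c)
    return lqr_cost, err_norm, prop_match
```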
The metrics (17) are further averaged over 20 Monte-Carlo simulations with varying initial condition \(\mathbf{x}_{0}\), noise process \(\mathbf{w}[t]\), and true realization \(\{\varphi_{n}\}\) of the mode process \(\{\xi_{n}\}\). The results are tabulated in Table 2, with the architecture names abbreviated: 'Base' for the baseline and 'TR' for Topology-Robust. The proportion of time the matching control law is used is irrelevant for Topology-Robust because it computes a single law to be used for multiple topologies, hence the '\(-\)' entries. We also plot a sample evolution of \(\left\|P\!-\!\hat{P}^{(t)}\right\|\) for one Monte-Carlo trial in Fig. 10, where the norm taken is the Frobenius norm. Because \(\hat{P}^{(0)}\) begins with uniform probabilities in the nonzero positions, there are some variations in the norm difference, but overall, the curve decreases with time, indicating convergence to within a small error ball of the true TPM. This also allows for the pattern-occurrence quantities to be solved more accurately, which improves the prediction performance of PLP. As Table 2 shows, this also enables better controller performance (LQR Cost and Error Norm) of PLP over the other two architectures.
Figure 8: States and control versus time for one Monte-Carlo trial in the hexagon system. We abbreviate the baseline controller as ‘Base’, and Topology-Robust as ‘TR’.
Figure 10: Frobenius norm of the difference between the true TPM \(P\) and estimated TPM \(\hat{P}^{(t)}\) versus time for one Monte-Carlo trial of the hexagon system.
Figure 9: Modes versus time for one Monte-Carlo trial of the hexagon system. In the bottom subfigure, black vertical lines indicate intervals of length \(\Delta T\).
The values in both sub-rows of the 'LQR Cost' row in Table 2 suggest that the time-average LQR cost of all three controller architectures increases as the scale of the system gets larger. This is expected because the same values of horizon \(H\) and number of hops \(h\) (defined in Sec. 6.2) were chosen for the SLS implementation of both systems. In practice, \(H\) and \(h\) must be adjusted as the scale of the system changes, but for fairer comparison we use the same values for both the hexagon and grid systems. Furthermore, assuming a small margin of error, Topology-Robust should theoretically stabilize the system better than the baseline at the expense of increased control effort because Topology-Robust uses a single common law for multiple different modes. This can be validated empirically by the entries in the 'LQR Cost' and 'Error Norm' rows, and is also supported by Fig. 8, where the state's oscillations around the zero line are the largest in magnitude with the baseline and the least with Topology-Robust.
More interestingly, the PLP architecture manages to balance the performance metrics better compared to the other architectures: LQR cost similar to the baseline architecture, error norm similar to the Topology-Robust architecture, and runtime faster than either the baseline or the topology-robust extension. The improved runtime comes from the PLP component's ability to refrain from recomputing parts of the original SLS optimization by preserving the control inputs of previously-observed topologies and state/control trajectories (see Proposition 2). Moreover, the ability of PLP to predict the expected occurrence times of future mode patterns allows for the scheduling of SLS controllers in advance (see Proposition 1); as seen in Fig. 5, this improves the error norm when Pattern-Learning manages to predict the future mode correctly. The 'Prop. Match' row of Table 2 shows that this is indeed the case: the PLP architecture consistently uses the matching control law more often than the baseline regardless of network system. This is expected since PLP can be viewed as an additional mode estimation algorithm, and so the estimate \(\hat{\varphi}_{n}^{(t)}\) is on average better with PLP than without. In general, this suggests that appending PLP to a baseline controller that is neither predictive nor robust to time-varying topologies could be used as an alternative to Topology-Robust, especially in complex systems where simultaneous stabilization is not possible or is expensive.
We remark that the difference in the construction of the pattern collection \(\Psi[t]\) in the hexagon system versus the grid system also has a role in the relationship among the performance metrics, especially in the error-norm performance and the proportion of time the matching control law is used. Recall that for the hexagon system, \(\Psi[t]\) is created by accumulating every feasible mode sequence of length \(L\), which implies \(\mathbb{E}[\hat{\tau}_{n}^{(t)}]=L\). In contrast, for the grid system, a random subset of feasible mode sequences is chosen per time \(t\), and so the formulas from Theorems 1 and 2 were used to solve Problem 1. In the PLP column of the 'Prop. Match' row, we see the matching control law is used less often in the grid system than the hexagon system, which is expected since \(\mathbb{E}[\hat{\tau}_{n}^{(t)}]\geq L\) for the grid system and predictions for a longer horizon of mode-indices become less accurate. Thus, increasing the number of patterns in the pattern collection decreases the expected minimum occurrence time, which yields more accurate estimates of future modes. The Base and PLP columns in the 'Error Norm' row suggest that better predictions enable better disturbance-rejection; this implies that PLP will more closely resemble the error norm of the baseline when fewer patterns are included in \(\Psi[t]\).
### Localized Pattern-Learning and Prediction
Table 2 shows that performance deteriorates with larger scale, and this can be attributed to the fact that both Pattern-Learning and Mode Process ID are implemented in a centralized fashion, which conflicts with the localized, distributed nature of SLS. We now briefly discuss an extension of PLP to a localized, distributed implementation. Since the previous section already compared the performance of PLP to those of the controllers without PLP, we focus our discussion here on how the localized implementation of PLP compares to the centralized version.
Let current time be \(t\in\mathbb{N}\) and \(n\triangleq N[t]\). Based on information from its own local subsystem (16), each node \(i\in\mathcal{V}\) stores and updates three objects: a) its own estimates of the current mode \(\hat{\varphi}_{n}^{(i,t)}\) and TPM \(\hat{P}^{(i,t)}\) (computed via Sec. 3.1), b) its own estimates of the pattern-occurrence quantities \(\mathbb{E}[\hat{\tau}^{(i,t)}]\), \(\{\hat{q}_{k}^{(i,t)}\}_{k=1}^{K}\) (computed via Sec. 3.2 and 5.2), and c) its own pattern collection \(\Psi^{(i)}[t]\) and pattern-to-control law table \(\mathcal{U}^{(i)}\) (see Sec. 3.3). Each node \(i\in\mathcal{V}\) employs the consistent set narrowing approach of (6) to update its own set \(\mathcal{C}^{(i)}[t]\) of consistent topologies over time \(t\). Each subsystem \(i\in\mathcal{V}\) then extracts \(\hat{\varphi}_{n}^{(i,t)}\), \(\mathcal{C}^{(i)}[t]\), and estimates \(\hat{P}^{(i,t)}\) by empirically counting the proportion of transitions across the entire estimated past history \(\hat{\varphi}_{0:n}^{(i,t)}\). For the TPMs, we also implement consensus averaging of the estimates to neighboring subsystems that are one link away, similar to the method of Sec. 4 in Han (2020). Overall, the key distinction is that we add an additional enumeration \(i\in\mathcal{V}\) to the usual sets, tables, and estimated quantities from Sec. 4 and Sec. 5 to emphasize that each subsystem maintains local estimates of everything.
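A minimal sketch of the per-node TPM update (empirical transition counts followed by one round of consensus averaging with one-hop neighbors) is given below; the helper names and the uniform prior for unseen modes are our own illustrative choices.

```python
import numpy as np

def empirical_tpm(mode_history, M):
    """Row-normalized transition counts from one node's estimated mode history."""
    counts = np.zeros((M, M))
    for a, b in zip(mode_history[:-1], mode_history[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)
    P_hat = counts / safe
    P_hat[counts.sum(axis=1) == 0] = 1.0 / M   # uniform rows for modes never visited
    return P_hat

def consensus_average(P_hats, neighbors):
    """One consensus step: each node averages its TPM estimate with its one-hop neighbors."""
    return [np.mean([P_hats[j] for j in [i] + list(neighbors[i])], axis=0)
            for i in range(len(P_hats))]
```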
The estimated pattern-occurrence quantities for this localized extension of PLP applied to the hexagon system are shown in Fig. 11. To demonstrate the evolution of the pattern-occurrence quantities over time, each subsystem \(i\)'s pattern col
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Metric / Controller & Base & TR & PLP \\ \hline \hline LQR Cost & 36.4537 & 42.5596 & 34.7242 \\ & 445.8137 & 472.1195 & 442.1264 \\ \hline Error Norm & 2.2146 & 1.5546 & 1.5236 \\ & 6.1294 & 5.9453 & 5.8244 \\ \hline Prop. Match & 0.4304 & – & 0.615 \\ & 0.1533 & – & 0.16 \\ \hline Runtime & 11.8314 & 67.2254 & 2.2689 \\ & 101.3741 & X & 38.5824 \\ \hline \end{tabular}
\end{table}
Table 2: The average performance metrics [row] over 20 Monte-Carlo simulations of \(T_{\text{sim}}=400\) timesteps, for each pair of controller architecture [column]. In each cell, the top value is recorded for the hexagon system and the bottom is for the grid system. For space, we abbreviate ‘Base’ as the baseline controller, and ‘TR’ as Topology-Robust.
lection \(\Psi^{(i)}[t]\) is chosen to contain more than half of the full combinatorial set of feasible length-\(L\) mode sequences initially considered in Sec. 7.2, such that the true value of \(\mathbb{E}[\hat{\tau}^{(i,t)}]\) is 5.83328 via Theorem 1. The evolution of the estimated minimum occurrence time \(\mathbb{E}[\hat{\tau}^{(i,t)}]\) over \(t\) is shown at the top, while the Frobenius norm difference \(\|P-\hat{P}^{(i,t)}\|\) of the TPM estimate is shown at the bottom. We use varying groups of subsystems for these figures in order to demonstrate the locality property.
Note in Fig. 11 that, as time increases, the estimates \(\mathbb{E}[\hat{\tau}^{(i,t)}]\) tend to converge towards the true value 5.83328 as more of the TPM gets learned. The piecewise nature arises because the pattern collection \(\Psi^{(i)}[t]\) may change over time, which in turn changes each subsystem's estimate of the expected minimum occurrence time. At the bottom of Fig. 11, the matrix norm difference between the true and estimated TPMs for each of the three subsystems decreases over time, which is expected as each subsystem gathers more data to learn the true transition probabilities of \(P\). Compared to the centralized TPM estimate evolution over time (Fig. 10), there is more rapid variation in each subsystem's estimate in the bottom figure of Fig. 11; this could be attributed to the consensus averaging among the subsystems. Viewing topologies at a local level can make the modes look similar to one another, and so a localized implementation of consistent set narrowing may perform worse than the centralized implementation. This is a well-known tradeoff between centralized and distributed control: for more efficient computation, we are trading performance optimality.
## 8 Conclusion
_Pattern-learning for prediction (PLP)_ learns patterns in the behavior of stochastic uncertain systems to make controller design efficient by memorizing patterns to prevent the re-computation of the control laws associated with previously-occurred patterns (see Proposition 2) and by scheduling of control laws associated with patterns that may occur in the future (see Proposition 1). In this paper, we aimed to demonstrate the advantages of including PLP in an otherwise straightforward controller architecture (which borrows techniques from system identification and predictive control) for a class of linear MJS whose underlying mode-switching dynamics are unknown; here, the aforementioned patterns are recurrent finite-length sequences of modes which arise in the MJS. Our controller architecture consists of three parts. First, Mode Process ID (Sec. 3.1 and 4) identifies the unknown statistics of the mode process. Second, PLP (Sec. 3.2 and 5) uses the estimated statistics of the mode process to compute the pattern-occurrence quantities from Problem 1: the expected minimum occurrence time of any pattern from a user-defined pattern collection, and the probability of a pattern being the first to occur among the collection. The computation of the pattern-occurrence quantities uses martingale methods from the literature with two key extensions that make it more applicable to the real-world: 1) the distribution of the mode process is unknown, and 2) the mode process is not observable; closed-form expressions of the quantities are derived in Theorems 1 and 2. Third, Control Law Design (Sec. 3.3 and 6) computes the optimal control action corresponding to each pattern when it first occurs. We implement PLP on a fault-tolerant controller of a network with dynamic topology by integrating the pattern-occurrence quantities into MPC and using variations of SLS (Sec. 6.2) for the Control Law Design component. We provide an empirical comparison study of its performance against a baseline controller and a topology-robust extension of the baseline. Because PLP can be viewed as an additional mode estimation algorithm, it enables the estimated mode to match the true mode more often, although this is mainly possible for an optimal choice of pattern collection. Compared to the baseline, PLP is able to achieve better disturbance-rejection at reduced computation time, redundancy, and control cost, which suggests its potential to be used in place of a robust controller for more complex applications where designing for robustness is expensive. The merit of our work can be summarized as follows: computation-efficient control design for stochastic systems with uncertain dynamics can be performed by learning patterns in the system's behavior, which eliminates redundancy by storing past patterns into memory and predicting the future occurrence of patterns.
## Acknowledgments
The authors would like to thank John Brader and Benjamin Bycroft of the Aerospace Corporation for their technical inputs.
|
2301.07667 | Alignment-based optically pumped magnetometer using a buffer gas cell | Alignment-based optically pumped magnetometers (OPMs) are capable of
measuring oscillating magnetic fields with high sensitivity in the fT/sqrt(Hz)
range. Until now, alignment-based magnetometers have only used paraffin-coated
vapour cells to extend the spin relaxation lifetimes of the alkali vapour. The
drawback of these cells is that they are hand-blown and are therefore
time-intensive, and somewhat unreliable, to produce. Buffer gas cells, on the
other hand, can be manufactured on a mass scale using microfabrication
techniques. We present the first demonstration of an alignment-based
magnetometer using a buffer gas vapour cell containing caesium (Cs) alkali
vapour and nitrogen (N2) buffer gas. The OPM is operated at 55 degrees C and we
achieve a 325 fT/sqrt(Hz) sensitivity to 10 kHz oscillating magnetic fields
with an 800 Hz bandwidth. The alignment-based magnetometer uses a single laser
beam for optical pumping and probing and could potentially allow for more rapid
commercialisation of radio-frequency OPMs, due to the robustness of the
one-beam geometry and the potential for mass-scale microfabrication of buffer
gas cells. | L. M. Rushton, L. Elson, A. Meraki, K. Jensen | 2023-01-18T17:23:51Z | http://arxiv.org/abs/2301.07667v1 | # Alignment-based optically pumped magnetometer using a buffer gas cell
###### Abstract
Alignment-based optically pumped magnetometers (OPMs) are capable of measuring oscillating magnetic fields with high sensitivity in the fT/\(\sqrt{\text{Hz}}\) range. Until now, alignment-based magnetometers have only used paraffin-coated vapour cells to extend the spin relaxation lifetimes of the alkali vapour. The drawback of these cells is that they are hand-blown and are therefore time-intensive, and somewhat unreliable, to produce. Buffer gas cells, on the other hand, can be manufactured on a mass scale using microfabrication techniques. We present the first demonstration of an alignment-based magnetometer using a buffer gas vapour cell containing caesium (Cs) alkali vapour and nitrogen (N\({}_{2}\)) buffer gas. The OPM is operated at 55\({}^{\circ}\)C and we achieve a 325 fT/\(\sqrt{\text{Hz}}\) sensitivity to 10 kHz oscillating magnetic fields with an 800 Hz bandwidth. The alignment-based magnetometer uses a single laser beam for optical pumping and probing and could potentially allow for more rapid commercialisation of radio-frequency OPMs, due to the robustness of the one-beam geometry and the potential for mass-scale microfabrication of buffer gas cells.
## I Introduction
Optically pumped magnetometers (OPMs) [1; 2; 3] based on spin-polarized atoms (e.g. alkali atoms such as caesium (Cs) or rubidium) can measure magnetic fields with high sensitivity in the fT/\(\sqrt{\text{Hz}}\) range [4; 5; 6; 7]. Current commercial OPMs [8; 9; 10] are operated close to zero magnetic field in the spin-exchange relaxation-free (SERF) regime measuring one, two or three components of the magnetic field, or in the Earth's field as scalar magnetometers measuring the total magnetic field amplitude. These OPMs use one or two beams of circularly polarized light generated from a single laser diode inside the OPM, making the sensors compact and robust. The circularly polarized light effectively generates spin-orientation along the light propagation (i.e., the atomic spins point in a certain direction) which responds to magnetic fields and can be measured by detecting the transmitted light. When detecting oscillating magnetic fields in the kHz-MHz frequency range, radio-frequency (RF) OPMs [11; 12; 13; 14; 15] must be used. One type of RF OPM using only a single laser beam is the alignment-based magnetometer [16; 17; 18], which uses linearly polarized light capable of effectively aligning the atoms in the direction perpendicular to its propagation. As a result, as the RF field affects such alignment being created, its presence can be sensed directly by measuring properties of the same beam.
High sensitivity optical magnetometry requires a long atomic spin-coherence time. This can be achieved using vapour cells coated on the inside with an anti-relaxation coating (e.g. paraffin), such that the moving alkali atoms can bounce off the inner glass walls of the vapour cell many times without losing their spin-coherence [19; 20]. Alternatively, a long coherence time can be achieved by filling the vapour cell with buffer gas (e.g. N\({}_{2}\)). Rapid collisions between the buffer gas atoms and the alkali atoms make the alkali atoms diffuse slowly, which mitigates the effects of spin-destroying wall collisions. Alkali vapour cells for magnetometry are typically hand-blown, however buffer gas cells for magnetometry can be produced on a mass scale using microfabrication techniques [21; 22]. Such microfabrication techniques have not, as of yet, been compatible with anti-relaxation coating.
So far, alignment-based optical magnetometry has been demonstrated using hand-blown, anti-relaxation coated cells [16; 17]. The presence of buffer gas leads to pressure broadening of the alkali vapour absorption spectrum, reducing the light-atom coupling and affecting the optical pumping preparing the aligned state. The buffer gas N\({}_{2}\) is also a quenching gas [23] which causes the alkali atoms not to de-excite via spontaneous emission. Rapid collisional mixing in the excited state [23] also occurs in buffer gas cells, but not in paraffin-coated cells. We show here that, despite these complexities, it is possible to realise an alignment-based magnetometer using a buffer gas cell. We experimentally demonstrate an alignment-based magnetometer using a Cs alkali vapour and 65 Torr N\({}_{2}\) buffer gas cell with a sensitivity of 325 fT/\(\sqrt{\text{Hz}}\) to oscillating magnetic fields at 10 kHz. We also demonstrate an alignment-based magnetometer with a paraffin-coated cell placed in the same experimental setup to verify the methods and for comparison. Our results open up the possibility for miniaturisation [15; 24] and commercialisation of RF OPMs, with potential impact in areas such as medical physics [25; 26; 27], remote sensing [28; 24] and non-destructive testing [29; 30].
## II Alignment-based optical magnetometry
The theory underpinning the alignment-based magnetometer [12; 17; 18; 2] will now be revised and discussed. Consider atoms with a \(F=1\to F^{\prime}=0\) optical transition with ground-state sublevels \(|F,m\rangle=\{|1,1\rangle,|1,0\rangle,|1,-1\rangle\}\) and an excited state \(|F^{\prime},m^{\prime}\rangle=|0^{\prime},0^{\prime}\rangle\) as shown in Fig. 1(a). A single laser beam is used for optical pumping and probing of the atoms. Assume the light is linearly (\(\pi\)) polarized along the direction of a static magnetic field \(B_{0}\hat{\mathbf{z}}\). In this case, the atoms will be optically pumped into the \(m=\pm 1\) sublevels with equal probability, creating a so-called "spin-aligned state". This is a dark state, such that with perfect optical pumping, the light will be fully transmitted through the atomic vapour. Now assume further that there is a transverse oscillating (RF) magnetic field
which we would like to detect. That RF field will affect the optical pumping and thereby the transmitted light which can be detected by measuring its intensity or polarization.
The total Hamiltonian \(\hat{H}\) which describes the system is given by
\[\hat{H}=\hat{H}_{0}+\hat{H}_{l}+\hat{H}_{B}, \tag{1}\]
where \(\hat{H}_{0}\), \(\hat{H}_{l}\) and \(\hat{H}_{B}\) are the unperturbed, light-atom interaction and magnetic field-atom interaction Hamiltonians, respectively. The unperturbed Hamiltonian \(\hat{H}_{0}\) written in the overall basis \(\{\left|1,1\right\rangle\equiv\left|1\right\rangle,\left|1,0\right\rangle \equiv\left|0\right\rangle,\left|1,-1\right\rangle\equiv\left|-1\right\rangle, \left|0^{\prime},0^{\prime}\right\rangle\equiv\left|0^{\prime}\right\rangle\}\) is
\[\hat{H}_{0}=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&\hbar\omega_{0}\end{pmatrix} \tag{2}\]
where \(\hbar\) is the reduced Planck constant, \(\omega_{0}=2\pi c/\lambda\) is the optical transition frequency, \(\lambda\) its wavelength and \(c\) the speed of light. The light-atom interaction is governed by
\[\hat{H}_{l}=-\mathbf{E}\cdot\hat{\mathbf{d}}, \tag{3}\]
where \(\mathbf{\hat{d}}\) is the dipole operator and \(\mathbf{E}=E_{0}\cos(\omega t)\mathbf{\hat{z}}\) is the electric field of the light. The light-atom interaction Hamiltonian \(\hat{H}_{l}\) is
\[\hat{H}_{l}=\hbar\Omega_{R}\cos\omega t\begin{pmatrix}0&0&0&0\\ 0&0&0&-1\\ 0&0&0&0\\ 0&-1&0&0\end{pmatrix}, \tag{4}\]
where \(\Omega_{R}=\langle 1||d||0^{\prime}\rangle\,E_{0}/(\sqrt{3}\hbar)\) is the Rabi frequency and \(\langle 1||d||0^{\prime}\rangle\) is the transition dipole matrix element. Assuming \(B_{x}=B_{\mathrm{RF}}\cos\omega_{\mathrm{RF}}t\), \(B_{y}=0\) and \(B_{z}=B_{0}\), the magnetic field-atom interaction is given by
\[\hat{H}_{B} =-\hat{\mu}\cdot\mathbf{B}=\frac{g_{F}\mu_{B}}{\hbar}(\hat{F}_{x}B_{ x}+\hat{F}_{z}B_{z}) \tag{5}\] \[=g_{F}\mu_{B}\begin{pmatrix}B_{0}&\frac{B_{\mathrm{RF}}\cos\omega _{\mathrm{RF}}t}{\sqrt{2}}&0&0\\ \frac{B_{\mathrm{RF}}\cos\omega_{\mathrm{RF}}t}{\sqrt{2}}&0&\frac{B_{ \mathrm{RF}}\cos\omega_{\mathrm{RF}}t}{\sqrt{2}}&0\\ 0&\frac{B_{\mathrm{RF}}\cos\omega_{\mathrm{RF}}t}{\sqrt{2}}&-B_{0}&0\\ 0&0&0&0\end{pmatrix}.\]
Here \(\hat{\mu}=g_{F}\mu_{B}(\hat{F}_{x}\hat{\mathbf{x}}+\hat{F}_{y}\hat{\mathbf{y}}+\hat{F}_{z}\hat{\mathbf{z}})/\hbar\) is the Cs atom's magnetic dipole operator, \(g_{F}\) is the hyperfine Lande g-factor [31], and \(\mu_{B}\) is the Bohr magneton. Defining the Larmor frequency \(\Omega_{L}=g_{F}\,\mu_{B}B_{0}/\hbar\) and letting the strength of the RF field be represented by \(\Omega_{\mathrm{RF}}=g_{F}\,\mu_{B}B_{\mathrm{RF}}/\hbar\), the total Hamiltonian \(\hat{H}=\hat{H}_{0}+\hat{H}_{B}+\hat{H}_{l}\) is
\[\hat{H}=\hbar\begin{pmatrix}\Omega_{L}&\frac{\Omega_{\mathrm{RF}}\cos\omega _{\mathrm{RF}}t}{\sqrt{2}}&0&0\\ \frac{\Omega_{\mathrm{RF}}\cos\omega_{\mathrm{RF}}t}{\sqrt{2}}&0&\frac{ \Omega_{\mathrm{RF}}\cos\omega_{\mathrm{RF}}t}{\sqrt{2}}&-\frac{\Omega_{R}\cos\omega t}{\sqrt{3}}\\ 0&\frac{\Omega_{\mathrm{RF}}\cos\omega_{\mathrm{RF}}t}{\sqrt{2}}&-\Omega_{L} &0\\ 0&-\frac{\Omega_{R}\cos\omega t}{\sqrt{3}}&0&\omega_{0}\end{pmatrix}. \tag{6}\]
Going to a rotating frame at the optical frequency \(\omega\), followed by going to another rotating frame at the RF frequency \(\omega_{\mathrm{RF}}\), then neglecting the fast oscillating terms using the rotating wave approximation and setting \(\Delta=\omega-\omega_{0}\), \(\Delta_{\mathrm{RF}}=\omega_{\mathrm{RF}}-\Omega_{L}\), the Hamiltonian \(\hat{H}\) in the rotating frame is
\[\hat{H}=\hbar\begin{pmatrix}-\Delta_{\mathrm{RF}}&\frac{\Omega_{\mathrm{RF}}} {2\sqrt{2}}&0&0\\ \frac{\Omega_{\mathrm{RF}}}{2\sqrt{2}}&0&\frac{\Omega_{\mathrm{RF}}}{2\sqrt{2 }}&-\frac{\Omega_{R}}{2\sqrt{3}}\\ 0&\frac{\Omega_{\mathrm{RF}}}{2\sqrt{2}}&\Delta_{\mathrm{RF}}&0\\ 0&-\frac{\Omega_{R}}{2\sqrt{3}}&0&-\Delta\end{pmatrix}. \tag{7}\]
Next the relaxation \(\hat{\Gamma}\) and repopulation \(\hat{\Lambda}\) matrices must be taken into account and are given by
\[\hat{\Gamma}=\begin{pmatrix}\gamma&0&0&0\\ 0&\gamma&0&0\\ 0&0&\gamma&0\\ 0&0&0&\gamma+\Gamma\end{pmatrix}, \tag{8}\]
\[\hat{\Lambda}=\begin{pmatrix}\frac{\gamma+\Gamma\hat{\rho}_{0^{\prime}0^{\prime}}}{3}&0&0&0\\ 0&\frac{\gamma+\Gamma\hat{\rho}_{0^{\prime}0^{\prime}}}{3}&0&0\\ 0&0&\frac{\gamma+\Gamma\hat{\rho}_{0^{\prime}0^{\prime}}}{3}&0\\ 0&0&0&0\end{pmatrix}, \tag{9}\]
where \(\hat{\rho}_{0^{\prime}0^{\prime}}\) is the population of the excited state and \(\Gamma\) is the decay rate of the excited state (for a paraffin-coated cell this decay is via spontaneous emission from the excited state). For a buffer gas cell, the excited atoms mainly decay via quenching, as discussed in the next section. Note that the model above only includes one excited state and therefore does not describe collisional mixing (between multiple excited states). The atoms also have a spin-coherence (or transverse relaxation) time \(T_{2}=1/\gamma\). In a buffer gas cell, the alkali atoms diffuse slowly due to collisions with the buffer gas, increasing \(T_{2}\). The spins relax when the alkali atoms hit the glass walls due to electron randomisation collisions or via spin-exchange or spin-destruction collisions between two alkali atoms [1; 32; 16]. In a paraffin-coated cell the alkali atoms can bounce off the walls thousands of times before spin relaxation occurs [19]. Power broadening due to laser light also reduces \(T_{2}\).
The Liouville equation for the density matrix \(\hat{\rho}\) in the rotating frame is given by
\[i\hbar\frac{\partial\hat{\rho}}{\partial t}=[\hat{H},\hat{\rho}]-i\hbar\frac{1 }{2}(\hat{\Gamma}\hat{\rho}+\hat{\rho}\hat{\Gamma})+i\hbar\hat{\Lambda}. \tag{10}\]
In the steady state \(d\hat{\rho}/dt=0\) and the right-hand-side of the equation can be solved to determine \(\hat{\rho}\) in the rotating frame. The density matrix is returned to the lab frame \(\hat{\rho}\) by using a transformation matrix. The polarisation \(\mathbf{P}=n\mathrm{Tr}(\hat{\rho}\hat{\mathbf{d}})\) of the atomic vapour can then be calculated, where \(n\) is the alkali atom number density. The formalism in Ref. [2] allows for the in-phase and out-of-phase rotations of a linearly polarised beam to be extracted [17; 18]. The expressions for the in-phase \(\partial\phi^{\mathrm{in}}/\partial l\) and quadrature \(\partial\phi^{\mathrm{out}}/\partial l\) values are [17]
\[\frac{\partial\phi^{\mathrm{in}}}{\partial l} =\frac{n\Delta_{\mathrm{RF}}\lambda^{2}\Omega_{\mathrm{RF}}(2 \gamma^{2}+8\Delta_{\mathrm{RF}}^{2}-\Omega_{\mathrm{RF}}^{2})\Omega_{R}^{2} }{36\pi\Gamma\gamma(\gamma^{2}+4\Delta_{\mathrm{RF}}^{2}+\Omega_{\mathrm{RF}} ^{2})[4(\gamma^{2}+\Delta_{\mathrm{RF}}^{2})+\Omega_{\mathrm{RF}}^{2}]}\] \[\approx\frac{n\lambda^{2}}{72\pi}\cdot\frac{\Omega_{R}^{2}}{ \Gamma}\cdot\Omega_{\mathrm{RF}}\cdot\frac{\Delta_{\mathrm{RF}}/\gamma}{ \Delta_{\mathrm{RF}}^{2}+\gamma^{2}}\quad\mathrm{for}\quad\Omega_{\mathrm{ RF}}^{2}\ll\gamma^{2}, \tag{11}\]
\[\frac{\partial\phi^{\mathrm{out}}}{\partial l} =\frac{n\lambda^{2}\Omega_{\mathrm{RF}}(4\gamma^{2}+16\Delta_{ \mathrm{RF}}^{2}+\Omega_{\mathrm{RF}}^{2})\Omega_{R}^{2}}{72\pi\Gamma(\gamma^ {2}+4\Delta_{\mathrm{RF}}^{2}+\Omega_{\mathrm{RF}}^{2})[4(\gamma^{2}+\Delta_ {\mathrm{RF}}^{2})+\Omega_{\mathrm{RF}}^{2}]}\] \[\approx\frac{n\lambda^{2}}{72\pi}\cdot\frac{\Omega_{R}^{2}}{ \Gamma}\cdot\Omega_{\mathrm{RF}}\cdot\frac{1}{\Delta_{\mathrm{RF}}^{2}+\gamma ^{2}}\quad\mathrm{for}\quad\Omega_{\mathrm{RF}}^{2}\ll\gamma^{2}, \tag{12}\]
where \(l\) is the length of the vapour cell. In the limit when \(\Omega_{\mathrm{RF}}^{2}\ll\gamma^{2}\), as is the case throughout this paper, \(\partial\phi^{\mathrm{in}}/\partial l\) and \(\partial\phi^{\mathrm{out}}/\partial l\) are proportional to the RF magnetic field amplitude \(B_{\mathrm{RF}}\propto\Omega_{\mathrm{RF}}\) and have dispersive- and absorptive-Lorentzian lineshapes, respectively, when varying the RF detuning \(\Delta_{\mathrm{RF}}\). The light polarization rotation is measured using a balanced photodetector and lock-in detection (at the RF frequency), yielding the lock-in outputs which can be written as
\[X \propto\frac{\partial\phi^{\mathrm{out}}}{\partial l}\propto B_{ \mathrm{RF}}\cdot\frac{1}{(\omega_{\mathrm{RF}}-\Omega_{L})^{2}+\gamma^{2}}, \tag{13}\] \[Y \propto\frac{\partial\phi^{\mathrm{in}}}{\partial l}\propto B_{ \mathrm{RF}}\cdot\frac{\left(\omega_{\mathrm{RF}}-\Omega_{L}\right)/\gamma}{ \left(\omega_{\mathrm{RF}}-\Omega_{L}\right)^{2}+\gamma^{2}},\] (14) \[R =\sqrt{X^{2}+Y^{2}}=|X+iY|\propto B_{\mathrm{RF}}\cdot\left|\frac{ 1+i\left(\omega_{\mathrm{RF}}-\Omega_{L}\right)/\gamma}{\left(\omega_{\mathrm{ RF}}-\Omega_{L}\right)^{2}+\gamma^{2}}\right|. \tag{15}\]
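The steady state of Eq. (10) can also be obtained numerically by vectorizing the right-hand side for the rotating-frame Hamiltonian (7) together with the relaxation and repopulation matrices (8)-(9). The sketch below is a minimal such solver; the function name and the parameter values are illustrative (in units where \(\hbar=1\)), not those of the experiment.

```python
import numpy as np

def steady_state_rho(Delta_rf, Delta, Omega_R, Omega_rf, gamma, Gamma):
    """Numerical steady state of Eq. (10) for the 4-level basis {|1>, |0>, |-1>, |0'>}."""
    s2, s3 = np.sqrt(2), np.sqrt(3)
    H = np.array([[-Delta_rf,        Omega_rf/(2*s2), 0,               0],
                  [Omega_rf/(2*s2),  0,               Omega_rf/(2*s2), -Omega_R/(2*s3)],
                  [0,                Omega_rf/(2*s2), Delta_rf,        0],
                  [0,               -Omega_R/(2*s3),  0,              -Delta]], dtype=complex)
    Gam = np.diag([gamma, gamma, gamma, gamma + Gamma]).astype(complex)

    def rhs(rho):
        # Eq. (9): ground sublevels refilled isotropically; writing the gamma term as
        # gamma*Tr(rho)/3 (equal to gamma/3 when Tr rho = 1) keeps the map linear in rho.
        fill = (gamma*np.trace(rho) + Gamma*rho[3, 3]) / 3
        Lam = np.diag([fill, fill, fill, 0])
        return -1j*(H @ rho - rho @ H) - 0.5*(Gam @ rho + rho @ Gam) + Lam

    # Build the 16x16 superoperator column by column, then solve with Tr(rho) = 1 appended.
    L = np.zeros((16, 16), dtype=complex)
    for k in range(16):
        E = np.zeros((4, 4), dtype=complex)
        E[k // 4, k % 4] = 1.0
        L[:, k] = rhs(E).reshape(16)
    A = np.vstack([L, np.eye(4, dtype=complex).reshape(1, 16)])
    b = np.zeros(17, dtype=complex)
    b[16] = 1.0
    rho_vec = np.linalg.lstsq(A, b, rcond=None)[0]
    return rho_vec.reshape(4, 4)

# Scanning Delta_rf and inspecting the ground-state coherences (e.g., rho[0, 2]) yields
# absorptive and dispersive lineshapes analogous to Eqs. (11)-(12).
rho = steady_state_rho(Delta_rf=0.5, Delta=0.0, Omega_R=1.0,
                       Omega_rf=0.05, gamma=1.0, Gamma=50.0)
```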
## III Caesium
### Optical pumping in a paraffin-coated cell
A caesium (Cs) atom [31] has two ground states with hyperfine quantum numbers \(F=3\) and \(F=4\) separated by the hyperfine splitting \(\nu_{\mathrm{hf}}=9.192\) GHz. The first excited states have \(F^{\prime}=3\) and \(F^{\prime}=4\) with a hyperfine splitting of 1.2 GHz. The optical transition of interest for this experiment is the Cs D1 \(F=4\to F^{\prime}=3\) transition using \(z\)-linearly polarised light (see Fig. 1b) with a wavelength around 895 nm. The optical pumping can be understood by determining the populations of the 16 magnetic sublevels of the Cs ground state using rate equations [33]. We will first consider a paraffin-coated cell, where the decay from the excited state is by spontaneous emission. An example of the rate of change of the population of the magnetic sublevel \(F=4,m=3\), \(dp_{4,3}/dt\), i.e., the diagonal element of the density matrix, is
\[\frac{dp_{4,3}}{dt} =R_{p}(-p_{4,3}c_{4,3\leftrightarrow 3^{\prime},3^{\prime}}+p_{4,2}c_{4,2\leftrightarrow 3^{ \prime},2^{\prime}}c_{4,3\leftrightarrow 3^{\prime},2^{\prime}} \tag{16}\] \[\quad+p_{4,3}c_{4,3\leftrightarrow 3^{\prime},3^{\prime}}c_{4,3\leftrightarrow 3^{ \prime},3^{\prime}})-\Gamma_{1}p_{4,3}+\Gamma_{1}/16,\]
where \(R_{p}\) is the optical pumping rate, \(p_{4,3}=p_{4,3}(t)\) is the population of the magnetic sublevel at time \(t\), and \(c_{4,2\leftrightarrow 3^{\prime},2^{\prime}}\) is the Clebsch-Gordon coefficient squared [31] for the \(\pi\) transition from \(F=4,m=2\leftrightarrow F^{\prime}=3,m^{\prime}=2\), for example. The longitudinal relaxation rate \(\Gamma_{1}\) was not measured experimentally in this work but is typically much smaller than the transverse relaxation rate \(\gamma\)[34]. The negative terms in Eq. 16 depopulate the magnetic sublevel and the positive terms repopulate the sublevel. The populations of the 16 magnetic sublevels in the \(F=3\) and \(F=4\) ground states in the steady state (\(dp/dt=0\)) are determined numerically and we plot these in Fig. 2a.
### Optical pumping in a buffer gas cell
If a buffer gas such as 65 Torr of N\({}_{2}\) is present in a Cs vapour cell without any paraffin coating, then the Cs atoms will mostly decay via quenching rather than via spontaneous emission [35; 23], as will now be shown. The many vibrational and rotational states of the quenching gas molecule, in this case N\({}_{2}\), mean that when a Cs atom in the excited state collides with a N\({}_{2}\) molecule, the Cs atom can de-excite without the emission of a photon, instead transferring its energy to the many vibrational and rotational modes of the N\({}_{2}\) molecule. The quenching rate \(R_{Q}\) is given by
\[R_{Q}=n_{Q}\sigma_{Q}v_{\mathrm{Cs,N_{2}}}, \tag{17}\]
where \(n_{Q}=P/(k_{B}T)=1.91\times 10^{24}\) m\({}^{-3}\) is the number density of N\({}_{2}\) molecules at \(T\sim 55^{\circ}\)C, \(P\) is the pressure, \(k_{B}\) is the
Boltzmann constant, \(\sigma_{Q}=5.5\times 10^{-19}\) m\({}^{2}\)[23] is the quenching gas cross-section for Cs and N\({}_{2}\) (at 100\({}^{\circ}\)C) and \(v_{\text{Cs,N}_{2}}=\sqrt{8k_{B}T/\pi M}=548\) m/s is the relative velocity between a Cs atom and N\({}_{2}\) molecule. The mass \(M=3.84\times 10^{-26}\) kg is the effective mass of a Cs atom and N\({}_{2}\) molecule, given by \(M=m_{\text{Cs}}m_{\text{N}_{2}}/\left(m_{\text{Cs}}+m_{\text{N}_{2}}\right)\). The quenching factor \(Q\) helps determine the dominant decay mechanism, whether by spontaneous emission (\(Q=1\)) or by quenching (\(Q=0\)), and is given by [23]
\[Q=\frac{1}{1+R_{Q}\tau_{\text{nat}}}. \tag{18}\]
Calculating \(R_{Q}=5.9\times 10^{8}\) s\({}^{-1}\) from the parameters stated above for Cs and 65 Torr N\({}_{2}\) and taking the natural lifetime of the D1 excited state to be \(\tau_{\text{nat}}=35\) ns, then \(Q=0.05\). This means that, for the 65 Torr N\({}_{2}\) buffer gas cell used in our experiments, the dominant de-excitation mechanism from the excited state is quenching. During quenching, the decay probabilities to the ground states are not governed by the Clebsch-Gordon coefficients. Instead the atoms decay with equal probability (1/16) to any of the \(F=3\) and \(F=4\) ground state magnetic sublevels. Crucially, though, the \(F=4,m=\pm 4\) states will still be dark states in the presence of N\({}_{2}\), a quenching gas. An example of a rate equation for the population \(dp_{4,3}/dt\) of the \(F=4,m=3\) magnetic sublevel is given by
\[\begin{split}\frac{dp_{4,3}}{dt}=&\ R_{p}(-p_{4,3}c _{4,3\leftrightarrow 3^{\prime},3^{\prime}}+\frac{1}{16}[p_{4,3}c_{4,3 \leftrightarrow 3^{\prime},3^{\prime}}\\ &+p_{4,2}c_{4,2\leftrightarrow 3^{\prime},2^{\prime}}+p_{4,1}c_{4, 1\leftrightarrow 3^{\prime},1^{\prime}}+p_{4,0}c_{4,0\leftrightarrow 3^{ \prime},0^{\prime}}\\ &+p_{4,-1}c_{4,-1\leftrightarrow 3^{\prime},-1^{\prime}}+p_{4,-2}c_{4, -2\leftrightarrow 3^{\prime},-2^{\prime}}\\ &+p_{4,-3}c_{4,-3\leftrightarrow 3^{\prime},-3^{\prime}}])- \Gamma_{1}p_{4,3}+\frac{\Gamma_{1}}{16}.\end{split} \tag{19}\]
The 16 rate equations are solved in the steady state and an illustrative example of optical pumping with a buffer gas is shown in Fig. 2(b). It is assumed that \(Q=0\), which is a safe assumption to make for the 65 Torr N\({}_{2}\) buffer gas cell in this paper. In the above we assumed that the excited states \(F^{\prime}=3\) and \(F^{\prime}=4\) are resolved such that the light is only resonant with the \(F=4\to F^{\prime}=3\) transition. We note, however, that the \(F^{\prime}=4\) excited state would need to be incorporated into the rate equations if the buffer gas pressure becomes significantly larger, as discussed in Sec. V.
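The steady-state populations of Fig. 2(b) can be reproduced with a few lines of code. The sketch below integrates the 16 ground-state rate equations of the form of Eq. (19) (with \(Q=0\), i.e., purely quenched decay), using SymPy's Clebsch-Gordan routine for the relative \(\pi\)-transition strengths; the parameter values and function name are illustrative.

```python
import numpy as np
from sympy.physics.quantum.cg import CG

# Squared Clebsch-Gordan coefficients c_{4,m <-> 3',m} for the pi transitions F=4 -> F'=3.
c_pi = {m: float(CG(4, m, 1, 0, 3, m).doit())**2 if abs(m) <= 3 else 0.0
        for m in range(-4, 5)}

def buffer_gas_populations(Rp, Gamma1, n_steps=100000, dt=2e-3):
    """Euler integration of the 16 ground-state rate equations with quenched (uniform) decay."""
    p4 = {m: 1/16 for m in range(-4, 5)}   # F=4 sublevels
    p3 = {m: 1/16 for m in range(-3, 4)}   # F=3 sublevels (not addressed by the light)
    for _ in range(n_steps):
        pumped = Rp * sum(p4[m] * c_pi[m] for m in p4)   # total excitation rate
        refill = pumped / 16 + Gamma1 / 16               # uniform repopulation of all 16 sublevels
        p4 = {m: p4[m] + dt*(-Rp*p4[m]*c_pi[m] - Gamma1*p4[m] + refill) for m in p4}
        p3 = {m: p3[m] + dt*(-Gamma1*p3[m] + refill) for m in p3}
    return p4, p3

# Gamma1 = Rp/20, as used for the populations plotted in Fig. 2.
p4, p3 = buffer_gas_populations(Rp=1.0, Gamma1=0.05)
```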
### Non-linear Zeeman splitting
A Cs atom in the \(F=4\) ground state has \(2F+1=9\) sublevels \(\left|F,m\right>\) which, when placed in a small magnetic field \(B_{0}\), have the energy \(E(m)=mh\nu_{L}\) due to the linear Zeeman effect. Here \(\nu_{L}\) is the Larmor frequency in Hz. That is to say, the splittings between neighbouring sublevels are all equal to the Larmor frequency \(\Delta\nu_{m,m-1}\equiv\left(E(m)-E(m-1)\right)/h=\nu_{L}\). In this case, a single magnetic resonance will be observed when sweeping the RF frequency \(\nu_{\text{RF}}\) (in Hz) across the Larmor frequency \(\nu_{L}\) and measuring the polarization rotation of the transmitted light (see Eq. 13, 14 and 15). However, at larger magnetic fields, the splittings between sublevels are slightly different due to the non-linear Zeeman effect. We calculate [31; 36; 34]
\[\Delta\nu_{m,m-1}=\nu_{L}-\delta\left(m-\frac{1}{2}\right), \tag{20}\]
Figure 2: Optical pumping from \(F=4\to F^{\prime}=3\) with \(\pi\)-polarised light. The populations of the \(F=3\) and \(F=4\) ground state magnetic sublevels in the steady state are plotted, with a longitudinal relaxation rate \(\Gamma_{1}=R_{p}/20\) for (a) a paraffin-coated cell where the dominant de-excitation mechanism from the excited state is spontaneous emission, and (b) a buffer gas cell where the dominant de-excitation mechanism is quenching.
where the non-linear Zeeman splitting (in Hz) is
\[\delta=\frac{2\nu_{L}^{2}}{\nu_{\rm hf}} \tag{21}\]
as illustrated in Fig. 1(b). In particular, the difference in transition frequencies between \(\Delta\nu_{4,3}\) and \(\Delta\nu_{-3,-4}\) is
\[\left|\Delta\nu_{4,3}-\Delta\nu_{-3,-4}\right|=7\delta. \tag{22}\]
In other words, at larger magnetic fields a total of 8 magnetic resonances should be observed when sweeping the RF field across the Larmor frequency with the outermost resonances split by \(7\delta\).
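As a numerical cross-check of Eqs. 20-22 (not part of the original analysis), the splitting can be evaluated directly. The only value assumed beyond those quoted in the text is the Cs ground-state hyperfine splitting \(\nu_{\rm hf}=9.192631770\) GHz, and the two Larmor frequencies are approximations to the operating points used in Secs. IV and V.

```python
# Non-linear Zeeman splitting for the Cs F=4 ground state (Eqs. 20-22).
nu_hf = 9.192631770e9   # Cs ground-state hyperfine splitting, Hz (assumed constant)

def delta(nu_L):
    """Quadratic-Zeeman splitting between neighbouring RF resonances, Eq. 21."""
    return 2.0 * nu_L**2 / nu_hf

def resonance(nu_L, m):
    """Transition frequency between sublevels m and m-1, Eq. 20."""
    return nu_L - delta(nu_L) * (m - 0.5)

# ~2.044 MHz: centre of the sweep in Sec. IV; ~2.93 MHz: approximate Larmor
# frequency of the buffer-gas measurement in Sec. V (assumed values).
for nu_L in (2.044e6, 2.93e6):
    spread = abs(resonance(nu_L, 4) - resonance(nu_L, -3))   # = 7*delta, Eq. 22
    print(f"nu_L = {nu_L/1e6:.3f} MHz: delta = {delta(nu_L):.0f} Hz, "
          f"7*delta = {spread/1e3:.2f} kHz")
```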
## IV Paraffin-coated cell
A schematic of the experimental setup is shown in Fig. 3(a). A diode laser system outputs light resonant with the \(F=4\to F^{\prime}=3\) Cs D1 transition (895 nm). The light is passed through an optical fiber and is collimated at its output. The linearly polarised light with an electric field amplitude \(E_{0}\mathbf{\hat{z}}\) then passes through a cubic (5 mm)\({}^{3}\) hand-blown paraffin-coated vapour cell (see Fig. 3(b)). The vapour cell is kept at room temperature (\(\sim 18.5^{\circ}\)C) and placed inside a magnetic shield (Twinleaf MS-1). Static \(B_{0}\mathbf{\hat{z}}\) and oscillating \(B_{\rm RF}\cos(2\pi\nu_{\rm RF})\mathbf{\hat{x}}\) magnetic fields can be applied using coils inside the magnetic shield. Here \(\nu_{\rm RF}\) is the RF frequency in Hz, while \(\omega_{\rm RF}=2\pi\nu_{\rm RF}\) is the RF frequency in rad/s. Polarimetry is then performed using a half-wave plate, a polarising beam splitter, and a balanced photodetector (Thorlabs PDB210A/M) to detect the polarization rotation of the transmitted light. The resultant photodetector voltage is demodulated at the RF frequency \(\nu_{\rm RF}\) using a lock-in amplifier (SR830) such that in-phase \(X\) and out-of-phase \(Y\) signals are obtained.
The optical pumping of an aligned state can be experimentally verified by exploiting the non-linear Zeeman effect. These measurements were done at a relatively large static magnetic field (\(B_{0}=5.84\) G) corresponding to a Larmor frequency close to 2 MHz. When the RF frequency was swept over the range 2.037-2.051 MHz, we observe a magnetic resonance spectrum with several peaks (see Fig. 4). The two largest peaks correspond to the transitions \(m=4\to m=3\) and \(m=-3\to m=-4\) with transition frequencies \(\Delta\nu_{4,3}\) and \(\Delta\nu_{-3,-4}\), respectively. The difference in transition frequencies \(\left|\Delta\nu_{4,3}-\Delta\nu_{-3,-4}\right|\) is experimentally found to be 6.38(0.02) kHz, agreeing with the value \(7\delta=6.37\) kHz calculated from Eqs. 21 and 22, confirming that we are observing the non-linear Zeeman splitting. This difference in transition frequencies was extracted by fitting the data of \(R\) in Fig. 4 to the function [34]
\[\mathrm{R}=\left|\sum_{m=-3}^{4}\frac{A_{m,m-1}\left[1+i\left(\nu_{\rm RF}- \nu_{m,m-1}\right)/\tilde{\gamma}\right]}{\left(\nu_{\rm RF}-\nu_{m,m-1} \right)^{2}+\tilde{\gamma}^{2}}\right|. \tag{23}\]
which is a sum of eight magnetic resonances with resonance frequencies \(\nu_{m,m-1}=\nu_{L}-\delta\left(m-\frac{1}{2}\right)\) and half width at half maximum (HWHM) \(\tilde{\gamma}=1/(2\pi T_{2})\) (in Hz) as seen by comparison with Eq. 15 and illustrated in Fig. 1. The data was fitted with seven free parameters: four amplitudes \(A_{4,3}\), \(A_{3,2}\), \(A_{2,1}\), \(A_{1,0}\) (as the magnetic resonance spectrum is symmetric such that \(A_{0,-1}=A_{1,0}\), \(A_{-1,-2}=A_{2,1}\), \(A_{-2,-3}=A_{3,2}\), \(A_{-3,-4}=A_{4,3}\)), the Larmor frequency \(\nu_{L}\), the non-linear Zeeman splitting \(\delta\), and the width \(\tilde{\gamma}\).
In total, the spectrum has eight peaks, although the middle two are hardly visible in Fig. 4 due to their smaller height. The heights of the individual peaks, corresponding to \(A_{m,m-1}/\tilde{\gamma}^{2}\) in Eq. 23, are proportional to the difference in populations of neighbouring magnetic sublevels [34]. This is why there are eight peaks in the non-linear Zeeman splitting, but nine populations in Fig. 2. As the outermost peaks are largest and have equal height, we conclude that an aligned state is created in the \(F=4\) ground state, with the majority of the atoms pumped into the \(F=4,m=\pm 4\) states. The optical pumping is not perfect as some of the atoms are pumped into the other magnetic sublevels. This is due to the non-zero longitudinal relaxation rate \(\Gamma_{1}\).
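A sketch of how such a fit can be set up is shown below. The parameter packing (four amplitudes, \(\nu_{L}\), \(\delta\), \(\tilde{\gamma}\)) mirrors the seven free parameters described above, while the synthetic data and starting values are placeholders standing in for the measured \(R(\nu_{\rm RF})\).

```python
import numpy as np
from scipy.optimize import curve_fit

def spectrum(nu_rf, A43, A32, A21, A10, nu_L, delta, gamma):
    """Magnitude R of Eq. 23: eight resonances with symmetric amplitudes."""
    amps = {4: A43, 3: A32, 2: A21, 1: A10, 0: A10, -1: A21, -2: A32, -3: A43}
    total = np.zeros_like(nu_rf, dtype=complex)
    for m, A in amps.items():
        nu_m = nu_L - delta * (m - 0.5)
        total += A * (1 + 1j * (nu_rf - nu_m) / gamma) / ((nu_rf - nu_m) ** 2 + gamma ** 2)
    return np.abs(total)

# Synthetic example standing in for the data of Fig. 4.
nu = np.linspace(2.037e6, 2.051e6, 600)
true = (1.0, 0.4, 0.15, 0.05, 2.044e6, 909.0, 230.0)
data = spectrum(nu, *true) + np.random.default_rng(0).normal(0, 1e-8, nu.size)

p0 = (1.0, 0.5, 0.2, 0.1, 2.044e6, 900.0, 250.0)   # starting guesses
popt, _ = curve_fit(spectrum, nu, data, p0=p0)
print(f"fitted nu_L = {popt[4]/1e6:.4f} MHz, 7*delta = {7*popt[5]/1e3:.2f} kHz")
```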
We now proceed with characterising the magnetic field sensitivity of the paraffin-coated vapour cell.
Figure 3: (a) Schematic of an alignment-based magnetometer. The laser light propagates along the \(y\)-direction and is \(z\)-polarised. Components include: half-wave plates (\(\lambda/2\)), polarising beam splitters (PBS), a vapour cell (Cell) and a balanced photodetector (BPD). Static \(\mathbf{B}_{0}=B_{0}\mathbf{\hat{z}}\) and oscillating magnetic fields \(\mathbf{B}_{\rm RF}(t)=B_{\rm RF}(t)\mathbf{\hat{x}}\) are applied at the position of the vapour cell. (b) Photo of the paraffin-coated cell. (c) Photo of the buffer gas cell. (d) Photo of the buffer gas cell surrounded by a Shapal ceramic cylinder, heating wires and Kapton tape.
These measurements were carried out at a smaller static magnetic field \(B_{0}\) corresponding to a Larmor frequency of around 10 kHz. A 10 \(\mu\)W light beam passed through the cell and a 4.22 nT\({}_{\text{RMS}}\) (20 mV\({}_{\text{RMS}}\)) oscillating magnetic field was applied. A magnetic resonance signal is shown in Fig. 5(a), where the RF frequency was swept between 9 kHz and 11.5 kHz and \(\nu_{L}=10.25\) kHz. From this, the peak of the resonance signal is extracted and divided by the applied oscillating magnetic field to give a conversion between the lock-in amplifier readout and the corresponding RF field amplitude \(B_{\text{RF}}\). Once this calibration was completed in Fig. 5(a), the lock-in demodulation frequency was fixed to the Larmor frequency, the RF amplitude was set to zero (\(B_{\text{RF}}=0\)), and a 4 minute time trace of the intrinsic noise of the OPM was taken (see Fig. 5(b)). Following on from this, the light hitting the balanced photodetector was completely blocked and another time trace obtained (data not shown). The sensitivity to small oscillating magnetic fields, i.e., the intrinsic OPM noise, is 480 fT/\(\sqrt{\text{Hz}}\) for \(X\) and 460 fT/\(\sqrt{\text{Hz}}\) for \(Y\). This was calculated using the methods described in Ref. [24] by taking the standard deviations (SDs) of 240\(\times\)1 s averaged segments, which are included in the legend of Fig. 5(b). The SDs with the light blocked are only just below at 410 fT/\(\sqrt{\text{Hz}}\) for \(X\) and 380 fT/\(\sqrt{\text{Hz}}\) for \(Y\). This noise is mainly due to electronic noise of the balanced photodetector and also due to a small contribution from the electronic noise of the data-acquisition system. The signal size and thereby the sensitivity could be improved by heating the vapour cell [23; 16] and using a larger vapour cell. A fundamental limit to the sensitivity is given by the spin-projection noise [32; 16]
\[\delta B_{\text{spin}}=\frac{2\hbar}{g_{F}\mu_{B}\sqrt{nVT_{2}}}, \tag{24}\]
where \(g_{F}=1/4\) for the \(F=4\) Cs ground state, \(n\sim 2.2\times 10^{16}\) m\({}^{-3}\) (\(T\sim 18.5^{\circ}\)C) is the number density of Cs atoms, \(T_{2}\sim 1/(\pi(230\ \text{Hz}))\sim 1.4\) ms is the transverse relaxation time and \(V=(5\ \text{mm})^{3}\) is the volume of the whole cell, as all the atoms in the cell are probed. The sensitivity is estimated to be \(\delta B_{\text{spin}}\sim 50\) fT/\(\sqrt{\text{Hz}}\) using the numbers above. A balanced photodetector with reduced electronic noise would help us get closer to this quantum-limited sensitivity.
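For reference, Eq. 24 with the numbers quoted above can be evaluated as follows; the physical constants \(\hbar\) and \(\mu_{B}\) are the only inputs not taken from the text.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
mu_B = 9.2740100783e-24     # J/T

def spin_projection_noise(g_F, n, V, T2):
    """Quantum-limited field sensitivity of Eq. 24, in T/sqrt(Hz)."""
    return 2 * hbar / (g_F * mu_B * np.sqrt(n * V * T2))

# Paraffin-coated cell numbers quoted in the text.
n  = 2.2e16                 # Cs number density, m^-3, at ~18.5 C
V  = (5e-3) ** 3            # whole (5 mm)^3 cell volume, m^3
T2 = 1 / (np.pi * 230.0)    # s, from the ~230 Hz HWHM linewidth
print(f"{spin_projection_noise(0.25, n, V, T2)*1e15:.0f} fT/sqrt(Hz)")   # ~50
```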
## V Buffer gas cell
We now carry out experiments with a hand-blown cylindrical buffer gas cell (5 mm length, 5 mm diameter) filled with Cs as well as N\({}_{2}\) buffer gas (see Fig. 3(c)). The buffer gas cell is surrounded by a Shapal ceramic cylinder, which is chosen for its high thermal conductivity. The ceramic cylinder is wrapped in a non-magnetic resistive twisted wire and wrapped with heat insulator aerogel and Kapton tape as shown in Fig. 3(d). The buffer gas cell can then be heated and kept at an elevated temperature by running current through the twisted wire.
The N\({}_{2}\) buffer gas pressure was determined using absorption spectroscopy as described by Andalkar [37]. The laser power was kept low for these absorption measurements to avoid any optical pumping effects. An absorption spectrum of the buffer gas cell is obtained, plotted on top of an absorption spectrum of a pure Cs cell (75 mm length and kept at room-temperature) in Fig. 6. The pure cell only contains Cs (and neither contains paraffin or buffer gas) and is used as a frequency reference. The absorption spectrum for the pure cell shows four absorption resonances separated by ground and excited state hyperfine splittings (9.2 GHz, 1.2 GHz) as expected for Cs D1 spectroscopy. The absorption resonances have a Voigt lineshape, which is a convolution of a Lorentzian and Gaussian lineshape. For the pure cell, the Gaussian Doppler width is much larger than the Lorentzian natural linewidth 4.6 MHz full width at half maximum (FWHM) of the Cs excited state. For a buffer gas cell, collisions between buffer gas atoms and Cs atoms lead to Lorentzian pressure broadening as well as frequency shifts of the absorption resonances, as seen in Fig. 6. The pressure broadening is extracted by fitting the \(F=3\to F^{\prime}=3\) and \(F=3\to F^{\prime}=4\) absorption resonances to a sum of two Voigt profiles and using their relative hyperfine strengths (1/4 and 3/4, respectively) and then repeating the procedure for \(F=4\to F^{\prime}=3\) and \(F=4\to F^{\prime}=4\), with hyperfine strengths of 7/12 and 5/12, respectively. The Doppler width \(\Gamma_{G}\) is fixed (374 MHz FWHM at 51\({}^{\circ}\)C) and the Lorentzian \(\Gamma_{L}\) (1.26(0.05) GHz) is fitted, corresponding to a pressure of 65(3) Torr, using the conversion of 19.51 MHz/Torr from [37] for the D1 pressure broadening with N\({}_{2}\). The pressure can also be extracted from the shift - 0.54(0.01) GHz in resonance frequencies, which corresponds to a pressure of 65(1) Torr.
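The conversions used above can be reproduced directly from the quoted fit results; the Voigt model shown below is the generic lineshape assumed for each hyperfine component, not the exact fitting code used for Fig. 6.

```python
import numpy as np
from scipy.special import voigt_profile

def voigt(nu, nu0, fwhm_G, fwhm_L, area):
    """Voigt line: Gaussian Doppler width convolved with Lorentzian pressure broadening."""
    sigma = fwhm_G / (2 * np.sqrt(2 * np.log(2)))
    gamma = fwhm_L / 2
    return area * voigt_profile(nu - nu0, sigma, gamma)

print(voigt(0.0, 0.0, 374e6, 1.26e9, 1.0))      # peak value of one fitted component

# Lorentzian broadening -> N2 pressure, using the 19.51 MHz/Torr coefficient [37].
broadening_rate = 19.51e6        # Hz/Torr
Gamma_L = 1.26e9                 # fitted Lorentzian FWHM, Hz
pressure = Gamma_L / broadening_rate
print(f"pressure from broadening: {pressure:.0f} Torr")      # ~65 Torr

# The measured shift of -0.54 GHz at this pressure implies a shift rate of
shift = -0.54e9                  # Hz
print(f"implied shift rate: {shift/pressure/1e6:.1f} MHz/Torr")
```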
Our alignment-based magnetometer uses \(\pi\)-polarized light resonant with the \(F=4\to F^{\prime}=3\) transition (see Fig. 1(b)), as in this case, the \(F=4,m=\pm 4\) states are dark states and atoms become optically pumped into those states with
Figure 4: Non-linear Zeeman splitting of the magnetic resonances using a paraffin-coated cell. The magnitude \(R\) is fitted to Eq. 23. The fit is included as a dotted line. The magnetic resonances for \(m=4\to m=3\) and \(m=-3\to m=-4\), with different Larmor frequencies, are indicated.
equal probability, creating the spin-alignment, as depicted in Fig. 2(b). Note that for \(\pi\)-polarized light resonant with the \(F=4\to F^{\prime}=4\) transition, the \(F=4,m=0\) sublevel will be a dark state instead. With buffer gas pressure broadening, the \(F=4\to F^{\prime}=3\) and \(F=4\to F^{\prime}=4\) resonances begin to overlap. From our fit, we deduce that the overlap is only \(\sim 10\%\) for our pressure of 65 Torr N\({}_{2}\) (see Fig. 6 and the thin dotted vertical line). At higher pressures the two transitions will overlap even more. This is problematic for an alignment-based magnetometer as the light in this case will drive both \(F=4\to F^{\prime}=3\) and \(F=4\to F^{\prime}=4\) transitions at the same time. The \(F=4,m=\pm 4\) states are then not dark states and significantly less spin-alignment is created.
To verify whether optical pumping into the \(F=4,m=\pm 4\) states is possible with the 65 Torr N\({}_{2}\) buffer gas cell where the excited hyperfine states partially overlap (\(\sim 10\%\)) and where quenching is the main de-excitation mechanism as described previously, once again the static field is adjusted to be large (\(B_{0}=8.38\) G) and a magnetic resonance spectrum is recorded (see Fig. 7). Again we see the magnetic resonances split due to the non-linear Zeeman effect, and the two outermost resonances have the largest and equal heights. The frequency difference between the \(m=4\to m=3\) transition and the \(m=-3\to m=-4\) transition is found experimentally to be \(|\Delta\nu_{4,3}-\Delta\nu_{-3,-4}|=13.2(0.1)\) kHz from a fit of the data in Fig. 7 to Eq. 23, which agrees well with the value \(7\delta=13.1\) kHz calculated from Eqs. 21 and 22.
This experimentally demonstrates that it is possible to generate a spin-aligned state in the 65 Torr N\({}_{2}\) buffer gas cell.
Figure 5: (a) (b) Sensitivity measurement of the paraffin-coated alignment-based magnetometer (\(T\sim 20^{\circ}\)C) at a Larmor frequency of \(\nu_{L}=10.25\) kHz. (a) Magnetic resonance with the RF frequency swept between 9 and 11.5 kHz. (b) A 240 s time trace of the intrinsic OPM noise with the lock-in amplifier demodulating signals at \(\nu_{L}\). (c) (d) Sensitivity measurement of the 65 Torr N\({}_{2}\) (\(T\sim 55^{\circ}\)C) at a Larmor frequency of \(\nu_{L}=10.04\) kHz. (c) Magnetic resonance with the RF frequency swept between 8 and 12.5 kHz. (d) A 240 s time trace of the intrinsic OPM noise with the lock-in amplifier demodulating signals at \(\nu_{L}\).
This is achieved by optically pumping more Cs atoms into the \(m=\pm 4\) states than into the other magnetic sublevels of the \(F=4\) ground state. It is expected that better optical pumping into the \(m=\pm 4\) states will be achieved if a smaller buffer gas pressure is used, as there will be less unwanted pumping on the \(F=4\to F^{\prime}=4\) transition. A higher ratio \(R_{p}/\Gamma_{1}\) (see Eq. 19) will also increase pumping into the \(m=\pm 4\) states. The drawback of a lower buffer gas pressure, however, is that the atoms will diffuse more quickly to the walls, leading to a smaller \(T_{2}\) time and hence a less sensitive OPM. These two processes compete and need to be taken into consideration when selecting the optimal buffer gas pressure for an alignment-based magnetometer.
We now characterise the magnetic field sensitivity of the buffer gas cell using the same procedure which was used for the paraffin-coated cell. The optimal light power was found to be 30 \(\mu\)W. A magnetic resonance signal at 10 kHz was obtained with the 65 Torr N\({}_{2}\) cell in Fig. 5(c). A 240 s time trace with the RF field turned off is shown in Fig. 5(d). The sensitivity of the OPM, defined as the SD of the 240\(\times\)1 s data points in Fig. 5(d), is 310 fT/\(\sqrt{\rm Hz}\) for \(X\) and 340 fT/\(\sqrt{\rm Hz}\) for \(Y\). The sensitivity of the buffer gas cell therefore exceeds that of the paraffin-coated cell in this paper. The sensitivity is mainly limited by laser shot noise and electronic noise of the balanced photodetector.
We use Eq. 24 to calculate the predicted quantum-limited spin-projection noise. The number density is \(n=60\times 10^{16}\) m\({}^{-3}\) at \(T=55^{\circ}\)C and \(T_{2}=1/(\pi(800\ \rm Hz))\). In a buffer gas cell only the atoms inside the beam are probed, unlike in a paraffin-coated cell where all the atoms in the cell are probed. We therefore use the volume inside the beam \(V=V_{\rm beam}=3.9\times 10^{-9}\) m\({}^{3}\), where the diameter of the beam is \(\sim 1\) mm and the length of the cell is 5 mm. Inserting the numbers above, we estimate the atomic noise to be \(\delta B_{\rm spin}\sim 100\) fT/\(\sqrt{\rm Hz}\). A better sensitivity could be obtained by increasing the diameter and length of the cell, whilst increasing the size of the beam. If a 5 mm diameter beam was used, probing the whole cell, the atomic noise is estimated to be \(\delta B_{\rm spin}\sim 20\) fT/\(\sqrt{\rm Hz}\). Note that many atoms are lost to the \(F=3\) ground state (see Fig. 2), reducing the number of Cs atoms that are probed. Using a second laser beam (typically called a repumper) bringing the atoms out of \(F=3\) and back into \(F=4\) would also increase the number of probed atoms, improving the sensitivity of the RF OPM.
## VI Conclusions
The results presented in this paper demonstrate the first implementation of a one-beam radio-frequency optically pumped magnetometer (RF OPM), the alignment-based magnetometer, being used with a buffer gas cell. The sensitivity of the alignment-based magnetometer with Cs alkali vapour and 65 Torr N\({}_{2}\) buffer gas was 325 fT/\(\sqrt{\rm Hz}\). This sensitivity could be further improved upon by using a balanced photodetector with lower electronic noise. Further studies could investigate the optimal vapour cell size, operating temperature and buffer gas pressure. Although our experiments were carried out using hand-blown vapour cells, we expect similar performance with microfabricated buffer gas cells. Our work opens up the possibility of the commercialisation of compact, robust and portable RF OPMs using only one laser beam with buffer gas cells, a much more scalable and commercially viable option than using paraffin-coated vapour cells.
Figure 6: Absorption spectrum of the D1 line with a 65(3) Torr N\({}_{2}\) cell alongside a frequency reference which is a pure Cs cell. The buffer gas cell is heated to 51\({}^{\circ}\)C corresponding to a density of 43.7\(\times\)10\({}^{16}\) m\({}^{-3}\) Cs atoms and a Doppler linewidth \(\Gamma_{G}=374\) MHz. The \(F=3\to F^{\prime}=3,4\) and \(F=4\to F^{\prime}=3,4\) transitions are fitted to Voigt profiles and the Lorentzian width \(\Gamma_{L}\) and pressure shift are extracted.
Figure 7: Non-linear Zeeman splitting of the magnetic resonances using a 65 Torr N\({}_{2}\) buffer gas cell heated to \(\sim 55^{\circ}\)C. The magnitude \(R\) is fitted to Eq. 23. The magnetic resonances for \(m=4\to m=3\) and \(m=-3\to m=-4\) are indicated.
## Acknowledgments
This work was supported by the UK Quantum Technology Hub in Sensing and Timing, funded by the Engineering and Physical Sciences Research Council (EPSRC) (Grant No. EP/T001046/1), the QuantERA grant C'MON-QSENS! by EPSRC (Grant No. EP/T027126/1), the Nottingham Impact Accelerator/EPSRC Impact Acceleration Account (IAA), and the Novo Nordisk Foundation (Grant No. NNF200C0064182). We thank Janek Kolodynski and Marcin Kozbial for reading and commenting on the manuscript.
## Data Availability Statement
Further data are available from the authors upon request.
|
2310.00891 | Photon Spacecraft and Aerocapture: Enabling Small Low-Circular Orbiters
at Mars and Venus | With advancements in low-cost launchers and small interplanetary spacecraft,
NASA has recognized the potential of small missions to perform focused
planetary science investigations at Mars and Venus. The EscaPADE, part of the
NASA SIMPLEx program will deliver two small spacecraft to elliptical orbits
around Mars using the Photon spacecraft. Orbit insertion, particularly to
low-circular orbits requires significant propellant, taking up a substantial
fraction of the Photon wet mass and present a significant challenge for small
missions. The large $\Delta$V requirements for low-circular orbit make it
difficult to insert small satellites into these orbits even with the highly
capable Photon, as the total $\Delta$V for Earth escape and orbit insertion
exceeds its capability. Drag modulation aerocapture offers a promising
alternative, using the atmospheric drag to obtain the large $\Delta$V. The
study shows how the Photon when combined with drag modulation aerocapture can
deliver small orbiters to low-circular orbits, enabling a wide range of small
orbiter missions. Aerocapture eliminates the need for Photon to provide 2 to
3.5 km/s of $\Delta$V for orbit insertion, which translate into mass and cost
savings, and can enable frequent low-cost small orbiters and small satellite
constellations at Mars and Venus in the near future. | Athul Pradeepkumar Girija | 2023-10-02T04:17:20Z | http://arxiv.org/abs/2310.00891v1 | # Photon Spacecraft and Aerocapture: Enabling Small Low-Circular Orbiters at Mars and Venus
###### Abstract
With advancements in low-cost launchers and small interplanetary spacecraft, NASA has recognized the potential of small missions to perform focused planetary science investigations at Mars and Venus. The EscaPADE, part of the NASA SIMPLEx program will deliver two small spacecraft to elliptical orbits around Mars using the Photon spacecraft. Orbit insertion, particularly to low-circular orbits requires significant propellant, taking up a substantial fraction of the Photon wet mass and present a significant challenge for small missions. The large \(\Delta\)V requirements for low-circular orbit make it difficult to insert small satellites into these orbits even with the highly capable Photon, as the total \(\Delta\)V for Earth escape and orbit insertion exceeds its capability. Drag modulation aerocapture offers a promising alternative, using the atmospheric drag to obtain the large \(\Delta\)V. The study shows how the Photon when combined with drag modulation aerocapture can deliver small orbiters to low-circular orbits, enabling a wide range of small orbiter missions. Aerocapture eliminates the need for Photon to provide 2 to 3.5 km/s of \(\Delta\)V for orbit insertion, which translate into mass and cost savings, and can enable frequent low-cost small orbiters and small satellite constellations at Mars and Venus in the near future.
Photon, Aerocapture, Low-Cost Mission, Mars, Venus
## I Introduction
With advancements in low-cost launchers and small interplanetary spacecraft, NASA has recognized the potential of low-cost missions to perform focused planetary science investigations at Mars and Venus in the near future [1]. The Escape and Plasma Acceleration and Dynamics Explorers (EscaPADE) mission, part of the Small Innovative Missions for Planetary Exploration (SIMPLEx) program, will deliver two identical 200 kg spacecraft to Mars in 2024 [2]. The Photon is first delivered to low-Earth orbit, where it uses its engines to perform a series of orbit raising burns, and then performs a trans-Mars injection burn. Upon arrival at Mars, the Photon burns again to capture into orbit and then performs orbit reduction maneuvers to deliver the two spacecraft into their 160 x 8400 km science orbit. At a cost of only $79 million including the launch vehicle, EscaPADE will be the first demonstration of a standalone low-cost interplanetary orbiter mission. Rocket Lab has also made the commitment to fly a private mission to deliver a 20 kg probe for in-situ sampling of the Venusian clouds, also using the Photon high-performance spacecraft (shown in Figure 1) with an estimated budget under $10 million [3]. Orbit insertion, particularly to low-circular orbits, requires significant propellant (about 20-40% of the Photon wet mass) and presents a significant challenge for small missions [4, 5]. Aerocapture can be used to eliminate the substantial propellant need for orbit insertion [6, 7]. The present study shows how the Photon spacecraft, when combined with drag modulation aerocapture, can enable a wide range of low-cost small orbiter missions and small satellite constellations at Mars and Venus in the near future.
Figure 1: (Left) Photon upper stage with the Venus probe inside the Electron rocket fairing, courtesy of Rocket Lab. (Right) Schematic of the drag modulation aerocapture concept.
## II Challenge of Orbit Insertion
The high-energy Photon is a highly-capable spacecraft which can provide nearly 3 km/s of \(\Delta\)V. The EscaPADE mission for example uses about 1.1 km/s for the orbit raise and escape burns combined, and about 1.3 km/s for orbit insertion and reduction to a 165 x 6400 km elliptical orbit. For the EscaPADE mission, the Photon spacecraft weighs approximately 500 kg wet when delivered to LEO, of which about 150 kg is used for orbit raise and Earth escape. Mars orbit insertion and reduction consumes an additional 130 kg of propellant, which leaves about 200 kg in the elliptical science orbit. However, orbit insertion to low-circular orbits require significantly more \(\Delta\)V, approaching 2 km/s at Mars and 3.5 km/s at Venus as seen in Figure 2. The large \(\Delta\)V requirements for low-circular orbit make it difficult to insert small satellites into these orbits even with the highly capable Photon spacecraft as the total \(\Delta\)V for Earth escape and orbit insertion exceeds its capability. Drag modulation aerocapture offers a promising alternative, using the atmospheric drag to obtain the large \(\Delta\)V with almost no propellant [8, 9]. Instead of the Photon spacecraft capturing itself into orbit with propulsion, it will release one or more small drag modulation aerocapture vehicles once it approaches the planet's sphere of influence. The small spacecraft then independently enters the Martian atmosphere and performs aerocapture. Thus the Photon spacecraft only needs a propulsion system big enough to perform the Earth escape and correction maneuvers (about 1.2 km/s), while the 2-3.5 km/s required for orbit insertion is obtained from aerocapture. The mass savings from not using propulsion for orbit insertion can be used to accommodate more science payload, or alternatively used to realize a smaller and cheaper Photon with significantly less propellant.
Figure 2. Orbit insertion \(\Delta\)V at Mars and Venus, as function of the orbit apoapsis.
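The trend of Figure 2 can be reproduced with a simple vis-viva estimate, sketched below. The arrival \(v_{\infty}\) values are assumptions chosen to be representative of Mars 2020-like and Akatsuki-like approach trajectories; they are not taken from the referenced trajectory data.

```python
import numpy as np

# Propulsive Delta-V to capture from a hyperbolic approach into an orbit with a
# given periapsis/apoapsis altitude (single burn at periapsis, vis-viva relations).
def insertion_dv(mu, r_planet, v_inf, h_peri, h_apo):
    r_p = r_planet + h_peri
    r_a = r_planet + h_apo
    v_hyp = np.sqrt(v_inf**2 + 2 * mu / r_p)              # hyperbolic periapsis speed
    v_ell = np.sqrt(mu * (2 / r_p - 2 / (r_p + r_a)))     # elliptical periapsis speed
    return v_hyp - v_ell

MARS  = dict(mu=4.2828e13, r_planet=3389.5e3)
VENUS = dict(mu=3.2486e14, r_planet=6051.8e3)

# Assumed arrival v_inf: ~2.65 km/s at Mars, ~3.5 km/s at Venus.
print("Mars  200x400 km:", insertion_dv(**MARS,  v_inf=2.65e3, h_peri=200e3, h_apo=400e3), "m/s")
print("Venus 200x400 km:", insertion_dv(**VENUS, v_inf=3.5e3,  h_peri=200e3, h_apo=400e3), "m/s")

# Sweeping the apoapsis reproduces the trend of Fig. 2: DV falls as apoapsis rises.
for h_apo in (400e3, 2000e3, 10000e3, 100000e3):
    dv = insertion_dv(**VENUS, v_inf=3.5e3, h_peri=200e3, h_apo=h_apo)
    print(f"Venus apoapsis {h_apo/1e3:7.0f} km -> {dv:5.0f} m/s")
```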
## III Aerocapture at Mars
Insertion of a small satellite into a 200 x 400 km low-circular orbit at Mars is considered from the interplanetary trajectory used by the Mars 2020 mission. The orbit insertion \(\Delta\)V is 2054 m/s. However, instead of performing a propulsive burn, the Photon targets the aim point for aerocapture and releases the small satellite on a course for an entry trajectory within the drag modulation aerocapture corridor [-9.93, -8.96 deg]. Figure 3 shows the drag modulation aerocapture trajectory for entry at -9.1 deg, near the shallow limit [10]. During the aerocapture maneuver, a \(\Delta\)V of 2086 m/s is obtained from atmospheric drag. At the first apoapsis, a 41 m/s periapsis raise burn is performed by the small satellite to achieve its desired 200 x 400 km orbit. The implications of the \(\Delta\)V offered by aerocapture are discussed below. Without aerocapture, assuming 1.2 km/s for orbit raising and escape, and 2 km/s for orbit insertion, the total \(\Delta\)V required to be supplied by the Photon is about 3.2 km/s. Though this is technically viable with the Photon, it puts it at the upper limit of the specifications even when not considering margins. With aerocapture, the Photon only needs to supply the 1.2 km/s required for Earth escape plus any margins, implying the Photon propulsion system can be much smaller and less expensive. When considering the mass of the aerocapture system, the performance benefit compared to propulsion is only about 20%, but nevertheless it may still provide significant cost savings [11].
Figure 3: Drag modulation aerocapture trajectory at Mars.
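The small periapsis raise burn quoted above can be estimated with the same two-body relations; the post-aerocapture periapsis altitude used here is an assumed illustrative value.

```python
import numpy as np

def apoapsis_speed(mu, r_p, r_a):
    """Orbital speed at apoapsis for an ellipse with the given apsis radii."""
    return np.sqrt(mu * (2 / r_a - 2 / (r_p + r_a)))

mu, R = 4.2828e13, 3389.5e3                  # Mars GM and mean radius
r_a = R + 400e3                              # target apoapsis
# Burn at apoapsis to raise periapsis from an assumed ~40 km (post-pass) to 200 km.
dv = apoapsis_speed(mu, R + 200e3, r_a) - apoapsis_speed(mu, R + 40e3, r_a)
print(f"periapsis raise burn: {dv:.0f} m/s")   # ~40 m/s, cf. the 41 m/s quoted above
```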
## IV Aerocapture at Venus
Insertion of a small satellite into a 200 x 400 km low-circular orbit at Venus is considered from the interplanetary trajectory used by the Akatsuki mission. The orbit insertion \(\Delta\)V is 3515 m/s, which is higher than what the Photon system can reasonably allocate for orbit insertion. Hence insertion into low-circular orbits at Venus is not viable with Photon. However, aerocapture can provide such a large \(\Delta\)V and thus can enable small satellite missions to Venus which require such low-circular orbits. The Photon will target the aerocapture entry corridor at [-5.54, -5.12] deg, release the aerocapture vehicle on its approach trajectory, and will then fly by Venus. Figure 4 shows the drag modulation aerocapture trajectory for entry at -5.2 deg, providing a \(\Delta\)V of 3466 m/s. At the first apoapsis, a 30 m/s periapsis raise burn is performed by the small satellite to achieve its desired 200 x 400 km orbit. Without aerocapture, the total \(\Delta\)V required to be supplied by the Photon is about 4.7 km/s. A recent NASA study assessing the cost drivers of small satellite missions found that \(\Delta\)V is the largest driver of mission cost [12]. With aerocapture providing the 3.5 km/s, the Photon only needs to provide 1.2 km/s, which translates into a much smaller Photon propulsion system design and significant cost savings. Thus the combination of the Photon spacecraft, which can launch independently on Electron or as a secondary payload to GTO, and aerocapture can enable frequent small low-cost Venus orbiter missions.
Figure 4: Drag modulation aerocapture trajectory at Venus.
## V Applications for Future Missions
The Photon spacecraft has demonstrated its viability for enabling small interplanetary science missions through the planned ESCAPADE and the Venus probe missions. When combined with drag modulation aerocapture, Photon offers even more capabilities for small low-cost orbiter missions, some of which are discussed in this section.
The ESCAPADE mission will demonstrate it is possible for small missions to enter Martian orbit using the Photon propulsion system. At Mars, the performance benefit in terms of delivered mass offered by aerocapture is small. However, it still has applications for missions such as an aerocapture technology demonstration. Due to the different atmospheric structure, Mars offers a much more benign aerothermal environment than Earth or Venus [13, 14]. The Photon can launch on an Electron or as a secondary payload to GTO. The Photon only needs about 1.2 km/s for Earth escape, and will act as a cruise stage carrying a small low-cost aerocapture technology demonstrator [15].
A single Photon can only deliver a single spacecraft to an orbit around a planet if its propulsion system is needed for orbit insertion. However, the Photon, when acting as a cruise stage, can carry multiple small satellites to form constellations around Mars and Venus. As seen in LEO, small imaging and radar satellite constellations can perform investigations previously done with large satellites at a fraction of the cost. By not having to do the orbit insertion burn, the Photon itself just needs enough propellant for the Earth escape and trajectory correction maneuvers. On approach, the Photon can perform small divert maneuvers to target each small satellite onto an approach trajectory for a different inclination orbit. The different satellites will aerocapture into their respective orbits, while the Photon will fly by the planet. Considering an ESCAPADE-like Photon which weighs 500 kg in LEO, and assuming about 150 kg of propellant is used for Earth escape, about 350 kg remains. Of this, assuming 100 kg is required for the Photon bus, this leaves about 250 kg for the small satellites. Assuming each small satellite weighs 50 kg with the aerocapture system mass included, this allows five satellites to be carried by a single Photon. Each of these satellites can be delivered to a different orbital plane or inclination using small divert maneuvers by the Photon, enabling a constellation of five satellites to be established in a single launch. At Mars, these could be, for example, imaging satellites which can study the Martian surface or its climate. At Venus, these could be small radar satellites which can map the entire surface.
At Venus, the combination of Photon and aerocapture is also applicable to larger missions which have the Photon as a secondary payload cruise stage for independent small satellites [16], and for low-circular orbits required for atmospheric sample return missions [17]. At Mars, in addition to a technology demonstration, it could enable frequent low-cost orbiter missions at a much faster cadence than the current missions. A technology demonstration mission at Mars will bring aerocapture into the realm of flight heritage after several decades of demonstration attempts [18].
In addition to near term missions at Mars and Venus, Rocket Lab also envisages small missions to the outer Solar System with the high-energy Photon [19]. Though the large distances and the need for radioisotope power make such small missions less realistic, they may become viable with advances in telecom and power system technologies. However, the demonstration of aerocapture at Mars or Venus will enhance its readiness for outer planet missions where its performance benefit is significantly greater [20, 21]. Even though aerocapture is not currently considered for the Uranus Orbiter and Probe, which is the top priority Flagship mission of the next decade [22, 23], studies have shown aerocapture offers significant mission design advantages [24]. Aerocapture has also been shown to provide significant benefits for a New Frontiers class Titan orbiter [25, 26], and a Neptune Flagship mission [27, 28].
## VI Conclusions
The study showed how the Photon spacecraft, when combined with drag modulation aerocapture, can deliver small orbiters to low-circular orbits, enabling a wide range of small orbiter missions at Mars and Venus. Aerocapture eliminates the need for Photon to provide 2-3.5 km/s of \(\Delta\)V for orbit insertion, which translates into mass and cost savings, and can enable small satellite imaging and radar constellations at Mars and Venus in the near future.
## VII Data Availability
The results presented in the paper can be reproduced using the open-source Aerocapture Mission Analysis Tool (AMAT) v2.2.22. The data and code used to make the study results will be made available by the author upon request.
|
2302.06029 | Emotion Detection in Unfix-length-Context Conversation | We leverage different context windows when predicting the emotion of
different utterances. New modules are included to realize variable-length
context: 1) two speaker-aware units, which explicitly model inner- and
inter-speaker dependencies to form distilled conversational context, and 2) a
top-k normalization layer, which determines the most proper context windows
from the conversational context to predict emotion. Experiments and ablation
studies show that our approach outperforms several strong baselines on three
public datasets. | Xiaochen Zhang, Daniel Tang | 2023-02-13T00:06:47Z | http://arxiv.org/abs/2302.06029v1 | # Emotion Detection in Unfix-length-Context Conversation
###### Abstract
Emotion Detection in conversation is playing a more and more important role in the dialogue system. Existing approaches to Emotion Detection in Conversation (EDC) use a fixed context window to recognize speakers' emotions, which may lead to either scantiness of key context or interference of redundant context. In response, we explore the benefits of variable-length context and propose a more effective approach to EDC. In our approach, we leverage different context windows when predicting the emotion of different utterances. New modules are included to realize variable-length context: 1) two speaker-aware units, which explicitly model inner- and inter-speaker dependencies to form distilled conversational context, and 2) a top-k normalization layer, which determines the most proper context windows from the conversational context to predict emotion. Experiments and ablation studies show that our approach outperforms several strong baselines on three public datasets.
Keywords:Conversation Emotion Detection Transformer
## 1 Introduction
Emotion Detection in Conversation (EDC) is the task of predicting the speaker's emotion in conversation according to the previous context and current utterance. Technical breakthroughs in EDC promote the development of applications in a wide range of domains, such as healthcare, political elections, consumer products and financial services [23, 21, 24, 22, 20, 11, 15, 18]. Figure 1 shows an example of EDC. Existing approaches [4, 6] consider a fixed context window (i.e., a fixed number of preceding utterances), which may suffer from two issues: (1) scantiness of key context when the window is too small; or (2) interference of redundant context when the window is too large, making it difficult to attend to the right context. In the example of Figure 1, knowing that the current speaker is Harry is beneficial to choosing the right context window, since one of the preceding utterances explicitly mentions Harry, which indicates that it may contain information relevant to the current utterance. That is, speaker dependencies are key indicators for determining the right context window. Sequential and speaker dependencies are both critical to conversation understanding [5], and speaker dependencies can be further categorized into inner- and inter-speaker dependencies [8]. Firstly, we model the above dependencies in an attention-based utterance encoder and two speaker-aware units to generate
conversational context representation, where inner- and inter-speaker dependencies are explicitly modeled to help detect the ideal context windows. Next, a top-k normalization layer generates top-k best context windows and their probability weights based on the dimension-reduced context representation. Lastly, we predict the emotion of the current utterance by softly leveraging the top-k best context windows. Experiments show that our approach achieves competitive performance on three public conversational datasets: 66.35% F1 on IEMOCAP [2]; 61.22% F1 on DailyDialog [10]; and 38.93% F1 on EmoryNLP [25]. Extensive ablation studies demonstrate the contribution of each component in our approach as well as the necessity of using variable-length context.
We summarize our contributions as threefold:
* For the first time, we alleviate the context scantiness and context redundancy problems in EDC by varying the length of context.
* We propose a new approach that considers different context windows for different instances to conduct emotion prediction, where 1) speaker dependency is explicitly modeled by new speaker-aware units to help the detection of ideal context windows and 2) a new top-k normalization layer that generates top-k best context windows as well as their weights.
* We achieve competitive results on three public EDC datasets and conduct an elaborate ablation study to verify the effectiveness of our approach.
Figure 1: A multi-party EDC example. The ideal context window to Harry’s emotion would include exactly two preceding utterances, among which Tony provides evidence for Harry being happy. Utterances ahead of Tony are redundant since they are irrelevant to the current turn of conversation.
## 2 Related Work
Recent EDC studies are based on Deep Learning, which can be further categorized into three main kinds: RNN-based, GCN-based and Transformer-based models. RNN-based models have been well explored in the last few years. Poria et al. (2017) [16] first modeled the conversational context of EDC using Recurrent Neural Networks (RNNs) [14]. Hazarika et al. (2018) [8] took speaker information into account and Hazarika et al. (2018) [7] first modeled Inter-speaker dependencies. Majumder et al. (2019) [13] kept track of speakers' states and their method could be extended to multi-party conversations. Lu et al. (2020) [12] proposed an RNN-based iterative emotion interaction network to explicitly model the emotion interaction between utterances. Ghosal et al. (2019) [6] and Sheng et al. (2020) [19] adopted relational Graph Convolutional Networks (GCN) to model EDC, where the whole conversation was considered as a directed graph and they employed graph convolutional operation to capture the dependencies between vertices (utterances). However, converting conversations to graphs loses the temporal attributes of the original conversation. Owing to the excellent representation power of transformers [3], some researchers adapted them to EDC and got favorable results [9]. Recently, Ghosal et al. (2020) [4] incorporated commonsense knowledge extracted from pre-trained commonsense transformers COMET [1] into RNNs and obtained favorable results on four public EDC datasets. However, none of the above models regarded the context scantiness or the context redundancy problem as us.
## 3 Our Method
### Problem Formulation
A conversation consists of \(n\) temporally ordered utterances \(\{x_{1},\ldots,x_{n}\}\) and their speakers \(\{s_{1},\ldots,s_{n}\}\), where \(x_{i}\) denotes the \(i\)-th utterance, i.e., a sequence of words. At time step \(t\), the goal of EDC is to identify the most-likely categorical emotion label \(\hat{y}_{t}\) for speaker \(s_{t}\) given the current and preceding utterances as well as their speakers: \(\hat{y}_{t}=\operatorname*{argmax}_{y_{t}}P(y_{t}\mid x_{1:t},s_{1:t})\), where \(1:t\) denotes the first \(t\) elements.
### Model
As depicted in Figure 2, our approach consists of the following modules: (1) an utterance encoder that encodes sequential dependencies among utterances; (2) two speaker-aware units that explicitly encode inner- and inter-speaker dependencies to help detect the ideal context windows; (3) a multi-layer perceptron and a top-k normalization layer that generate a distribution over different context windows, from which we determine the top-k best context windows and their corresponding weights; and (4) a prediction module that generates the emotion distribution from the top-k best context windows with different probability weights. Utterance Encoder: The input of the utterance encoder is a sequence of tokens
with speaker information. At time step t, we generate the input sequence by prepending speaker information (i.e. the name of the speaker) to each utterance and then concatenating utterances up to time step t into a single sequence of tokens. The name of the speaker and the utterance are separated by special [SEP] token. The input sequence is fed into the base version of Roberta [12] to encode the sequential dependencies among utterances and generate contextual representation for each utterance:
\[\begin{split} u_{i}&=s_{i}\oplus[SEP]\oplus x_{i},\\ [g_{1},\dots,g_{t}]&=RoBERTa(\oplus_{i=1}^{t}u_{i}) \end{split} \tag{1}\]
where \(g_{i}\) represents the contextual representation for the utterance at time step \(i\), which is the RoBERTa output corresponding to the first token of \(u_{i}\). With a context window considering up to \(M\) previous time steps, the encoder outputs a sequence of vectors \([g_{t-M},\dots,g_{t-1},g_{t}]\), where \(g_{i}\in\mathcal{R}^{d}\).
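A minimal sketch of this input construction and encoding is given below using the HuggingFace RoBERTa implementation; the handling of special tokens, truncation to the model's length limit, and the example dialogue are simplifications.

```python
import torch
from transformers import RobertaTokenizerFast, RobertaModel

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")

def encode_context(speakers, utterances):
    """Return one contextual vector g_i per utterance u_i = s_i [SEP] x_i (Eq. 1)."""
    pieces, starts = [], []
    for s, x in zip(speakers, utterances):
        starts.append(len(pieces))                 # index of the first token of u_i
        pieces.extend(tokenizer.tokenize(f" {s}") + [tokenizer.sep_token]
                      + tokenizer.tokenize(f" {x}"))
    ids = tokenizer.convert_tokens_to_ids(pieces)
    with torch.no_grad():
        out = encoder(torch.tensor([ids])).last_hidden_state[0]   # (seq_len, d)
    return out[starts]                                            # one vector per utterance

# Toy dialogue echoing the Figure 1 example.
g = encode_context(["Tony", "Harry"], ["You got the offer!", "That is great news."])
print(g.shape)    # torch.Size([2, 768])
```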
Speaker-Aware Units: Our approach incorporates speaker dependencies to guide the detection of ideal context windows. Concretely, we propose two speaker-aware units to explicitly capture inner-speaker and inter-speaker dependencies. The two units have the same attention-based structure, but they do not share parameters. We first divide utterance contextual representations \([g_{t-M},\dots,g_{t-1}]\), into two subsets Ginner and Ginter depending on whether their corresponding speakers are the same as the current one. Each speaker-aware unit then takes the corresponding subset G and gt as input, and applies multi-head attention
Figure 2: Overall architecture of our approach.
with layer normalization to incorporate speaker dependencies:
\[\begin{split} o&=LayerNorm(c+g_{t}),\\ c&=Concat((head_{1},\dots,head_{h}),\Phi_{1}),\\ head_{i}&=Attention((g_{t},G,G)^{T}(\Phi_{2},\Phi_{ 3},\Phi_{4})),\Phi\in\mathcal{R}\end{split} \tag{2}\]
where the \(\Phi\)'s are the learnable parameters of the different layers in our model. Finally, we concatenate \(o^{inter}\) and \(o^{inner}\) into the vector \(z\) as \(z=[o^{inter};o^{inner}]\in\mathcal{R}^{2d}\).
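A compact PyTorch sketch of the two speaker-aware units is given below; it uses the library's built-in multi-head attention, so the projection parameters are packed differently from the \(\Phi\)'s in Eq. 2, while the splitting of past utterances by speaker follows the description above.

```python
import torch
import torch.nn as nn

class SpeakerAwareUnit(nn.Module):
    """One attention unit (Eq. 2): query = g_t, keys/values = a subset of past g_i."""
    def __init__(self, d, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, g_t, G):                      # g_t: (1, d), G: (k, d)
        if G.numel() == 0:                          # no matching past utterances
            return self.norm(g_t)
        c, _ = self.attn(g_t.unsqueeze(0), G.unsqueeze(0), G.unsqueeze(0))
        return self.norm(c.squeeze(0) + g_t)

def distilled_vector(inner_unit, inter_unit, g, speakers, t):
    """Split past representations by speaker identity and build z = [o_inter; o_inner]."""
    same  = [i for i in range(t) if speakers[i] == speakers[t]]
    other = [i for i in range(t) if speakers[i] != speakers[t]]
    o_inner = inner_unit(g[t:t+1], g[same])
    o_inter = inter_unit(g[t:t+1], g[other])
    return torch.cat([o_inter, o_inner], dim=-1)    # z in R^{2d}
```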
Context Window Distribution: Using the distilled vector \(z\), we generate a probability distribution over context windows ranging from 0 to \(M\). This is done via: (1) a multi-layer perceptron (MLP), which maps the distilled vector to scores of the context windows, and (2) a top-k normalization layer that turns these scores into a distribution over context windows.
Specifically, we first feed the distilled vector z into a two-layer MLP to get scores of context windows s:
\[\begin{split} h&=ReLU(z,\Phi_{5})\in\mathcal{R} _{h}^{d},\\ s&=MLP(h;\Phi_{6})\in\mathcal{R}^{M+1}\end{split} \tag{3}\]
Emotion Prediction from Top-K Best Context Windows: Instead of using only the context window with the highest probability to predict emotion, we use \(q=softmax(s+m)\) as soft labels, where \(m\) is a mask that keeps the top-K entries of \(s\) and sets the remaining entries to \(-\infty\) (the top-k normalization), and leverage all top-K context windows in prediction. As shown in Figure 2, our prediction module contains M + 1 context fields from 0 to M, where field i corresponds to the use of context window i. The input of each field, with a [CLS] at its front, is encoded by a field-specific contextual encoder, which has the same architecture as our utterance encoder. We apply a field-specific linear classifier to the encoder output for [CLS],
\[g_{CLS}^{i}\in\mathcal{R}^{d} \tag{4}\]
, to compute the emotion label distribution \(p^{i}\) given context window i:
\[p^{i}=softmax(g_{[CLS]}^{i};\Phi_{7})\in\mathcal{R}^{c}. \tag{5}\]
The final emotion label distribution \(\hat{p}\) combines top-K context window distribution and emotion label distributions given different context windows:
\[\hat{p}=\Sigma_{i\in top-K}\,q[i]\,p^{i}\in\mathcal{R}^{c}. \tag{6}\]
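The top-k normalization and the soft combination of Eq. 6 can be sketched as follows; the scores and per-field emotion distributions are random placeholders for illustration.

```python
import torch
import torch.nn.functional as F

def topk_normalize(scores, k):
    """Keep the k best context-window scores, mask the rest to -inf, softmax the survivors."""
    mask = torch.full_like(scores, float("-inf"))
    top = scores.topk(k, dim=-1).indices
    mask.scatter_(-1, top, 0.0)
    return F.softmax(scores + mask, dim=-1)          # q: exactly k non-zero weights

def predict(scores, field_distributions, k):
    """Eq. 6: emotion distribution as a q-weighted mixture over activated context fields."""
    q = topk_normalize(scores, k)                             # (M+1,)
    return (q.unsqueeze(-1) * field_distributions).sum(dim=0) # (C,)

scores = torch.tensor([0.2, 1.5, 0.7, -0.3])          # M+1 = 4 context windows
fields = torch.softmax(torch.randn(4, 6), dim=-1)     # per-field emotion distributions
print(predict(scores, fields, k=2))
```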
### Training
We optimize cross-entropy loss \(\mathcal{L}\) for each mini-batch \(\mathbf{B}\) of conversations:
\[\mathcal{L}=\Sigma_{i=1}^{|B|}\Sigma_{j=1}^{|B_{i}|}-log\hat{p}^{ij}[y_{ij}], \tag{7}\]
## 4 Experiment Design
### Dataset
We evaluate our approach on four publicly available datasets, IEMOCAP [2], DailyDialog [10], MELD [17] and EmoryNLP [25]. They differ in the number of interlocutors, conversation scenes, and the emotion labels. As shown in Table 1, the average conversation lengths of the four datasets differ a lot, with the maximum 49.23 for IEMOCAP and minimum 7.85 for DailyDialog. Moreover, the datasets hold varied data capacity and average utterance lengths. Following existing approaches, models for the datasets are independently trained and evaluated. For preprocessing, we follow Zhong et al. (2019) [26] to lowercase and tokenize the utterances in the datasets using Spacy.
### Baselines
To demonstrate the effectiveness of our approach, we compare it with several strong baselines as follows:
* DialogueRNN, an RNN-based ERC model that keeps track of the states of context, emotion, speakers, and listeners by several separate GRUs.
* DialogueGCN, a GCN-based ERC model, where adopt relational graph neural networks to model different types of relations between utterances in the conversation according to their temporal order and speakers.
* KET, a transformer-based model, which leverages external knowledge from emotion lexicon NRC VAD and knowledge base ConceptNet to enhance the word embeddings. They adopt a hierarchical attention-based strategy to capture contextual information.
* RoBERTa-BASE, the base version of Roberta. The inputs are concatenated utterances and the representation of the first subword from the last layer is fed to a simple linear emotion classifier. If the input length exceeds the limitation of Roberta, we discard the remote utterances at the utterance level.
* COSMIC, a strong ERC model which extracts relational commonsense features from COMET and utilizes several GRUs to incorporate the features to help emotion classification.
## 5 Experimental Result
### Main Results
The main results are reported in Table 2. Our approach achieves the best performance on IEMOCAP, DailyDialog and EmoryNLP datasets, surpassing COSMIC by 1.07%, 2.74%, and 0.82% F1 scores respectively. We owe the better performance of our approach over COSMIC to the consideration of variable-length context. Moreover, unlike COSMIC, our approach does not rely on external knowledge. For MELD, the result of our approach is also competitive,
outperforming all the baselines except COSMIC. We show that the slightly better performance of COSMIC is due to the use of commonsense knowledge (CSK). Our approach performs better than COSMIC without CSK. This indicates that external knowledge could be beneficial to the prediction of short utterances. We leave adding external knowledge to our approach as future work.
### Ablation Study
In order to expose the contribution of different components in our approach, we conduct ablation experiments on the main components: the speaker-aware units and the generation method of the context window distribution. Speaker-Aware Units: We compare the speaker-aware units with the following modeling methods of speaker dependencies. N-Unit: N-Unit shares the same structure with the inner- (inter-) speaker-aware unit. Different from the speaker-aware units, the keys and values of its inputs are all the previous utterance representations regardless of their inner- and inter-speaker relationships. N-Unit is non-speaker-aware. S-Unit: S-Unit concatenates one-hot vectors, which indicate the speaker of each utterance, to the utterance representations and conducts the same operation as N-Unit. GCNs: Method from [5], where multiple graph convolution layers capture the speaker dependencies. Nodes are utterances and edge weights are obtained by a similarity-based attention module. We add a max pooling layer and a linear layer after it to get the vector z. The inputs of GCNs are the outputs of our utterance encoder. Figure 4 shows the comparison results. We attribute the superior performance of our method over S-Unit to the explicit modeling of inner- and inter-speaker dependencies. S-Unit surpasses N-Unit, indicating that speaker information is indispensable in the context modeling of EDC. Moreover, our speaker-aware units gain over the best of the other three methods by 0.33% and 0.72% F1 scores on the dyadic datasets (IEMOCAP and DailyDialog), less than the gains on the multi-party datasets (MELD and EmoryNLP), 1.03% and 0.85%. We attribute this to the more complex speaker dependencies in multi-party conversations than in dyadic conversations. Our method is better at capturing speaker dependencies when more speakers
Figure 3: Main results. The best F1 scores are highlighted in bold. - signifies the unreported results. CSK is the abbreviation of commonsense knowledge. \(\star\) means the results obtained by our implementation
participated in the conversation. The context window distribution q (defined above) controls the activation of context fields and acts as attention weights to merge the output distributions of the activated context fields. In our method, we adopt an MLP and a top-k normalization layer to generate q. We try several other generation methods of q and compare them with our method. Based on the two functions of q, namely top-k activation of context fields and output distribution weighting, we consider the following variants of our method:
All-Soft: The top-k normalization layer in our method is replaced by a softmax layer to get q, which means that all of the M + 1 context fields are always activated and the output distributions of context fields are merged by attention weights.
Topk-Hard: After the top-k normalization layer, the K non-zero probabilities in q are set to 1/K, meaning that the output distributions of the K activated context fields are weighted equally.
All-Hard: Regardless of the sequential and speaker dependencies, all the probabilities in q are set to 1/(M+1), which means that all of the M + 1 context fields are always activated and the output distributions of context fields are weighted equally.
Topk-Soft: The method used in our proposed approach. F1 scores on the test sets are shown in Figure 5. Compared to All-Hard, All-Soft only has better performance on EmoryNLP. We attribute this to the fact that the attention weights of
Figure 4: Ablation for the speaker-aware units on the test sets of four datasets.
proper context windows are not significantly larger than those of improper ones. Therefore, directly deactivating improper context fields in our approach is more reasonable than activating them and giving them less attention weights. In response to the above analysis, Topk-Hard outperforms All-Hard nearly across all the datasets, indicating again that we should avoid activating improper context fields. Our top-k normalization layer promotes the attention weights of the K-activated context fields, which is signified by the superior performance of Topk-Soft over Topk-Hard. According to the above analysis, our generation method of context window distribution not only avoids activating improper context fields but also gives the activated ones more reasonable attention weights. As a result, our method outperforms other generation methods. How to further reduce the attention weights of improper context fields deserves more exploration in the future.
## 6 Conclusion
To alleviate the context scantiness and context redundancy problems in EDC, we present a new EDC approach capable of recognizing speakers' emotions from variable-length context. In our approach, we first generate a probability distribution over context windows according to sequential and speaker dependencies, where speaker dependencies are explicitly modeled by the newly proposed inner- and inter-speaker units. Then, we introduce a new top-k normalization layer to leverage all top-k best context windows to conduct emotion prediction conditioned on the context window distribution. Elaborate experiments and ablation studies demonstrate that our approach can effectively alleviate the context scantiness and context redundancy problems in EDC while achieving competitive performance on three public datasets. In the future, we intend to improve the context window distribution through external knowledge or auxiliary tasks. We will also explore more effective mechanisms for the detection of proper context windows.
Figure 5: Ablation for the generation method of context window distribution on the test sets of four datasets. |
2303.06300 | Enumeration of non-crossing partitions according to subwords with
repeated letters | An avoidance pattern where the letters within an occurrence of which are
required to be adjacent is referred to as a subword. In this paper, we
enumerate members of the set NC_n of non-crossing partitions of length n
according to the number of occurrences of several infinite families of subword
patterns each containing repeated letters. As a consequence of our results, we
obtain explicit generating function formulas counting the members of NC_n for n
>= 0 according to all subword patterns of length three containing a repeated
letter. Further, simple expressions are deduced for the total number of
occurrences over all members of NC_n for the various families of patterns.
Finally, combinatorial proofs can be given explaining three infinite families
of subword equivalences over NC_n, which generalize the following equivalences:
211 = 221, 1211 = 1121 and 112 = 122. | Mark Shattuck | 2023-03-11T04:09:10Z | http://arxiv.org/abs/2303.06300v1 | # Enumeration of non-crossing partitions according to subwords with repeated letters
###### Abstract.
An avoidance pattern where the letters within an occurrence of which are required to be adjacent is referred to as a _subword_. In this paper, we enumerate members of the set \(NC_{n}\) of non-crossing partitions of length \(n\) according to the number of occurrences of several infinite families of subword patterns each containing repeated letters. As a consequence of our results, we obtain explicit generating function formulas counting the members of \(NC_{n}\) for \(n\geq 0\) according to all subword patterns of length three containing a repeated letter. Further, simple expressions are deduced for the total number of occurrences over all members of \(NC_{n}\) for the various families of patterns. Finally, combinatorial proofs can be given explaining three infinite families of subword equivalences over \(NC_{n}\), which generalize the following equivalences: \(211\equiv 221\), \(1211\equiv 1121\) and \(112\equiv 122\).
Key words and phrases: non-crossing partition, subword pattern, Catalan number, generating function. 2010 Mathematics Subject Classification: 05A15, 05A05.
## 1. Introduction
A collection of disjoint nonempty subsets of a set whose union is the set is known as a _partition_, with the constituent subsets referred to as _blocks_ of the partition. Let \([n]=\{1,2,\ldots,n\}\) for \(n\geq 1\), with \([0]=\varnothing\). The set of partitions of \([n]\) containing exactly \(k\) blocks will be denoted by \(\mathcal{P}_{n,k}\), with \(\mathcal{P}_{n}=\cup_{k=0}^{n}\mathcal{P}_{n,k}\) denoting the set of all partitions of \([n]\). A partition \(\Pi=B_{1}/B_{2}/\cdots/B_{k}\in\mathcal{P}_{n,k}\) is said to be in _standard form_ if its blocks \(B_{i}\) are such that \(\min(B_{i})<\min(B_{i+1})\) for \(1\leq i\leq k-1\). A partition \(\Pi\) in standard form can be represented sequentially by writing \(\pi=\pi_{1}\cdots\pi_{n}\), where \(i\in B_{\pi_{i}}\) for each \(i\in[n]\) (see, e.g., [6]). The sequence \(\pi\) is referred to as the _canonical sequential form_ of the partition \(\Pi\). Then \(\Pi\) in standard form implies \(\pi_{i+1}\leq\max(\pi_{1}\cdots\pi_{i})+1\) for \(1\leq i\leq n-1\), which is known as the _restricted growth_ condition (see, e.g., [11]).
A partition \(\Pi\) is said to be _non-crossing_[4] if its sequential representation \(\pi\) contains no subsequence of the form \(a\)-\(b\)-\(a\)-\(b\), where \(a<b\) (i.e., if \(\pi\) avoids the pattern 1-2-1-2 in the classical sense). Let \(NC_{n}\) denote the set of non-crossing partitions of \([n]\); recall that \(|NC_{n}|=C_{n}\) for all \(n\geq 0\), where \(C_{n}=\frac{1}{n+1}\binom{2n}{n}\) is the \(n\)-th Catalan number. We will denote the Catalan number generating function \(\sum_{n\geq 0}C_{n}x^{n}=\frac{1-\sqrt{1-4x}}{2x}\) by \(C(x)\).
Let \(\tau=\tau_{1}\cdots\tau_{m}\) be a sequence of positive integers whose set of distinct letters comprise \([\ell]\) for some \(1\leq\ell\leq m\). Then the sequence \(\rho=\rho_{1}\cdots\rho_{n}\) is said to _contain_\(\tau\) as a _subword_ (pattern) if some string of consecutive letters of \(\rho\) is order-isomorphic to \(\tau\). That is, there exists an index \(i\in[n-m+1]\) such that \(\rho_{i}\rho_{i+1}\cdots\rho_{i+m-1}\) is isomorphic to \(\tau\). If no such index \(i\) exists, then \(\rho\)_avoids_\(\tau\) as a subword.
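These definitions are easy to check by brute force for small \(n\). The following Python snippet (an illustrative aside, not part of the original paper) enumerates \(NC_{n}\) as restricted growth sequences that avoid 1-2-1-2 and counts subword occurrences; later snippets assume these helpers have been saved as a module `nc_helpers.py` (a file name chosen here purely for convenience).

```python
from math import comb

def restricted_growth_sequences(n):
    """Yield the canonical sequential forms pi_1...pi_n of all partitions of [n]."""
    def extend(prefix, mx):
        if len(prefix) == n:
            yield tuple(prefix)
            return
        for v in range(1, mx + 2):       # restricted growth: next letter <= max + 1
            yield from extend(prefix + [v], max(mx, v))
    if n == 0:
        yield ()
    else:
        yield from extend([1], 1)

def is_noncrossing(pi):
    """pi is non-crossing iff it has no subsequence a-b-a-b with a < b."""
    n = len(pi)
    for i in range(n):
        for j in range(i + 1, n):
            if pi[j] <= pi[i]:
                continue
            for k in range(j + 1, n):
                if pi[k] != pi[i]:
                    continue
                if any(pi[l] == pi[j] for l in range(k + 1, n)):
                    return False
    return True

def noncrossing_partitions(n):
    return [p for p in restricted_growth_sequences(n) if is_noncrossing(p)]

def standardize(word):
    """Order-isomorphic pattern of a word, e.g. (5, 5, 2) -> (2, 2, 1)."""
    rank = {v: i + 1 for i, v in enumerate(sorted(set(word)))}
    return tuple(rank[v] for v in word)

def occurrences(pi, tau):
    """Number of occurrences of tau as a subword (consecutive letters) of pi."""
    m = len(tau)
    return sum(standardize(pi[i:i + m]) == tuple(tau)
               for i in range(len(pi) - m + 1))

if __name__ == "__main__":
    # sanity check: |NC_n| is the n-th Catalan number
    for n in range(8):
        assert len(noncrossing_partitions(n)) == comb(2 * n, n) // (n + 1)
```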
Here, we will be interested in counting the members of \(NC_{n}\) according to the number of occurrences of certain subword patterns, focusing on several infinite families of patterns. Let
\(\mu_{\tau}(\pi)\) denote the number of occurrences of the subword \(\tau\) in the partition \(\pi\). We compute the generating function \(F=F_{\tau}\) for the distribution of \(\tau\) on \(NC_{n}\) where
\[F=\sum_{n\geq 0}\left(\sum_{\pi\in NC_{n}}q^{\mu_{\tau}(\pi)}\right)x^{n}\]
in several cases when \(\tau\) has one or more repeated letters. This extends recent work initiated in [9] which focused on subwords where all of the letters in a pattern were distinct. We remark that other finite discrete structures with sequential representations that have been enumerated according to the number of subwords include \(k\)-ary words [1], set partitions [10] and involutions [7]. For examples of other types of statistics which have been studied on non-crossing partitions, we refer the reader to [5, 8, 12, 15, 16].
This paper is organized as follows. In the next section, we enumerate members of \(NC_{n}\) according to four infinite families of subword patterns and compute the corresponding generating function \(F\) in each case. Simple formulas for the total number of occurrences on \(NC_{n}\) for the various patterns are deduced from our formulas for \(F\). Further, an explicit bijection is defined which demonstrates the equivalence of the subwords \((\rho+1)1^{a}\) and \((\rho^{\prime}+1)1^{a^{\prime}}\) of the same length. In the third section, the pattern \(12\cdots(m-1)m^{a}\) is treated using the _kernel method_[3] and a formula for the generating function of its joint distribution with an auxiliary parameter on \(NC_{n}\) is found. Finally, a bijection is given which demonstrates the equivalence of \(1^{a}23\cdots m\) and \(12\cdots(m-1)m^{a}\) as subwords on \(NC_{n}\) for all \(a,m\geq 2\).
As special cases of our results, we obtain \(F_{\tau}\) for all \(\tau\) of length three containing a repeated letter. See Table 1 below, where the equation satisfied by \(F_{\tau}\) is given for each \(\tau\). Note that the case 212 is trivial since any partition \(\pi\) containing a string \(x\) of the form \(x=bab\) where \(a<b\) must contain an occurrence of 1-2-1-2, upon considering the leftmost occurrence of the letter \(a\) in \(\pi\) together with \(x\).
## 2. Distributions of some infinite families of patterns
| Subword | Generating function equation | Reference |
| --- | --- | --- |
| 111 | \(x(1-qx+(q-1)x^{2})F^{2}=(1-qx+(q-1)x^{3})(F-1)\) | Corollary 2 |
| 112 | \(x(1+(q-1)x)F^{2}=(1+(q-1)x^{2})F-1\) | Corollary 2 |
| 121 | \(xF^{2}=(1-(q-1)x^{2})(F-1)\) | Theorem 6 |
| 122 | \(x(1+(q-1)x)F^{2}=(1+(q-1)x^{2})F-1\) | Theorem 10 |
| 211 | \(x(1+(q-1)x)F^{2}=(1+2(q-1)x^{2})F-1-(q-1)x^{2}\) | Theorem 4 |
| 212 | \(xF^{2}=F-1\) | Trivial |
| 221 | \(x(1+(q-1)x)F^{2}=(1+2(q-1)x^{2})F-1-(q-1)x^{2}\) | Theorem 4 |

Table 1. Generating functions \(F=F_{\tau}\) for \(\tau\) of length three containing a repeated letter

We first consider the patterns \(\tau=1^{a}\) and \(\rho=1^{b}2\), where \(a,b\geq 1\), and treat them together as a joint distribution on \(NC_{n}\). We shall determine a formula for the generating function (gf) of this distribution given by
\[\sum_{n\geq 0}\left(\sum_{\pi\in NC_{n}}p^{\mu_{\tau}(\pi)}q^{\mu_{\rho}(\pi)} \right)x^{n},\]
which we will denote by \(F\). We will make use of the _symbolic_ enumeration method (see, e.g., [2]) in finding \(F\).
**Theorem 1**.: _If \(a\geq b\geq 1\), then the generating function \(F\) enumerating the members of \(NC_{n}\) for \(n\geq 0\) jointly according to the number of occurrences of \(1^{a}\) and \(1^{b}2\) satisfies_
\[(x-px^{2}+q(p-1)x^{a}+(q-1)(1-px)x^{b})F^{2}=(1-px+q(p-1)x^{a}+(q-1)(1-px)x^{b})F-1+px-(p-1)x^{a}. \tag{1}\]
_If \(1\leq a<b\), then \(F\) satisfies_
\[(x-px^{2}+(p-1)x^{a}+(q-1)(1-x)p^{b-a+1}x^{b})F^{2}=(1-px+(p-1)x^{a}+(q-1)(1-x)p^{b-a+1}x^{b})F-1+px-(p-1)x^{a}. \tag{2}\]
Proof.: First assume \(a\geq b\geq 1\) and consider the following cases on \(\pi\in NC_{n}\): (i) \(\pi=1^{n}\) for some \(0\leq n\leq a-1\), (ii) \(\pi=1^{n}\), where \(n\geq a\), (iii) \(\pi=1^{r}\alpha\beta\), where \(1\leq r\leq b-1\), \(\alpha\) is nonempty and contains no \(1\)'s and \(\beta\) starts with \(1\) if nonempty, (iv) \(\pi\) as in (iii), but where \(b\leq r\leq a-1\), or (v) \(\pi\) as in (iii), but where \(r\geq a\). Combining cases (i)-(v) implies \(F\) is determined by
\[F=\frac{1-x^{a}}{1-x}+\frac{px^{a}}{1-px}+\left(\frac{x-x^{b}}{1-x}+q\frac{x^{ b}-x^{a}}{1-x}+pq\frac{x^{a}}{1-px}\right)F(F-1).\]
Note that the sections \(\alpha\) and \(\beta\) of \(\pi\) are determined by the factors \(F-1\) and \(F\), respectively, in cases (iii)-(v). Further, \(r\geq b\) in (iv) and (v) implies that there is an extra occurrence of \(\rho\) (accounted for by the lone \(q\) factor) arising due to the initial run of \(1\)'s within \(\pi\) and the first letter of \(\alpha\). After simplification, the preceding equation for \(F\) rearranges to give (1).
If \(1\leq a<b\), then by similar reasoning we have that \(F\) satisfies
\[F=\frac{1-x^{a}}{1-x}+\frac{px^{a}}{1-px}+\left(\frac{x-x^{a}}{1-x}+\frac{px^ {a}-p^{b-a+1}x^{b}}{1-px}+\frac{p^{b-a+1}qx^{b}}{1-px}\right)F(F-1),\]
which simplifies to give (2).
Taking \(q=1\) and \(p=1\), respectively, in Theorem 1 with \(a=b=m\), and solving for \(F\) (replacing \(p\) by \(q\) in the resulting formula in the first case), yields the following result.
**Corollary 2**.: _The generating functions counting members of \(NC_{n}\) for \(n\geq 0\) according to the number of occurrences of the patterns \(1^{m}\) and \(1^{m}2\) where \(m\geq 1\) are given respectively by_
\[\frac{1-qx+(q-1)x^{m}-\sqrt{(1-qx+(q-1)x^{m})((1-4x)(1-qx)-3(q-1)x^{m})}}{2x(1 -qx+(q-1)x^{m-1})}\]
_and_
\[\frac{1+(q-1)x^{m}-\sqrt{(1-(q-1)x^{m})^{2}-4x}}{2x(1+(q-1)x^{m-1})}.\]
Differentiating the formulas in Corollary 2 with respect to \(q\), and extracting the coefficient of \(x^{n}\), yields simple expressions for the total number of occurrences of the respective subwords on \(NC_{n}\).
**Corollary 3**.: _The total number of occurrences of \(1^{m}\) and \(1^{m}2\) within all the members of \(NC_{n}\) for \(n\geq m\geq 1\) are given by \(\binom{2r}{r+1}\) and \(\binom{2r-1}{r+1}\), respectively, where \(r=n-m+1\)._
Proof.: It is also possible to provide a combinatorial explanation of these formulas. For the first, suppose that there is a letter \(x\) in the \(i\)-th position within a member of \(NC_{r}\), where \(1\leq i\leq r\). Then insert \(m-1\) additional copies of \(x\) to directly follow the one already present in position \(i\) and mark the occurrence of the subword \(1^{m}\) in the resulting member of \(NC_{n}\). Note that all occurrences of \(1^{m}\) within members of \(NC_{n}\) arise uniquely in this way, which yields \(rC_{r}=\binom{2r}{r+1}\) total occurrences. For the second formula, consider an ascent \(xy\) in \(\pi\in NC_{r}\) and insert \(m-1\) additional copies of \(x\) between \(x\) and \(y\). This results in an occurrence of \(\tau=1^{m}2\) within a member of \(NC_{n}\) in which the role of the '2' is played by \(y\). Thus, counting occurrences of \(\tau\) in \(NC_{n}\) is equivalent to counting ascents in \(NC_{r}\). Note that the number of ascents in a non-crossing partition \(\pi\) equals \(\mu(\pi)-1\) for all \(\pi\), where \(\mu(\pi)\) denotes the number of blocks of \(\pi\). Since \(\mu\) has a Narayana distribution on \(NC_{r}\) (see, e.g., [13, A001263]), it follows that the number of blocks over \(NC_{r}\) equals \(\sum_{i=1}^{r}\frac{i}{r}\binom{r}{i}\binom{r}{i-1}=\binom{2r-1}{r}\). Since the number of ascents is always one less than the number of blocks, we have that the total number of occurrences of \(\tau\) in all the members of \(NC_{n}\) is given by \(\binom{2r-1}{r}-C_{r}=\binom{2r-1}{r+1}\).
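Both formulas are easy to confirm by brute force for small \(n\). The snippet below is an illustrative check only; it assumes the helper functions from the sketch in the Introduction have been saved as `nc_helpers.py`.

```python
from math import comb
from nc_helpers import noncrossing_partitions, occurrences  # brute-force helpers from the earlier sketch

def total_occurrences(n, tau):
    return sum(occurrences(pi, tau) for pi in noncrossing_partitions(n))

for m in (1, 2, 3):
    for n in range(m, 9):
        r = n - m + 1
        assert total_occurrences(n, (1,) * m) == comb(2 * r, r + 1)
        assert total_occurrences(n, (1,) * m + (2,)) == comb(2 * r - 1, r + 1)
```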
Given a sequence \(\rho\) and a number \(x\), let \(\rho+x\) denote the sequence obtained by adding \(x\) to each entry of \(\rho\). Let \(\tau=(\rho+1)1^{b}\), where \(\rho\) is a sequential representation of a non-crossing partition of length \(a\geq 1\). Assume further that \(\rho\) starts with a single \(1\) if \(b\geq 2\) (with no such restriction if \(b=1\)). Then we have the following general formula for \(F_{\tau}\).
**Theorem 4**.: _Let \(\tau=(\rho+1)1^{b}\), where \(\rho\) is of length \(a\geq 1\) as described and \(b\geq 1\). Then the generating function counting the members of \(NC_{n}\) for \(n\geq 0\) according to the number of occurrences of \(\tau\) is given by_
\[\frac{1+2(q-1)x^{a+b-1}-\sqrt{1-4x-4(q-1)x^{a+b}}}{2x(1+(q-1)x^{a+b-2})}.\]
Proof.: Let \(G=G_{\tau}\) be the gf that enumerates \(\pi\in NC_{n}\) for \(n\geq 0\) according to the number of occurrences of \(\tau\) in \(\pi 0^{b}\) and \(F=F_{\tau}\) denote the usual gf. We first establish the relation
\[G=F+(q-1)x^{a-1}(F-1). \tag{3}\]
To do so, first note that \(F\) and \(G\) assign the same \(q\)-weights to non-crossing partitions except for those of the form \(\pi=\alpha\beta\), where \(\beta\) corresponds to an occurrence of the subword \(\rho\). We now describe how such partitions can be formed. Let \(x\) denote the first letter of \(\beta\). Let \(\rho=\rho_{1}\rho_{2}\cdots\rho_{a}\) and \(\rho^{\prime}=\rho_{2}\cdots\rho_{a}\). Let \(\rho^{*}\) be the sequence obtained from \(\rho^{\prime}\) by replacing each \(1\) in \(\rho^{\prime}\) with \(x\) and each letter \(i>1\) with \(i+m-1\), where \(m=\max(\alpha\cup\{x\})\). Then appending \(\rho^{*}\) to the partition \(\alpha x\) gives \(\pi\) of the form stated above, with \(\alpha x\) representing an arbitrary member of \(NC_{n}\) for some \(n\geq 1\). Further, since \(\rho\) starts with a single \(1\) if \(b>1\), we have that appending \(\rho^{*}\) as described to \(\alpha x\) does not introduce an occurrence of \(\tau\) involving the last letters of \(\alpha x\) and the first of \(\rho^{*}\) (as \(\rho^{*}\) must start with \(m+1\) if nonempty when \(b>1\)). Then \(F\) and \(G\) differ with respect to the assigned \(q\)-weight (only) on partitions of the form \(\pi=\alpha x\rho^{*}\), where \(\alpha\) and \(\rho^{*}\) are as described. Such \(\pi\) are enumerated by \(x^{a-1}(F-1)\), since \(\alpha x\) is non-empty and arbitrary and the \(a-1\) appended letters comprising \(\rho^{*}\) are determined once \(\alpha\) is specified. Subtracting the weight of such \(\pi\) from the count for \(G\), and adding them back with an extra factor of \(q\), implies (3).
We now write a formula for \(F\). To do so, note that \(\pi\in NC_{n}\) for some \(n\geq 0\) may be expressed as (i) \(\pi=1^{n}\), (ii) \(\pi=1^{r}\alpha\), where \(r\geq 1\) and \(\alpha\) is nonempty and does not contain \(1\), (iii) \(\pi=1^{r}\alpha 1^{s}\beta\), where \(0\leq s\leq b-2\) and \(\beta\) is nonempty and starts with exactly one \(1\), (iv) \(\pi=1^{r}\alpha 1^{b-1}\beta\), where \(\beta\) is nonempty but may start with any positive number of \(1\)'s in
this case. Note that the gf for all nonempty non-crossing partitions starting with a single \(1\) according to the number of occurrences of \(\tau\) is given by \(F-1-x(F-1)=(1-x)(F-1)\), by subtraction. Hence, case (iii) is seen to contribute \(\frac{x}{1-x}(F-1)(1+x+\cdots+x^{b-2})(1-x)(F-1)\) towards \(F\) if \(b\geq 2\), with (iii) not applicable (i.e., it is subsumed by (iv)) if \(b=1\). In case (iv), one gets a contribution of \(\frac{x}{1-x}(G-1)x^{b-1}(F-1)\) towards \(F\) for all \(b\geq 1\), where the \(G-1\) factor accounts for the nonempty section \(\alpha\), as it is followed by (at least) \(b\) letters \(1\). Combining (i)-(iv) then gives
\[F=\frac{1}{1-x}+\frac{x}{1-x}(F-1)+\frac{x-x^{b}}{1-x}(F-1)^{2}+\frac{x^{b}}{1 -x}(F-1)(G-1). \tag{4}\]
To solve (3) and (4), it is easier to consider \(U=F-1\). Then (3) implies \(G-1=(1+(q-1)x^{a-1})U\) and thus (4) may be rewritten as
\[U=\frac{x}{1-x}\left(1+U+(1-x^{b-1})U^{2}+x^{b-1}(1+(q-1)x^{a-1})U^{2}\right). \tag{5}\]
Solving for \(U\) in (5) gives
\[U=\frac{1-2x-\sqrt{1-4x-4(q-1)x^{a+b}}}{2x(1+(q-1)x^{a+b-2})},\]
which implies the desired formula for \(F=U+1\).
Note that the formula for \(F_{\tau}\) in Theorem 4 depends only on the length of the subword \(\tau\). A bijective proof showing the equivalence of \(\tau\) and \(\tau^{\prime}\) of the same length is given below. In particular, when \(|\tau|=3\), we have \(211\equiv 221\) as subwords on \(NC_{n}\) for all \(n\), with the common gf formula given by
\[\frac{1+2(q-1)x^{2}-\sqrt{1-4x-4(q-1)x^{3}}}{2x(1+(q-1)x)}.\]
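The equivalence \(211\equiv 221\) can also be confirmed empirically for small \(n\); the check below is illustrative only and assumes the brute-force helpers from the earlier sketch are available as `nc_helpers.py`.

```python
from collections import Counter
from nc_helpers import noncrossing_partitions, occurrences  # brute-force helpers from the earlier sketch

def distribution(n, tau):
    """Multiset of mu_tau values over NC_n."""
    return Counter(occurrences(pi, tau) for pi in noncrossing_partitions(n))

for n in range(9):
    assert distribution(n, (2, 1, 1)) == distribution(n, (2, 2, 1))
```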
Differentiating the formula in Theorem 4 gives the following.
**Corollary 5**.: _If \(n\geq a+b-1\), then the total number of occurrences of \(\tau=(\rho+1)1^{b}\) as described above within all the members of \(NC_{n}\) is given by \(\binom{2r-2}{r+1}\), where \(r=n-a-b+2\)._
_Remark:_ For each \(m\geq 1\), we have from Corollary 3 that the nonzero values in the sequences for the total number of occurrences of \(1^{m}\) and \(1^{m}2\) in \(NC_{n}\) for \(n\geq 1\) correspond respectively to A001791 and A002054 in [13]. Corollary 5 implies the total number of occurrences of \((\rho+1)1^{b}\) corresponds to A002694.
Suppose \(\tau=(\rho+1)1^{b}\) is as described above with \(|\rho|=a\) and \(\tau^{\prime}=(\rho^{\prime}+1)1^{b^{\prime}}\), where \(\rho^{\prime}\) is of length \(a^{\prime}\geq 1\) and satisfies the same requirements as \(\rho\) above, \(b^{\prime}\geq 1\) and \(a^{\prime}+b^{\prime}=a+b\).
**Bijective proof of \(\tau\equiv\tau^{\prime}\) as subwords on \(NC_{n}\):**
Clearly, we may assume \(|\tau|=a+b\geq 3\). We first prove the result when \(b=b^{\prime}=1\). Let \(\pi\in NC_{n}\), represented sequentially. Let \(\mathbf{s}\) denote a string of \(\pi\) of the form \(\mathbf{s}=u\alpha v\), where \(\alpha\neq\varnothing\) and \(1\leq v<u\leq\min(\alpha)\). If \(\mathbf{s}\) corresponds to an occurrence of \(\tau\) (\(\tau^{\prime}\)), then we will refer to \(\mathbf{s}\) as a \(\tau\)_-string_ (\(\tau^{\prime}\)-_string_, respectively). We wish to define a bijection \(f\) on \(NC_{n}\) in which partitions containing a given number of \(\tau\)-strings are mapped to those containing the same number of \(\tau^{\prime}\)-strings, and vice versa. If no \(\tau\)- or \(\tau^{\prime}\)-strings exist (i.e., if \(\pi\) avoids both \(\tau\) and \(\tau^{\prime}\) as subwords), then let \(f(\pi)=\pi\). So let \(x_{1},x_{2},\ldots,x_{r}\) where \(r\geq 1\) denote the complete combined set of \(\tau\)- and \(\tau^{\prime}\)-strings in a left-to-right scan of the sequence \(\pi\). Note that since
\(b=b^{\prime}=1\), the adjacent strings \(x_{i}\) and \(x_{i+1}\) for some \(1\leq i\leq r-1\) are either disjoint or share a single letter.
We now change each \(x_{i}\) to the other option regarding containment of \(\tau\) or \(\tau^{\prime}\). We will first change \(x_{1}\) and then subsequently work on \(x_{2},x_{3},\ldots,x_{r}\), going from left to right. Suppose first that \(x_{1}\) is a \(\tau\)-string. Then we will change \(x_{1}\) to a \(\tau^{\prime}\)-string \(y_{1}\) as follows. Similar reasoning will apply to the case when \(x_{1}\) is a \(\tau^{\prime}\)-string. Suppose \(\tau\) has \(s+1\) distinct letters, where \(s\geq 1\), and that the \(\tau\)-string \(x_{1}\) makes use of the actual letters \(v<u=u_{1}<u_{2}<\cdots<u_{s}\). Note that \(\rho\) a partition and \(\pi\) non-crossing implies \(u_{2},\ldots,u_{s}\) represent the leftmost occurrences of the letters of their respective kinds within \(\pi\) and hence \(u_{\ell}=u_{2}+\ell-2\) for \(2\leq\ell\leq s\). Suppose \(\tau^{\prime}\) has \(t+1\) distinct letters, where \(t\geq 1\). If \(s\geq t\), then replace the letters in \(x_{1}\) with a sequence that is isomorphic to \(\tau^{\prime}\) in which the roles of \(1,2,\ldots,t+1\) are played by \(v<u_{1}<\cdots<u_{t}\). Further, if \(s>t\), then the letters \(u_{t+1}<\cdots<u_{s}\) are not needed in this replacement, in which case, we reduce each letter of \(\pi\) belonging to \(\{u_{s}+1,u_{s}+2,\ldots\}\), all of which must necessarily occur to the right of \(x_{1}\) within \(\pi\), by the amount \(s-t\). Note that \(\pi\) non-crossing and \(\tau\) starting with \(2\) and ending in \(1\) implies that the letters \(u_{t+1},\ldots,u_{s}\) within \(x_{1}\) do not occur elsewhere in \(\pi\).
On the other hand, if \(t>s\), then we use all of the distinct letters occurring in \(x_{1}\), together with \(u_{s}+1,\ldots,u_{s}+t-s\), when performing the replacement. In this case, we must increase any letters of \(\pi\) greater than or equal to \(u_{s}+1\), all of which must occur to the right of \(x_{1}\), by the amount \(t-s\) in order to accommodate the new letters used. In all cases, let \(y_{1}\) denote the \(\tau^{\prime}\)-string that results from making the replacement as described and let \(\pi_{1}\) be the resulting member of \(NC_{n}\). Note that the combined set of \(\tau\)- and \(\tau^{\prime}\)-strings in \(\pi_{1}\) is given by \(y_{1},x_{2},\ldots,x_{r}\). We then repeat the process described above on \(\pi_{1}\) in replacing \(x_{2}\) with a string \(y_{2}\) that represents the other option concerning containment of \(\tau\) or \(\tau^{\prime}\), and let \(\pi_{2}\) denote the resulting member of \(NC_{n}\). Likewise, we continue with \(x_{3},\ldots,x_{r}\), and convert them sequentially to \(y_{3},\ldots,y_{r}\), letting \(\pi_{3},\ldots,\pi_{r}\) denote the corresponding partitions that arise.
Let \(f(\pi)=\pi_{r}\) and we show that \(f\) can be reversed. To do so, first note that the positions of the first and last letters of the strings \(y_{1},\ldots,y_{r}\) in \(\pi_{r}\) are the same as the corresponding positions within \(x_{1},\ldots,x_{r}\) in \(\pi\), as they are seen to be invariant in each step of the transition from \(\pi\) to \(\pi_{r}\). This follows from the fact that the first and last letters within an occurrence \(z\) of \(\tau\) or \(\tau^{\prime}\) are the two smallest letters in \(z\). Therefore, the inverse of \(f\) may be found by reversing each of the transitions \(\pi_{i}\) to \(\pi_{i+1}\) for \(0\leq i\leq r-1\), where \(\pi_{0}=\pi\), in reverse order (i.e., starting with the \(i=r-1\) transition and ending with \(i=0\)). Hence, we have \(\mu_{\tau}(\pi)=\mu_{\tau^{\prime}}(f(\pi))\) for all \(\pi\in NC_{n}\) when it is assumed \(b=b^{\prime}=1\).
To complete the proof, it then suffices to show \(2\sigma 1^{b}\equiv 2^{b}\sigma 1\), where \(b\geq 2\) and \(2\sigma\) is a nonempty non-crossing partition (using the letters in \(\{2,3,\ldots\}\)) such that \(\sigma\) starts with \(3\) if nonempty. To establish this equivalence, let \(\pi=\pi_{1}\cdots\pi_{n}\in NC_{n}\) and we consider (maximal) strings \(\mathbf{p}\) within \(\pi\) of the form
\[\mathbf{p}=u_{1}^{r_{1}}\sigma_{1}u_{2}^{r_{2}}\sigma_{2}\cdots u_{t}^{r_{t}} \sigma_{t}u_{t+1}^{r_{t+1}},\]
where \(t,r_{1},\ldots,r_{t}\geq 1\), \(r_{t+1}\geq 0\), \(u_{1}>u_{2}>\cdots>u_{t}\) (with \(u_{t}>u_{t+1}\) if \(r_{t+1}>0\) and \(u_{t+1}=1\) if \(r_{t+1}=0\)) and \(u_{i}\sigma_{i}\) isomorphic to \(2\sigma\) for \(1\leq i\leq t\). Note that if \(r_{t+1}=0\), then either \(u_{t}\sigma_{t}\) contains the last letter of \(\pi\) or the successor of the final letter of \(u_{t}\sigma_{t}\) is greater than or equal to \(u_{t}\) if \(\sigma\) is nonempty (with the successor being strictly greater if \(\sigma\) is empty). Further, if \(r_{t+1}>0\), then it is understood that \(\sigma\) is nonempty and that the string \(u_{t+1}^{r_{t+1}}\) is not directly followed by a sequence of letters \(\alpha\) such that \(u_{t+1}\alpha\) is isomorphic to \(2\sigma\). We replace each such
string \(\mathbf{p}\) with \(\mathbf{p}^{\prime}\), where
\[\mathbf{p}^{\prime}=\begin{cases}u_{1}^{r_{t+1}}\sigma_{1}u_{2}^{r_{t}}\sigma_{2} \cdots u_{t}^{r_{2}}\sigma_{t}u_{t+1}^{r_{1}},\text{ if }r_{t+1}>0,\\ u_{1}^{r_{t}}\sigma_{1}u_{2}^{r_{t-1}}\sigma_{2}\cdots u_{t}^{r_{1}}\sigma_{t}u_ {t+1}^{r_{t+1}},\text{ if }r_{t+1}=0.\end{cases}\]
Let \(g(\pi)\) denote the member of \(NC_{n}\) that results from replacing each string \(\mathbf{p}\) with \(\mathbf{p}^{\prime}\) as described. Then \(g\) is an involution on \(NC_{n}\) that replaces each occurrence of the pattern \(2\sigma 1^{b}\) with \(2^{b}\sigma 1\) and vice versa, which implies the desired equivalence and completes the proof.
_Remarks:_ When \(|\tau|=|\tau^{\prime}|=3\), then the bijection \(f\) above shows \(231\equiv 221\). For example, let \(\pi=1\underline{2311451678\overline{6619}}\in NC_{15}\), where the occurrences of \(231\) and \(221\) are underlined and overlined, respectively. Then we have
\[\pi_{0}\to\pi_{1}=1\overline{221}1\underline{341}5\underline{67}\overline{551}8\to\pi_{2}=1\overline{221}1\overline{331}4\underline{56}\overline{441}7\to\pi_{3}=1\overline{221}1\overline{331}4\overline{55}\,\overline{441}6\to\pi_{4}=1\overline{221}1\overline{331}4\overline{55}\,\underline{461}7,\]
and thus \(f(\pi)=\pi_{4}\in NC_{15}\). Note that \(\pi\) has three occurrences of \(231\) and one of \(221\), whereas \(f(\pi)\) has three occurrences of \(221\) and one of \(231\). When \(\tau\) and \(\tau^{\prime}\) are each of length three, the bijection \(g\) shows \(221\equiv 211\). For example, if \(n=12\) and \(\pi=122322114115\in NC_{12}\), then \(g(\pi)=122332214415\). Note that \(\pi\) contains one occurrence of \(221\) and three of \(211\), whereas \(g(\pi)\) contains three occurrences of \(221\) and one of \(211\). Finally, the mapping \(g\) is seen to preserve the number of blocks of a partition, whereas \(f\) does not in general.
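The occurrence counts quoted in these two examples can be verified directly (an illustrative check; the brute-force helpers from the earlier sketch are assumed to be available as `nc_helpers.py`).

```python
from nc_helpers import occurrences  # brute-force helpers from the earlier sketch

pi  = (1, 2, 3, 1, 1, 4, 5, 1, 6, 7, 8, 6, 6, 1, 9)   # the NC_15 example for f
fpi = (1, 2, 2, 1, 1, 3, 3, 1, 4, 5, 5, 4, 6, 1, 7)   # f(pi) = pi_4 as displayed
assert (occurrences(pi, (2, 3, 1)), occurrences(pi, (2, 2, 1))) == (3, 1)
assert (occurrences(fpi, (2, 3, 1)), occurrences(fpi, (2, 2, 1))) == (1, 3)

sig  = (1, 2, 2, 3, 2, 2, 1, 1, 4, 1, 1, 5)           # the NC_12 example for g
gsig = (1, 2, 2, 3, 3, 2, 2, 1, 4, 4, 1, 5)           # g(sig)
assert (occurrences(sig, (2, 2, 1)), occurrences(sig, (2, 1, 1))) == (1, 3)
assert (occurrences(gsig, (2, 2, 1)), occurrences(gsig, (2, 1, 1))) == (3, 1)
```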
In the next result, we enumerate members of \(NC_{n}\) with respect to a family of subword patterns generalizing \(121\).
**Theorem 6**.: _Let \(\tau=1^{a}(\rho+1)1^{b}\), where \(a,b\geq 1\) and \(\rho\) is the sequential representation of a non-crossing partition of length \(m\) for some \(m\geq 1\). Then the generating function counting the members of \(NC_{n}\) for \(n\geq 0\) according to the number of occurrences of \(\tau\) is given by_
\[\frac{\left(1-x+(1-q)(1-x^{s})x^{m+t}\right)\left(1-\sqrt{1-\frac{4x(1-x+(1-q) (1-x^{s-1})x^{m+t})}{1-x+(1-q)(1-x^{s})x^{m+t}}}\right)}{2x\left(1-x+(1-q)(1-x ^{s-1})x^{m+t}\right)},\]
_where \(s=\min\{a,b\}\) and \(t=\max\{a,b\}\)._
Proof.: First assume \(b\geq a>1\). To find a formula for \(F=F_{\tau}\) in this case, we refine \(F\) by letting \(F_{i}\) for \(i\geq 1\) denote the restriction of \(F\) to those partitions starting with a sequence of \(1\)'s of length exactly \(i\). Then we have \(F_{1}=x+x(F-1)+x(F-1)^{2}=x(F^{2}-F+1)\), upon considering whether or not a partition enumerated by \(F_{1}\) contains one or more runs of \(1\). By the definitions, we have \(F_{i+1}=xF_{i}\) for all \(i\neq a-1\), upon considering separately the cases \(1\leq i\leq a-2\) and \(i\geq a\), since prepending an extra \(1\) to a member of \(NC_{n}\) not starting with a run of \(1\) of length \(a-1\) does not introduce an occurrence of \(\tau\). We now write a formula for \(F_{a}\). We consider the following cases on \(\pi\in NC_{n}\) where \(n\geq a\): (i) \(\pi=1^{a}\pi^{\prime}\), where \(\pi^{\prime}\) contains no \(1\)'s and is possibly empty, (ii) \(\pi=1^{a}\alpha\beta\), where \(\alpha\) is nonempty and contains no \(1\)'s with \(\alpha\neq\rho+1\) and \(\beta\) is nonempty starting with \(1\), (iii) \(\pi=1^{a}\alpha\beta\), where \(\alpha=\rho+1\) and \(\beta\) is as before. Note that \(\beta\) in case (ii) is accounted for by \(F-1\), whereas in (iii), we need
\[\sum_{i=1}^{b-1}F_{i}+q\sum_{i\geq b}F_{i}=\sum_{i=1}^{a-1}x^{i-1}F_{1}+\sum_{i= a}^{b-1}x^{i-a}F_{a}+q\sum_{i\geq b}x^{i-a}F_{a}=\frac{1-x^{a-1}}{1-x}F_{1}+ \frac{1+(q-1)x^{b-a}}{1-x}F_{a}.\]
Thus, combining cases (i)-(iii), we have
\[F_{a}=x^{a}F+x^{a}(F-1-x^{m})(F-1)+x^{m+a}\left(\frac{1-x^{a-1}}{1-x}F_{1}+\frac{ 1+(q-1)x^{b-a}}{1-x}F_{a}\right),\]
which implies
\[F_{a}=\frac{x^{a}(1+x^{m})+x^{a}(F-1-x^{m})F+\frac{x^{m+a}(1-x^{a-1})}{1-x}F_{1 }}{1-\frac{x^{m+a}(1+(q-1)x^{b-a})}{1-x}}. \tag{6}\]
We use the same cases (i)-(iii) in determining \(F\) (except that the initial run of 1's can have arbitrary length in (i) and any length \(\geq a\) in (ii) and (iii)), along with an additional case where \(\pi\) is of the form \(\pi=1^{r}\alpha\beta\), wherein \(1\leq r\leq a-1\) and \(\alpha\) and \(\beta\) are nonempty with \(\alpha\) not containing 1 and \(\beta\) starting with 1. This yields
\[F =1+\frac{x}{1-x}F+\frac{x-x^{a}}{1-x}(F-1)^{2}+\frac{x^{a}}{1-x} (F-1-x^{m})(F-1)\] \[\quad+\frac{x^{m+a}}{1-x}\left(\sum_{i=1}^{b-1}F_{i}+q\sum_{i \geq b}F_{i}\right)\] \[=1+\frac{x}{1-x}F+\frac{x-x^{a}}{1-x}(F-1)^{2}+\frac{x^{a}}{1-x} (F-1-x^{m})(F-1)+\frac{x^{m+a}(1-x^{a-1})}{(1-x)^{2}}F_{1} \tag{7}\] \[\quad+\frac{x^{m+a}(1+(q-1)x^{b-a})}{1-x}\cdot\frac{x^{a}(1+x^{m} )+x^{a}(F-1-x^{m})F+\frac{x^{m+a}(1-x^{a-1})}{1-x}F_{1}}{1-x-x^{m+a}(1+(q-1)x^ {b-a})},\]
where we have made use of (6).
Note that the \(F_{1}\) coefficient in (7) may be simplified to give
\[\frac{x^{m+a}(1-x^{a-1})}{(1-x)^{2}}+\frac{x^{2(m+a)}(1-x^{a-1})(1 +(q-1)x^{b-a})}{(1-x)^{2}(1-x-x^{m+a}(1+(q-1)x^{b-a}))}\] \[=\frac{x^{m+a}(1-x^{a-1})}{(1-x)^{2}}\left(1+\frac{x^{m+a}(1+(q-1 )x^{b-a})}{1-x-x^{m+a}(1+(q-1)x^{b-a})}\right)\] \[=\frac{x^{m+a}(1-x^{a-1})}{(1-x)(1-x-x^{m+a}(1+(q-1)x^{b-a}))}.\]
Thus, upon clearing fractions in (7), we have
\[(1-x-\ell)(F-1) =x(1-x-\ell)F^{2}-x^{m+a}(1-x-\ell)(F-1)\] \[\quad+\ell x^{a}(1+x^{m}+(F-1-x^{m})F)+x^{m+a}(1-x^{a-1})F_{1},\]
where \(\ell=x^{m+a}(1+(q-1)x^{b-a})\). By the formula for \(F_{1}\), the last equation after several algebraic steps yields
\[(1-x+(1-q)(1-x^{a})x^{m+b})(F-1)=x(1-x+(1-q)(1-x^{a-1})x^{m+b})F^{2},\]
which leads to the stated formula for \(F\) in this case.
Now let us consider the case \(a=1\) and \(b\geq 1\). By similar reasoning as above, we have
\[F=1+\frac{x}{1-x}F+\frac{x}{1-x}(F-1-x^{m})(F-1)+\frac{x^{m+1}(1+(q-1)x^{b-1}) }{(1-x)^{2}}F_{1},\]
\[F_{1}=xF+x(F-1-x^{m})(F-1)+\frac{x^{m+1}(1+(q-1)x^{b-1})}{1-x}F_{1}.\]
Solving this system for \(F\) gives
\[F=\frac{1+(1-q)x^{m+b}-\sqrt{(1+(1-q)x^{m+b})(1-4x+(1-q)x^{m+b})}}{2x},\]
which establishes all cases of the formula when \(b\geq a\geq 1\).
By a comparable argument, one can establish the stated formula for \(F\) when \(a>b\geq 1\). Alternatively, note that the formula is symmetric in \(a\) and \(b\). Thus, to complete the proof, it suffices to define a bijection on \(NC_{n}\) showing that the \(\mu_{\tau}\) statistic when \(\tau=1^{a}(\rho+1)1^{b}\) has the same distribution as \(\mu_{\tau^{\prime}}\) for \(\tau^{\prime}=1^{b}(\rho+1)1^{a}\) where \(a>b\geq 1\). By a _maximal \(\tau\)-string_ within \(\pi=\pi_{1}\cdots\pi_{n}\in NC_{n}\), we mean a sequence \(\mathbf{s}\) of consecutive letters of \(\pi\) of the form \(\mathbf{s}=x^{i_{1}}\alpha_{1}x^{i_{2}}\alpha_{2}\cdots x^{i_{r}}\alpha_{r}x^{ i_{r+1}}\), where \(r,i_{1},\ldots,i_{r+1}\geq 1\), each \(\alpha_{i}\) is isomorphic to \(\rho\) and \(x<\min\{\alpha_{1}\cup\alpha_{2}\cup\cdots\cup\alpha_{r}\}\), that is contained in no other such string of strictly greater length. Identify all maximal \(\tau\)-strings \(\mathbf{s}\) within \(\pi\); note that the various \(\mathbf{s}\) are mutually disjoint, by maximality. Within each string, replace \(x^{i_{1}},x^{i_{2}},\ldots,x^{i_{r+1}}\) with \(x^{i_{r+1}},x^{i_{r}},\ldots,x^{i_{1}}\) (i.e., reverse the order of the \(x\)-runs), leaving the \(\alpha_{i}\) unchanged. Let \(\pi^{\prime}\in NC_{n}\) denote the partition that results from performing this operation on all maximal \(\tau\)-strings \(\mathbf{s}\); note that \(\pi\mapsto\pi^{\prime}\) is an involution and hence bijective. Since any occurrence of \(\tau\) must lie within some \(\mathbf{s}\), the mapping \(\pi\mapsto\pi^{\prime}\) implies the desired equivalence of distributions and completes the proof.
Theorem 6 yields the following formula for the total number of occurrences of \(1^{a}(\rho+1)1^{b}\).
**Corollary 7**.: _If \(n\geq m+a+b-1\), then the total number of occurrences of \(\tau=1^{a}(\rho+1)1^{b}\) as described above within all the members of \(NC_{n}\) is given by \(\binom{2r}{r+1}\), where \(r=n-m-a-b+1\)._
_Remarks:_ When \(s=1\) in Theorem 6, the formula for \(F=F_{\tau}\) may be simplified further to give
\[F=\frac{1+(1-q)x^{a+m}-\sqrt{(1+(1-q)x^{a+m})(1-4x+(1-q)x^{a+m})}}{2x},\]
where \(\tau=1^{a}(\rho+1)1\) or \(1(\rho+1)1^{a}\) and \(a\geq 1\). Note that there is really no loss of generality in assuming \(\rho\) is a sequential representation of some (non-crossing) partition in the hypotheses for Theorem 6 above. This is because if the first occurrence of some letter \(c\) in \(\rho\) precedes the first occurrence of \(d\) with \(c>d\), then containment of \(\tau=1^{a}(\rho+1)1^{b}\) by a partition \(\pi\) would imply an occurrence of 1-2-1-2 of the form \(y\)-\(z\)-\(y\)-\(z\), where \(z\) corresponds to the \(d+1\) in \(\rho+1\) and \(y\) to the 1 of \(\tau\). In addition to implying the symmetry in \(a\) and \(b\) of the pattern \(\tau\), the formula in Theorem 6 shows that \(\tau=1^{a}(\rho+1)1^{b}\) is equivalent to \(\tau^{\prime}=1^{a}(\rho^{\prime}+1)1^{b^{\prime}}\) of the same length, where \(\rho^{\prime}\) denotes a nonempty non-crossing partition and \(a\leq\min\{b,b^{\prime}\}\). For example, when \(|\tau|=4\), we have \(1121\equiv 1211\equiv 1221\equiv 1231\) as subwords on \(NC_{n}\). A bijective proof of \(1^{a}(\rho+1)1^{b}\equiv 1^{a}(\rho^{\prime}+1)1^{b^{\prime}}\) can be obtained by modifying somewhat the mapping \(f\) described above, the details of which we leave to the interested reader.
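The length-four equivalences \(1121\equiv 1211\equiv 1221\equiv 1231\) noted above can likewise be checked empirically for small \(n\) (illustrative only; the brute-force helpers from the earlier sketch are assumed available as `nc_helpers.py`).

```python
from collections import Counter
from nc_helpers import noncrossing_partitions, occurrences  # brute-force helpers from the earlier sketch

def distribution(n, tau):
    return Counter(occurrences(pi, tau) for pi in noncrossing_partitions(n))

patterns = [(1, 1, 2, 1), (1, 2, 1, 1), (1, 2, 2, 1), (1, 2, 3, 1)]
for n in range(9):
    dists = [distribution(n, tau) for tau in patterns]
    assert all(d == dists[0] for d in dists)
```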
## 3. The subwords \(12\cdots(m-1)m^{a}\) and \(1^{a}23\cdots m\)
Let \(\tau=12\cdots(m-1)m^{a}\), where \(a,m\geq 2\). To aid in enumerating the members of \(NC_{n}\) with respect to occurrences of \(\tau\), we consider the joint distribution with a further parameter on \(NC_{n}\) that was introduced in [9]. Given \(\pi=\pi_{1}\cdots\pi_{n}\in NC_{n}\), excluding the increasing partition \(12\cdots n\), let \(\operatorname{rep}(\pi)\) denote the smallest repeated letter of \(\pi\). Below, we will find,
more generally, the gf for the joint distribution \(\sum_{\pi\in NC_{n}}v^{\operatorname{rep}(\pi)}q^{\mu_{\tau}(\pi)}\), where \(\operatorname{rep}(12\cdots n)\) is defined to be zero.
Let \(NC_{n,i}\) for \(1\leq i\leq n-1\) denote the subset of \(NC_{n}\) whose members have smallest repeated letter \(i\). Define \(a(n,i)=\sum_{\pi\in NC_{n,i}}q^{\mu_{\tau}(\pi)}\) for \(n\geq 2\) and \(1\leq i\leq n-1\) and \(a(n)=\sum_{\pi\in NC_{n}}q^{\mu_{\tau}(\pi)}\) for \(n\geq 1\), with \(a(0)=1\).
To aid in finding recurrences for \(a(n)\) and \(a(n,i)\), we consider a generalization of \(\mu_{\tau}\) as follows. Given \(\ell\geq 0\) and a partition \(\pi\), let \(\mu_{\tau}^{(\ell)}(\pi)\) denote the number of occurrences of \(\tau\) in the sequence \(12\cdots\ell(\pi+\ell)\). Define
\[a^{(\ell)}(n,i)=\sum_{\pi\in NC_{n,i}}q^{\mu_{\tau}^{(\ell)}(\pi)},\qquad n \geq 2\text{ and }1\leq i\leq n-1,\]
and
\[a^{(\ell)}(n)=\sum_{\pi\in NC_{n}}q^{\mu_{\tau}^{(\ell)}(\pi)},\qquad n\geq 1,\]
with \(a^{(\ell)}(0)=1\). Note that \(a^{(0)}(n,i)=a(n,i)\) and \(a^{(0)}(n)=a(n)\) for all \(n\) and \(i\).
We have the following system of recurrences satisfied by the \(a^{(\ell)}(n,i)\) and \(a^{(\ell)}(n)\).
**Lemma 8**.: _If \(n\geq a\) and \(1\leq i\leq n-a+1\), then_
\[a^{(\ell)}(n,i)=\sum_{j=i+1}^{n}a^{(\ell+i)}(j-i-1)a^{(0)}(n-j+1)+\begin{cases} 0,\text{ if }i+\ell\leq m-1,\\ (q-1)a^{(0)}(n-i-a+2),\text{ if }i+\ell\geq m,\end{cases} \tag{8}\]
_for all \(\ell\geq 0\). Furthermore, we have_
\[a^{(\ell)}(n)=C_{a-1}+\sum_{i=1}^{n-a+1}a^{(\ell)}(n,i),\qquad n\geq a, \tag{9}\]
_with \(a^{(\ell)}(n)=C_{n}\) for \(0\leq n\leq a-1\)._
Proof.: Since \(a^{(\ell)}(0)=1\) for all \(\ell\geq 0\), formula (8) is equivalent to
\[a^{(\ell)}(n,i)=\sum_{j=i+2}^{n}a^{(\ell+i)}(j-i-1)a^{(0)}(n-j+1)+\begin{cases}a^{(0)}(n-i),\text{ if }i+\ell\leq m-1,\\ a^{(0)}(n-i)+(q-1)a^{(0)}(n-i-a+2),\text{ if }i+\ell\geq m,\end{cases} \tag{10}\]
which we will now show. To do so, first consider the position \(j\) of the second occurrence of \(i\) within \(\pi\in NC_{n,i}\). If \(j\geq i+2\), such \(\pi\) are expressible as \(\pi=12\cdots i\alpha i\beta\), where \(\alpha\) is nonempty and contains no letters \(i\) and \(\beta\) is possibly empty. Then we get \(a^{(\ell+i)}(j-i-1)a^{(0)}(n-j+1)\) possibilities and summing over all \(j\geq i+2\) yields the first part of (10) in either case. So assume \(j=i+1\) and first suppose \(i+\ell\leq m-1\). Then there is no occurrence of \(\tau\) in \(\pi\) involving any of its first \(i-1\) letters, regardless of the length of the leftmost run of \(i\)'s, which implies a contribution of \(a^{(0)}(n-i)\) and hence the first case of (10). If \(i+\ell\geq m\), then we consider cases based on the length of the leftmost run of \(i\)'s as follows. Suppose first that \(\pi\) is expressible as \(\pi=12\cdots(i-1)i^{r}\pi^{\prime}\), where \(\pi^{\prime}\) does not start with \(i\) and \(2\leq r\leq a-1\), assuming for now \(a\geq 3\). Then, by subtraction, there are \(a^{(0)}(n-i-r+2)-a^{(0)}(n-i-r+1)\) possibilities and summing over all \(r\) gives
\[\sum_{r=2}^{a-1}(a^{(0)}(n-i-r+2)-a^{(0)}(n-i-r+1))=a^{(0)}(n-i)-a^{(0)}(n-i-a+ 2).\]
On the other hand, if \(\pi=12\cdots(i-1)i^{r}\pi^{\prime}\), where \(r\geq a\), then \(i+\ell\geq m\) implies that there is an occurrence of \(\tau\) involving the first \(i+a-1\) letters of \(\pi\) (when taken together with the understood suffix \(12\cdots\ell\) consisting of strictly smaller letters). Then the sequence \(i^{r-a+1}\pi^{\prime}\) corresponds to a partition enumerated by \(a^{(0)}(n-i-a+2)\), as it is directly preceded by at least one \(i\), and hence the contribution towards the overall weight in this case is given by \(qa^{(0)}(n-i-a+2)\). Combining this case with the previous yields the second part of (10) when \(i+\ell\geq m\) and completes the proof of (10).
For (9), first note that the initial conditions when \(0\leq n\leq a-1\) are apparent since no occurrence of \(\tau\) is possible for such \(n\) for all \(\ell\). Suppose \(k\) is the smallest repeated letter in \(\pi\in NC_{n}\). If \(1\leq k\leq n-a+1\), then \(\pi\) is accounted for by the sum in (9), by the definitions. Otherwise, \(\pi\) can be represented as \(\pi=12\cdots(n-a+1)\pi^{\prime}\), where \(\pi^{\prime}\) contains no letters in \([n-a+1]\), for which there are \(C_{a-1}\) possibilities since no such \(\pi\) can contain an occurrence of \(\tau\) (as the \(m^{a}\) part of \(\tau\) cannot be achieved by any letter in \(\pi^{\prime}\)). Combining this with the prior case yields (9).
Define
\[A(x,u)=\sum_{n\geq 0}\sum_{\ell\geq 0}a^{(\ell)}(n)u^{\ell}x^{n}\]
and
\[A(x,u,v)=\sum_{n\geq a}\sum_{\ell\geq 0}\sum_{i=1}^{n-a+1}a^{(\ell)}(n,i)u^{ \ell}v^{i-1}x^{n}.\]
Rewriting the recurrences in Lemma 8 in terms of gf's yields the following system of functional equations.
**Lemma 9**.: _We have_
\[A(x,u)=A(x,u,1)+\frac{x^{a}C_{a-1}}{(1-u)(1-x)}+L(x,u), \tag{11}\]
\[A(x,u,v) =\frac{x(A(x,0)-1)(A(x,vx)-A(x,u))}{vx-u}-M(x,u,v)\] \[\quad+\frac{(q-1)x^{a-1}(A(x,0)-1)(u^{m}(1-vx)-(vx)^{m}(1-u))}{( 1-u)(1-vx)(u-vx)}, \tag{12}\]
_where \(L(x,u)=\frac{1}{1-u}\sum_{j=0}^{a-1}C_{j}x^{j}\) and_
\[M(x,u,v)=\begin{cases}0,\text{ if }a=2,\\ \frac{x}{(1-u)(1-vx)}\sum_{j=0}^{a-3}\sum_{i=1}^{a-2-j}C_{i}C_{j}x^{i+j},\text { if }a\geq 3.\end{cases}\]
Proof.: Multiplying both sides of (9) by \(u^{\ell}x^{n}\), and summing over \(n\geq a\) and \(\ell\geq 0\), gives
\[A(x,u) =\sum_{n\geq a}\sum_{\ell\geq 0}\sum_{i=1}^{n-a+1}a^{(\ell)}(n,i)u^{ \ell}x^{n}+\sum_{n\geq a}\sum_{\ell\geq 0}C_{a-1}u^{\ell}x^{n}+\sum_{n=0}^{a-1} \sum_{\ell\geq 0}a^{(\ell)}(n)u^{\ell}x^{n}\] \[=A(x,u,1)+\frac{x^{a}C_{a-1}}{(1-u)(1-x)}+\frac{1}{1-u}\sum_{j=0 }^{a-1}C_{j}x^{j},\]
by the initial values for \(a^{(\ell)}(n)\).
To rewrite (8) in terms of gf's, we first must find
\[\sum_{n\geq a}\sum_{\ell\geq 0}\sum_{i=1}^{n-a+1}\sum_{j=i+1}^{n}a^{(\ell+i)}(j-i -1)a^{(0)}(n-j+1)u^{\ell}v^{i-1}x^{n}.\]
First observe the following manipulation of sums:
\[\sum_{n\geq a}\sum_{\ell\geq 0}\sum_{i=1}^{n-a+1}\sum_{j=i+1}^{n}( \ldots)=\sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j\geq i+1}\sum_{n\geq\max\{j,i+a-1\}} (\ldots)\] \[= \sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j=i+1}^{i+a-1}\sum_{n\geq i+a- 1}(\ldots)+\sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j\geq i+a}\sum_{n\geq j}( \ldots),\]
where \((\ldots)\) denotes the original summand above. Replacing \(j\) with \(j+i+1\) in both sums in the last expression implies
\[\sum_{n\geq a}\sum_{\ell\geq 0}\sum_{i=1}^{n-a+1}\sum_{j=i+1}^{n}a^{ (\ell+i)}(j-i-1)a^{(0)}(n-j+1)u^{\ell}v^{i-1}x^{n}\] \[= \sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j=0}^{a-2}\sum_{n\geq i+a- 1}a^{(\ell+i)}(j)a^{(0)}(n-j-i)u^{\ell}v^{i-1}x^{n}\] \[+\sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j\geq a-1}\sum_{n\geq j+i+1 }a^{(\ell+i)}(j)a^{(0)}(n-j-i)u^{\ell}v^{i-1}x^{n}\] \[= \sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j=0}^{a-2}\sum_{n\geq a-1-j} a^{(\ell+i)}(j)a^{(0)}(n)u^{\ell}v^{i-1}x^{n+i+j}\] \[+\sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j\geq a-1}\sum_{n\geq 1}a^{( \ell+i)}(j)a^{(0)}(n)u^{\ell}v^{i-1}x^{n+i+j}\] \[= \sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j=0}^{a-2}a^{(\ell+i)}(j)u^{ \ell}v^{i-1}x^{i+j}\left(\sum_{n\geq 1}a^{(0)}(n)x^{n}-\sum_{n=1}^{a-2-j}a^{(0)}(n )x^{n}\right)\] \[+\sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j\geq a-1}a^{(\ell+i)}(j)u^{ \ell}v^{i-1}x^{i+j}\sum_{n\geq 1}a^{(0)}(n)x^{n}\] \[= \sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j\geq 0}a^{(\ell+i)}(j)u^{ \ell}v^{i-1}x^{i+j}\sum_{n\geq 1}a^{(0)}(n)x^{n}\] \[-\sum_{\ell\geq 0}\sum_{i\geq 1}\sum_{j=0}^{a-2}a^{(\ell+i)}(j)u^{ \ell}v^{i-1}x^{i+j}\sum_{n=1}^{a-2-j}a^{(0)}(n)x^{n}\] \[= (A(x,0)-1)\sum_{i\geq 1}\sum_{\ell\geq i}\sum_{j\geq 0}a^{( \ell)}(j)u^{\ell-i}v^{i-1}x^{i+j}-\frac{x}{(1-u)(1-vx)}\sum_{j=0}^{a-3}C_{j}x^ {j}\sum_{n=1}^{a-2-j}C_{n}x^{n}\] \[= \frac{x(A(x,0)-1)}{u-vx}\sum_{\ell\geq 1}\sum_{j\geq 0}a^{( \ell)}(j)x^{j}(u^{\ell}-(vx)^{\ell})-M(x,u,v)\]
\[=\frac{x(A(x,0)-1)(A(x,vx)-A(x,u))}{vx-u}-M(x,u,v).\]
For converting the second part of formula (8), we consider cases on \(i\). Omitting the factor \(q-1\), this yields
\[\sum_{i=1}^{m-1}\sum_{\ell\geq m-i}\sum_{n\geq i+a-1}a^{(0)}(n-i-a+ 2)u^{\ell}v^{i-1}x^{n}+\sum_{i\geq m}\sum_{\ell\geq 0}\sum_{n\geq i+a-1}a^{(0)}(n-i-a+ 2)u^{\ell}v^{i-1}x^{n}\] \[=(A(x,0)-1)\sum_{i=1}^{m-1}\sum_{\ell\geq m-i}u^{\ell}v^{i-1}x^{i+ a-2}+(A(x,0)-1)\sum_{i\geq m}\sum_{\ell\geq 0}u^{\ell}v^{i-1}x^{i+a-2}\] \[=(A(x,0)-1)\left(\frac{ux^{a-1}(u^{m-1}-(vx)^{m-1})}{(1-u)(u-vx)} +\frac{v^{m-1}x^{a+m-2}}{(1-u)(1-vx)}\right)\] \[=\frac{x^{a-1}(A(x,0)-1)(u^{m}(1-vx)-(vx)^{m}(1-u))}{(1-u)(1-vx)( u-vx)}.\]
Combining the two contributions to the gf above yields (12).
**Theorem 10**.: _Let \(y=A(x,0)\) denote the generating function counting members of \(NC_{n}\) for \(n\geq 0\) according to the number of occurrences of \(\tau=12\cdots(m-1)m^{a}\), where \(a,m\geq 2\). Then \(y\) satisfies the polynomial equation_
\[xy^{2}-y+1+(q-1)x^{a+m-2}y^{m-1}(y-1)=0. \tag{13}\]
_More generally, the generating function counting members of \(NC_{n}\) jointly according to the smallest repeated letter and number of occurrences of \(\tau\) (marked by \(v\) and \(q\), respectively) is given by \(vA(x,0,v)+\frac{1}{1-x}\) if \(a=2\) and by_
\[vA(x,0,v)+\frac{1}{1-x}+\frac{v^{2}x^{a}C_{a-1}-v^{2}x^{2}}{1-vx}+v\sum_{i=2}^ {a-1}C_{i}x^{i}+(v-1)\sum_{n\geq 2}\sum_{j=r}^{n-1}C_{n-j}x^{j},\]
_if \(a\geq 3\), where \(r=\max\{1,n-a+2\}\) and_
\[A(x,0,v)=\frac{(y-1)(A(x,vx)-y)}{v}-M(x,0,v)+\frac{(q-1)v^{m-1}x^{a+m-2}(y-1)} {1-vx}, \tag{14}\]
_with \(A(x,u)\) given by (17)._
Proof.: We first find an equation satisfied by \(y\). Note that (12) at \(u=0\) and \(v=1\), taken together with (11), gives
\[y^{2}=(y-1)A(x,x)-M(x,0,1)+\frac{(q-1)x^{a+m-2}(y-1)}{1-x}+\frac{x^{a}C_{a-1}}{ 1-x}+L(x,0). \tag{15}\]
We apply the kernel method to (12) to obtain an expression for \(A(x,x)\). Taking \(u=xA(x,0)=xy\) and \(v=1\) in (12) implies
\[A(x,x)=\frac{(q-1)x^{a-2}((xy)^{m}(1-x)-x^{m}(1-xy))}{(1-x)(1-xy)}-M(x,xy,1)+ \frac{x^{a}C_{a-1}}{(1-x)(1-xy)}+L(x,xy). \tag{16}\]
Now observe
\[\sum_{j=0}^{a-3}\sum_{i=1}^{a-2-j}C_{i}C_{j}x^{i+j}=\sum_{i=1}^{a-2}x^{i}\sum_ {j=0}^{i-1}C_{i-j}C_{j}=\sum_{i=0}^{a-2}x^{i}(C_{i+1}-C_{i}),\]
by the recurrence for Catalan numbers. Thus, the right-hand side of (15) may be written as
\[\frac{(q-1)x^{a-2}((xy)^{m}(1-x)-x^{m}(1-xy))(y-1)}{(1-x)(1-xy)}-\frac {x(y-1)}{(1-x)(1-xy)}\sum_{i=0}^{a-2}x^{i}(C_{i+1}-C_{i})\] \[\quad+(y-1)\left(\frac{x^{a}C_{a-1}}{(1-x)(1-xy)}+\frac{1}{1-xy} \sum_{j=0}^{a-1}C_{j}x^{j}\right)-\frac{x}{1-x}\sum_{i=0}^{a-2}x^{i}(C_{i+1}-C _{i})\] \[\quad+\frac{(q-1)x^{a+m-2}(y-1)}{1-x}+\frac{x^{a}C_{a-1}}{1-x}+ \sum_{j=0}^{a-1}C_{j}x^{j}\] \[=\frac{(q-1)x^{a+m-2}y^{m}(y-1)}{1-xy}-\frac{xy}{1-xy}\sum_{i=0}^ {a-2}x^{i}(C_{i+1}-C_{i})+\frac{x^{a}yC_{a-1}}{1-xy}+\frac{y(1-x)}{1-xy}\sum_{ j=0}^{a-1}C_{j}x^{j}\] \[=\frac{y}{1-xy}\bigg{(}(q-1)x^{a+m-2}y^{m-1}(y-1)-\sum_{i=1}^{a-1 }C_{i}x^{i}+x\sum_{i=0}^{a-2}C_{i}x^{i}+x^{a}C_{a-1}\] \[\quad+(1-x)\sum_{j=0}^{a-1}C_{j}x^{j}\bigg{)}\] \[=\frac{(q-1)x^{a+m-2}y^{m}(y-1)+y}{1-xy}.\]
Equating this last expression with \(y^{2}\) then leads to (13). Solving for \(A(x,u)\) in (12) at \(v=1\), making use of (11), gives
\[A(x,u) =\frac{x-u}{xy-u}\bigg{(}\frac{x^{a}C_{a-1}}{(1-u)(1-x)}+\frac{(1 -q)x^{a-1}(u^{m}(1-x)-x^{m}(1-u))(y-1)}{(1-u)(1-x)(x-u)} \tag{17}\] \[\quad+\frac{x(y-1)}{x-u}A(x,x)-M(x,u,1)+L(x,u)\bigg{)},\]
where \(A(x,x)\) is given by (16). Letting \(u=0\) in (12) now leads to (14). Finally, taking into account the \(v\)-weights of members of \(NC_{n}\) having smallest repeated letter \(i\) where \(r\leq i\leq n-1\), along with the increasing partition (which has weight \(1\) for all \(n\geq 0\)), implies the gf enumerating members of \(NC_{n}\) for \(n\geq 0\) jointly according to the rep value and number of occurrences of \(\tau\) is given by \(vA(x,0,v)+\frac{1}{1-x}\) if \(a=2\) and by
\[vA(x,0,v)+\frac{1}{1-x}+\sum_{n=2}^{a-1}x^{n}\sum_{j=1}^{n}v^{j}(C_{n-j+1}-C_{n -j})+\sum_{n\geq a}x^{n}\sum_{k=n-a+2}^{n}v^{k}(C_{n-k+1}-C_{n-k}),\]
if \(a\geq 3\). Rewriting the last expression somewhat yields the stated formula for the joint gf and completes the proof.
**Corollary 11**.: _If \(n\geq a+m-1\), then the total number of occurrences of \(\tau=1\cdots(m-1)m^{a}\) within all the members of \(NC_{n}\) is given by \(\frac{r}{2r+m}\binom{2r+m}{r}\), where \(r=n-a-m+2\)._
Proof.: Let \(C=C(x)\), \(F=A(x,0)\) and \(D=\frac{\partial F}{\partial q}\mid_{q=1}\). Differentiating both sides of (13) with respect to \(q\), and noting \(F\mid_{q=1}=C\), yields
\[2xCD-D+x^{a+m-2}C^{m-1}(C-1)=0,\]
i.e.,
\[D=\frac{x^{a+m-2}C^{m-1}(C-1)}{1-2xC}=\frac{x^{a+m-2}C^{m-1}(C-1)}{\sqrt{1-4x}}.\]
Extracting the coefficient of \(x^{n}\) for \(n\geq a+m-1\), and making use of [14, Eqn. 2.5.15], then gives
\[[x^{n}]D=\binom{2r+m}{r}-\binom{2r+m-1}{r}=\left(1-\frac{r+m}{2r+m}\right) \binom{2r+m}{r}=\frac{r}{2r+m}\binom{2r+m}{r}.\]
_Remark:_ The \(m=2\) and \(m=3\) cases of the formula \(\frac{r}{2r+m}\binom{2r+m}{r}\) from Corollary 11 coincide respectively with sequences A002054 and A002694 in [13].
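As with the earlier total-count formulas, Corollary 11 is easy to confirm by brute force for small parameters (illustrative only; the brute-force helpers from the earlier sketch are assumed available as `nc_helpers.py`).

```python
from math import comb
from nc_helpers import noncrossing_partitions, occurrences  # brute-force helpers from the earlier sketch

def total_occurrences(n, tau):
    return sum(occurrences(pi, tau) for pi in noncrossing_partitions(n))

for a, m in [(2, 2), (3, 2), (2, 3)]:
    tau = tuple(range(1, m)) + (m,) * a          # the pattern 12...(m-1)m^a
    for n in range(a + m - 1, 9):
        r = n - a - m + 2
        # r/(2r+m) * C(2r+m, r), written as a difference of binomials as in the proof
        assert total_occurrences(n, tau) == comb(2 * r + m, r) - comb(2 * r + m - 1, r)
```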
We conclude with the following equivalence between \(\tau\) and \(1^{a}23\cdots m\).
**Theorem 12**.: _We have \(1^{a}23\cdots m\equiv 12\cdots(m-1)m^{a}\) as subwords on \(NC_{n}\) for all \(a,m\geq 2\)._
Proof.: We provide a bijective proof of this result. Suppose that the descents from left to right within \(\pi=\pi_{1}\cdots\pi_{n}\in NC_{n}\) correspond to the letters \(a_{i}>b_{i}\) for \(1\leq i\leq r\) and some \(r\geq 0\). Let \(\rho_{1}\) denote the section of \(\pi\) to the left of and including \(a_{1}\) and \(\rho_{r+1}\) the section to the right of and including \(b_{r}\) (if \(r=0\), then \(\rho_{1}\) comprises all of \(\pi\)). If \(r\geq 2\), then let \(\rho_{i}\) for \(2\leq i\leq r\) denote the subsequence of \(\pi\) starting with \(b_{i-1}\) and ending with \(a_{i}\). Note that \(\rho_{i}\) for each \(i\) is weakly increasing, as it consists of the letters between consecutive descents of \(\pi\) (or occurring prior to the first or after the last descent of \(\pi\)).
Suppose that the descent bottom letters \(b_{1},\ldots,b_{r}\) within \(\pi\) are given, with \(b_{0}=1\). Let section \(\rho_{i}\) of \(\pi\) for \(1\leq i\leq r+1\) be represented sequentially as \(\rho_{i}=s_{0}^{(i)}s_{1}^{(i)}\cdots s_{t_{i}}^{(i)}\), where \(s_{0}^{(i)}=b_{i-1}\). Define the binary sequence \(\mathbf{d^{(i)}}=d_{1}^{(i)}d_{2}^{(i)}\cdots d_{t_{i}}^{(i)}\), where \(d_{k}^{(i)}=1\) if \(s_{k}^{(i)}>s_{k-1}^{(i)}\) and \(d_{k}^{(i)}=0\) if \(s_{k}^{(i)}=s_{k-1}^{(i)}\) for \(1\leq k\leq t_{i}\). Note that \(\pi\) non-crossing implies it is uniquely determined by its descent bottoms \(b_{1},\ldots,b_{r}\), taken together with its complete set of associated binary sequences \(\mathbf{d^{(1)}},\ldots,\mathbf{d^{(r+1)}}\).
Let \(\pi^{\prime}\) be the uniquely determined member of \(NC_{n}\) whose descents bottoms are the same as those of \(\pi\) (i.e., are given by \(b_{1},\ldots,b_{r}\)) and whose associated binary sequences are given by \(\operatorname{rev}(\mathbf{d^{(1)}}),\ldots,\operatorname{rev}(\mathbf{d^{(r+ 1)}})\), where \(\operatorname{rev}(s)\) denotes the reversal of a sequence \(s\). Note that the section \(\rho_{i}^{\prime}\) of \(\pi^{\prime}\) corresponding to \(\rho_{i}\) in \(\pi\) for \(1\leq i\leq r+1\) will have the same set of distinct letters for all \(i\), and thus \(\pi^{\prime}\) will have the same ascent tops as \(\pi\). Also, an occurrence of \(1^{a}23\cdots m\) or \(12\cdots(m-1)m^{a}\) within some section \(\rho_{i}\) of \(\pi\) will result in an occurrence of the other pattern within \(\rho_{i}^{\prime}\) of \(\pi^{\prime}\), and vice versa. Further, an occurrence of either pattern must lie completely within a section \(\rho_{i}\) of \(\pi\) or \(\rho_{i}^{\prime}\) of \(\pi^{\prime}\), as neither contains a descent. Since the mapping \(\pi\mapsto\pi^{\prime}\) is an involution on \(NC_{n}\), and hence bijective, the desired equivalence of patterns follows.
|
2308.04665 | Sudowoodo: a Chinese Lyric Imitation System with Source Lyrics | Lyrics generation is a well-known application in natural language generation
research, with several previous studies focusing on generating accurate lyrics
using precise control such as keywords, rhymes, etc. However, lyrics imitation,
which involves writing new lyrics by imitating the style and content of the
source lyrics, remains a challenging task due to the lack of a parallel corpus.
In this paper, we introduce \textbf{\textit{Sudowoodo}}, a Chinese lyrics
imitation system that can generate new lyrics based on the text of source
lyrics. To address the issue of lacking a parallel training corpus for lyrics
imitation, we propose a novel framework to construct a parallel corpus based on
a keyword-based lyrics model from source lyrics. Then the pairs \textit{(new
lyrics, source lyrics)} are used to train the lyrics imitation model. During
the inference process, we utilize a post-processing module to filter and rank
the generated lyrics, selecting the highest-quality ones. We incorporated audio
information and aligned the lyrics with the audio to form the songs as a bonus.
The human evaluation results show that our framework can perform better lyric
imitation. Meanwhile, the \textit{Sudowoodo} system and demo video of the
system is available at
\href{https://Sudowoodo.apps-hp.danlu.netease.com/}{Sudowoodo} and
\href{https://youtu.be/u5BBT_j1L5M}{https://youtu.be/u5BBT\_j1L5M}. | Yongzhu Chang, Rongsheng Zhang, Lin Jiang, Qihang Chen, Le Zhang, Jiashu Pu | 2023-08-09T02:12:04Z | http://arxiv.org/abs/2308.04665v1 | # Sudowoodo: a Chinese Lyric Imitation System with Source Lyrics
###### Abstract
Lyrics generation is a well-known application in natural language generation research, with several previous studies focusing on generating accurate lyrics using precise control such as keywords, rhymes, etc. However, lyrics imitation, which involves writing new lyrics by imitating the style and content of the source lyrics, remains a challenging task due to the lack of a parallel corpus. In this paper, we introduce _Sudowoodo_, a Chinese lyrics imitation system that can generate new lyrics based on the text of source lyrics. To address the issue of lacking a parallel training corpus for lyrics imitation, we propose a novel framework to construct a parallel corpus based on a keyword-based lyrics model from source lyrics. Then the pairs _(new lyrics, source lyrics)_ are used to train the lyrics imitation model. During the inference process, we utilize a post-processing module to filter and rank the generated lyrics, selecting the highest-quality ones. We incorporated audio information and aligned the lyrics with the audio to form the songs as a bonus. The human evaluation results show that our framework can perform better lyric imitation. Meanwhile, the _Sudowoodo_ system and demo video of the system is available at Sudowoodo and [https://youtu.be/u5BBT_j1LSM](https://youtu.be/u5BBT_j1LSM).
## 1 Introduction
AI creative assistants are artificial intelligence systems that can learn from large amounts of text data to understand human language and culture and use this knowledge to create content such as story generation Alabdulkarim et al. (2021); Zhu et al. (2020), poetry writing Guo et al. (2019); Liu et al. (2019); Yang et al. (2019), grammar and spelling checking Patil et al. (2021), etc. In addition, AI creative assistants can also assist in songwriting Potash et al. (2015); Zhang et al. (2020); Shen et al. (2019) by learning from numerous songs, understanding human emotional expression, and creating music in a similar writing style to humans. Previous research Castro and Attarian (2018); Watanabe et al. (2018); Manjavacas et al. (2019); Fan et al. (2019); Li et al. (2020); Zhang et al. (2020, 2022) has focused on generating lyrics based on specified keywords (e.g., _Snow_), lyrics styles, themes, or user input passages, which generate new lyrics with limited control over the content. However, in actual music production, users sometimes adapt excellent songs by adding their own creativity while retaining the original lyrical structure, resulting in new lyrics. This requires stronger control over the source lyrics, such as text content, emotion, and fine-grained writing styles.
To address this issue, this paper demonstrates _Sudowoodo_1 (a Pokemon with the ability to imitate), a Chinese lyrics imitation generation system based on source lyrics. _Sudowoodo_ is based on the Encoder-Decoder framework, where the encoder encodes the text and attributes of the source lyrics, and the decoder generates the imitated lyrics. However, since we only have the source lyrics and not the target ones, a parallel corpus for training the imitation model is lacking. To solve this problem, we also propose a method for constructing aligned training samples, which generates the target lyrics from keywords extracted from the source lyrics using a keyword-based lyrics generation model.
Footnote 1: [https://en.wikipedia.org/wiki/Talk%3ASudowoodo](https://en.wikipedia.org/wiki/Talk%3ASudowoodo)
Specifically, we first collect the source lyrics corpus \(D_{k}\) from the Internet 2 and utilize the keyword extraction method described in Section 2.1 to extract keywords from source lyrics. And we train a keywords-based model, named Model\({}_{K2L}\), which can generate lyrics from given keywords. Then, we generate the target lyrics \(D_{k^{\prime}}\) using the Model\({}_{K2L}\). Finally, we train a lyrics imitation model with the aligned lyrics corpus (\(D_{k^{\prime}}\), \(D_{k}\)) based on the encoder-decoder framework. In addition, to improve the quality of generated lyrics
and better showcase the results, we also employ post-processing modules including lyrics quality scoring and relevance scoring. Meanwhile, to provide a more intuitive understanding of the generated lyrics through imitation, we incorporate audio information (the vocals and melody of the source song) and align the lyrics with the audio to produce a complete song.
The main contributions of the _Sudowoodo_ system are summarized as follows:
* We present a lyric imitation tool that generates new lyrics end-to-end based on source lyrics. Furthermore, we explore the addition of musical information to the generated lyrics in order to create songs. Sample songs can be heard at the songs of Sudowoodo.
* We propose a novel framework for constructing a parallel lyrics corpus for imitation based on the keyword-based model. The results of the human evaluation show the efficacy of the imitation model trained on the basis of this parallel lyrics corpus.
* The _Sudowoodo_ system and demo video can be available at Sudowoodo and [https://youtu.be/u5BBT_j1LSM](https://youtu.be/u5BBT_j1LSM).
## 2 Framework
The _Sudowoodo_ system consists of two models and a post-processing module, as illustrated in Figure 1: **Model\({}_{K2L}\)**, **Model\({}_{L2L}\)**, and **Post-Processing**. These modules will be described in greater detail below.
### Data Preparation
In this study, we obtain a dataset of 800\(k\) Chinese lyrics of various styles from the Internet, including pop, hip-hop, rap, etc. After filtering out lyrics less than 100 characters in length and removing duplicates, we are left with \(600k\) unique lyrics. We denote the processed lyrics corpus as \(D_{k}\).
Figure 1: The framework of the _Sudowoodo_ system proposed in this paper. **Model\({}_{K2L}\)** denotes a model for generating lyrics based on keywords, while **Model\({}_{L2L}\)** represents the generation model from source lyrics to imitation lyrics. **Encoder** refers to the encoding portion of the Encoder-decoder architecture, while **Decoder** represents the decoding portion. **Post-processing** is mainly applied to the imitation lyrics generated by Model\({}_{L2L}\).

As depicted in the attribute extraction section of Figure 1, when conducting attribute extraction for the source lyrics, we extract not only the keywords of the source lyrics but also other attributes such as style and emotion. To extract keywords from the source lyrics, we first segment the lyrics into multiple bars. We then apply KBERT Liu et al. (2019) based on distiluse-base-multilingual-cased-v1 3 to extract a subset of the keywords from each bar. We extract 5 keywords for each bar. In addition, we rank the keywords according to their scores and select the top 10% highest-scoring keywords as the keywords for the whole song. In this process, we utilize Jieba 4 as the word segmentation tool. For the other attributes, we train a classifier model to acquire attributes such as emotion and style from the source lyrics. Finally, we construct a parallel corpus dataset by extracting keywords, style, and emotion from the lyrics and aligning these attributes with the source lyrics to form paired data (\(D_{A}\), \(D_{K}\)), where \(D_{A}\) represents the corpus composed of the extracted attributes of the corresponding source lyrics. The size of this dataset is \(600k\).
Footnote 4: [https://github.com/fxsjy/jieba](https://github.com/fxsjy/jieba)
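A minimal sketch of an embedding-based, per-bar keyword extractor in the spirit of the procedure above is shown below. It approximates the ranking step in a KeyBERT-like fashion with the same sentence encoder; the candidate filter, scoring, and all names here are illustrative assumptions and may differ from the authors' actual implementation.

```python
import jieba
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("distiluse-base-multilingual-cased-v1")

def extract_bar_keywords(bar, top_n=5):
    """Rank candidate words of one lyric bar by cosine similarity to the whole bar."""
    # candidate words: deduplicated jieba tokens, dropping single characters (a heuristic)
    candidates = list(dict.fromkeys(w for w in jieba.lcut(bar) if len(w) > 1))
    if not candidates:
        return []
    vecs = encoder.encode([bar] + candidates)
    bar_vec, cand_vecs = vecs[0], vecs[1:]
    scores = cand_vecs @ bar_vec / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(bar_vec) + 1e-9)
    return sorted(zip(candidates, scores.tolist()), key=lambda t: -t[1])[:top_n]
```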
### Models
We first train a model, named Model\({}_{K2L}\), using the paired data (\(D_{A}\), \(D_{K}\)) to generate lyrics based on keywords and their associated attributes such as emotion and style. Then, we acquire three new lyrics through Model\({}_{K2L}\) for each source lyric, using random keywords extracted from the source lyric. The new lyrics are aligned with the source lyric and keywords to form paired data. All the lyrics generated by Model\({}_{K2L}\) are collected as \(D^{\prime}_{k}\). Consequently, we construct a parallel corpus dataset (\(D^{\prime}_{k}\), \(D_{k}\)) with a size of \(1800k\). Meanwhile, during the training of Model\({}_{L2L}\), we encode \(D^{\prime}_{k}\) and the writing styles of \(D_{k}\), while the decoding side targets \(D_{k}\).
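A schematic sketch of this corpus-construction step is given below. All function and field names are placeholders chosen for illustration (including the keyword-sampling fraction), and the snippet shows only the intended data flow, not the authors' implementation.

```python
import random

def build_imitation_corpus(source_lyrics, extract_attributes, model_k2l,
                           samples_per_song=3, keyword_fraction=0.7):
    """Pair each source lyric with lyrics regenerated from its own keywords."""
    pairs = []  # (input lyric for Model_L2L, i.e. D'_k, target lyric from D_k)
    for lyric in source_lyrics:
        attrs = extract_attributes(lyric)        # keywords, style, emotion
        for _ in range(samples_per_song):
            # a random subset of the extracted keywords for each sample
            k = max(1, int(keyword_fraction * len(attrs["keywords"])))
            kws = random.sample(attrs["keywords"], k)
            generated = model_k2l.generate(keywords=kws,
                                           style=attrs["style"],
                                           emotion=attrs["emotion"])
            pairs.append((generated, lyric))
    return pairs
```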
**Initialization:** To improve the model's performance and generate more fluent text, we initialize the model with a self-developed transformers-based pre-training model. Note that the structure of the pre-trained model is consistent with GPT-2 5, containing 210 million parameters with 16 layers, 1024 hidden dimensions, and 16 self-attention heads. The model is pre-trained on 30G of Chinese novels collected from the internet, using a vocabulary of 11400 words and a maximum sequence length of 512.
Footnote 5: [https://openai.com/blog/gpt-2-1-5b-release/](https://openai.com/blog/gpt-2-1-5b-release/)
**Training:** Due to the lack of a direct lyrics-to-lyrics alignment corpus, we cannot train a seq2seq encoder-decoder model directly. Therefore, we propose a novel training strategy, as shown in Figure 1. The framework comprises two models for training. Firstly, a keyword-to-lyrics model, named Model\({}_{K2L}\), is used to generate lyrics that are aligned with the source lyrics: keywords and attributes such as style and emotion are encoded into a latent semantic space and then decoded into the source lyrics. The Model\({}_{K2L}\) utilizes an encoder-decoder architecture with the keywords, style, and emotion serving as encoder inputs and the source lyrics as decoder outputs, with the training loss shown in Equation 1. Secondly, an end-to-end lyrics imitation model, called Model\({}_{L2L}\), is trained using the aligned corpus (\(D^{\prime}_{k}\), \(D_{k}\)) constructed with Model\({}_{K2L}\) and also utilizes the encoder-decoder architecture. The Model\({}_{L2L}\) encodes \(D^{\prime}_{k}\) and the attributes of the source lyrics into the encoder, with the source lyrics serving as the decoder output and the training loss shown in Equation 2.
\[L_{key2lyric}=-\sum_{D_{k}}\log P(y_{i}|D(E(k_{i},W_{i}))) \tag{1}\]
\[L_{lyric2lyric}=-\sum_{(D_{k^{\prime}},D_{k})}\log P(y_{i}|D(E(x_{i},k_{i},W_{i}))) \tag{2}\]
where \(E\) encodes lyrics, keywords, and writing styles into a latent representation, and \(D\) decodes the latent representation into lyrics. \(k_{i}\) denotes the keywords and \(W_{i}\) the writing styles, such as emotion and style, of the source lyrics. \(x_{i}\) indicates the lyrics in \(D^{\prime}_{k}\). \(D_{k}\) is the dataset of source lyrics, and \(D^{\prime}_{k}\) are the lyrics generated by Model\({}_{K2L}\).
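As a rough illustration of Equations (1) and (2), the following sketch computes the token-level negative log-likelihood with teacher forcing; `encoder` and `decoder` are placeholder callables, not the system's actual modules:

```python
import torch
import torch.nn.functional as F

def imitation_nll(encoder, decoder, enc_inputs, target_lyric_ids, pad_id=0):
    """Negative log-likelihood of the target lyrics given the encoded inputs,
    i.e. -sum_i log P(y_i | D(E(.))), computed with teacher forcing."""
    memory = encoder(enc_inputs)                          # latent representation E(.)
    logits = decoder(target_lyric_ids[:, :-1], memory)   # predict the next token at each step
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_lyric_ids[:, 1:].reshape(-1),
        ignore_index=pad_id,                              # padding does not contribute to the loss
    )
```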
**Inference:** During inference, the input to Model\({}_{K2L}\) is controlled by keywords and writing style and is typically less than \(512\) in length. In contrast, Model\({}_{L2L}\)'s inputs include the source lyrics, which can easily exceed the length of \(512\). The most intuitive approach is to truncate the inputs after incorporating the keywords and writing style. However, this approach could easily cut off the controlling elements such as writing style and keywords. To address this issue, when the lyrics exceed \(512\) minus the length of the writing style and keywords, we truncate the last bar of the source lyrics to ensure that the input to the model does not exceed \(512\). It is worth noting that the last bar of the source lyrics often repeats the previous content, so this truncation does not significantly impact the generated lyrics.
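A sketch of this length-control logic, assuming lengths are counted in characters of the concatenated prompt; the helper name and the repeated truncation loop are illustrative:

```python
def build_l2l_prompt(keywords, writing_style, source_bars, max_len=512):
    """Keep the controlling elements intact and drop trailing bars (which tend to
    repeat earlier content) until the prompt fits the length budget."""
    budget = max_len - len(keywords) - len(writing_style)
    bars = list(source_bars)
    while bars and len("".join(bars)) > budget:
        bars.pop()                      # truncate the last bar of the source lyrics
    return keywords + writing_style + "".join(bars)
```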
**Decoding Strategy:** We use a top-k sampling strategy with a sampling temperature of \(0.8\) and a value of \(k\) of \(10\). Additionally, to prevent the model from easily generating duplicate words, we apply a sampling penalty technique proposed by Yadong et al. (2021), which only penalizes the first \(200\) words. In lyrics generation, although the model
can learn the specific format, which is the number of lines and a number of words per line based on the source lyrics, we perform format control decoding to ensure that the generated lyrics have the same format as the source lyrics. To do this, we record the number of lines and words in the generated lyrics and adjust the _[SEP]_ and _[EOS]_ logits in each decoding step.
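A minimal sketch of one decoding step, assuming a CTRL-style form of the repetition penalty and a penalty strength of 1.2 (neither is specified in the text); the format-control adjustment of the _[SEP]_ and _[EOS]_ logits is only indicated by a comment:

```python
import torch

def sample_next_token(logits, generated, k=10, temperature=0.8,
                      penalty=1.2, penalize_first=200):
    """Top-k sampling with temperature; the repetition penalty is applied only to
    tokens among the first `penalize_first` generated ones."""
    logits = logits.clone() / temperature
    for tok in set(generated[:penalize_first]):
        # damp logits of already-used tokens (assumed penalty form)
        logits[tok] = logits[tok] / penalty if logits[tok] > 0 else logits[tok] * penalty
    # format control could additionally mask or boost the [SEP]/[EOS] logits here
    top_vals, top_idx = torch.topk(logits, k)
    probs = torch.softmax(top_vals, dim=-1)
    return top_idx[torch.multinomial(probs, 1)].item()
```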
### Post-processing
After the model training is finished, we can use the source lyrics, provided keywords, and writing styles to generate imitation lyrics with Model\({}_{L2L}\). We utilize the top-k sampling method during decoding to generate candidate lyrics. For each input, the model generates 10 samples. Then we re-rank the samples according to the following scores.
**Lyrics Quality Scoring:** To filter high-quality lyrics, we train a classification model to determine whether a song lyric is a high-quality lyric and consider its confidence score as the Lyrics score, which is called \(S_{Lyric}\), for re-rank. Inspired by QiuNiu (Zhang et al., 2022), we utilize popular and classic lyrics as positive samples, while lyrics with very few plays are negative samples. The experimental results indicate that the model gives a high confidence score when the lyrics contain beautiful sentences and rhetorical devices.
**Relevance Scoring:** In this paper, we introduce a method called \(S_{relevance}\) to measure the semantic similarity between source lyrics and generated lyrics. To calculate \(S_{relevance}\), we use the sentence transformer to obtain sentence vectors for both the source and generated lyrics, and then calculate the cosine similarity to rank the relevance. This method allows us to evaluate the quality of the generated lyrics in terms of their semantic similarity to the original lyrics.
Finally, we apply an anti-spam filter to the lyrics and use a combination of scores to sort them, as shown in Equation 3. We then select the top \(3\) results as the final output. This post-processing step allows us to select the highest-quality lyrics according to our criteria.
\[Score=w_{1}*S_{Lyric}+w_{2}*S_{relevance} \tag{3}\]
where \(w_{1}\) and \(w_{2}\) denote the weights of the corresponding scores. In this paper, we set \(w_{1}\) to \(0.7\) and \(w_{2}\) to \(0.3\).
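A sketch of the re-ranking step combining the two scores as in Equation 3; the sentence-transformer model name and the `quality_scorer` callable are assumptions, not the system's actual components:

```python
from sentence_transformers import SentenceTransformer, util

sent_encoder = SentenceTransformer("distiluse-base-multilingual-cased-v1")

def rerank(source_lyric, candidates, quality_scorer, w1=0.7, w2=0.3, top_n=3):
    """Score each candidate by w1 * S_Lyric + w2 * S_relevance and keep the top_n."""
    src_vec = sent_encoder.encode(source_lyric, convert_to_tensor=True)
    scored = []
    for cand in candidates:
        cand_vec = sent_encoder.encode(cand, convert_to_tensor=True)
        s_relevance = util.cos_sim(src_vec, cand_vec).item()   # cosine similarity to source
        s_lyric = quality_scorer(cand)                         # classifier confidence in [0, 1]
        scored.append((w1 * s_lyric + w2 * s_relevance, cand))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in scored[:top_n]]
```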
**Composing and Singing:** In order to evaluate the quality of lyrics generated by Model\({}_{L2L}\), we annotate popular songs by extracting various musical features, including melody, chord progression, key, structure, and phrasing, using both general music theory 6 and more advanced analytical techniques. Based on these features, we then use intelligent composition (Song et al., 2009) techniques to generate melodies similar to those in the source style. Additionally, we use matching arrangement techniques, virtual vocal timbre selection, and mixing parameter adjustment to produce a fully synthesized song that includes accompaniment and singing. Finally, we incorporate audio information and align the lyrics with the audio to form songs, as shown in Figure 2; these can be enjoyed in the Songs mode of Sudowoodo.
Footnote 6: [https://www.ipr.edu/blogs/audio-production/what-are-the-basics-of-music-theory/](https://www.ipr.edu/blogs/audio-production/what-are-the-basics-of-music-theory/)
## 3 Results of the Experiment
We conduct an ablation study to evaluate the framework proposed in this paper.
**Metrics:** We evaluate the generated lyrics from four perspectives: (1) _Thematic:_ The relevance of the imitation lyrics to the theme of the source lyrics, including love, friendship, family, inspiration, etc. (2) _Fluency:_ The smoothness and naturalness of the language used in the lyrics. In evaluating the fluency of a song's lyrics, we consider factors such as the fluency of the words and the rhythmic structure of the sentences. (3) _Logic:_ The coherence and smoothness of scene transitions in the lyrics. To evaluate the logic of a song's lyrics, we consider whether consecutive sentences describe a single scene. If \(m\) consecutive sentences describe a scene, we regard those sentences as logically reasonable. If \(n\) consecutive groups of \(m\) sentences are found to exist within \(n\) different scenes, the lyrics are considered to have a high degree of smooth scene transitions overall. The number of scene jumps 7 can thus measure the logic of the song. (4) _Overall:_ The overall scoring of a song's lyrics.
Figure 2: The interface of Sudowoodo.
Footnote 7: Scene jumps occur when consecutive sentences describe different things or switch abruptly between different sensory perspectives, resulting in an unnatural or jarring transition.
**Results:** We sample \(100\) lyrics from the source dataset and generate three imitation lyrics for each source lyric. We invite \(3\) professional lyricists to score each of the 300 lyrics based on _Thematic_, _Fluency_, _Logic_, and _Overall_. The score ranges from 1-5, with 5 being the best and 1 being the worst. The results are shown in Table 1, where all scores are averages for one song. We observe that the thematic and overall scores of Model\({}_{L2L}\) exceed those of Model\({}_{K2L}\). Additionally, we also verify the effect of a variant that uses only the lyrics as input, without keywords and writing style, and find that adding keywords and writing style improves the fluency of the generated lyrics. When lyrics are generated for the same source lyrics with all three end-to-end methods, the methods based on generated lyrics outperform the keyword-based method in 67.25% of cases. This indicates that training on generated lyrics can improve the performance of a lyric imitation model.
## 4 Demonstration
This section demonstrates how the _Sudowoodo_ system works.
The user interface for this demo is shown in Figure 2. As an imitation demo, it offers limited interaction with the user. The _Sudowoodo_ system operates in two modes: _Lyrics_ and _Songs_. In _Lyrics_ mode, the user is required to select the source lyrics and the desired sentiment for the generated lyrics. Additionally, the user may provide keywords, which are typically space-separated phrases such as " (_freedom love_)", or leave the field blank. When generating lyrics, the _Sudowoodo_ system takes into account the writing style of the selected source lyrics, including its theme, rhymes, and provided keywords, as well as the desired sentiment. The provided keywords are highlighted for easy identification. Note that not all provided keywords are necessarily used in the generated lyrics. In _Songs_ mode, the user can select the name of the source lyrics to hear the generated lyrics as a song. Due to technical limitations, the songs are rendered offline. In this paper, we apply three different AI singers to provide the vocals. Finally, the user can click "Generate!" to produce the output.
\begin{table}
\begin{tabular}{l|c c c c c} \hline & Theme (avg.) & Flu (avg.) & Logic (avg.) & Overall (avg.) & Best (\%) \\ \hline Model\({}_{K2L}\) & 4.168 & 4.103 & **3.480** & 4.078 & 32.75 \\ \hline Model\({}_{L2L}\) & 4.250 & **4.160** & 3.460 & **4.153** & **34.25** \\ w/o WS & **4.275** & 4.108 & 3.415 & 4.148 & 33 \\ \hline \end{tabular}
\end{table}
Table 1: Human evaluation results of the ablation study. The scores in the table are the average scores of the three annotators. "Best" indicates that the model achieves Top-1 on the validation dataset for the same source lyric among the three end-to-end lyrics imitation methods. _Flu_ means _Fluency_, and _Theme_ is _Thematic_ in metrics. WS means the writing styles such as keywords, style, and emotion.
Figure 3: An instance of imitation lyrics in Lyrics mode. We enter "\(\boxplus\) (love)" and "\(\boxplus\) (freedom)" as keywords. As can be seen from the picture, not all of the entered keywords are necessarily used. The red color in Chinese and English indicates keywords.
Next, we show some generated examples in Figure 3.
**Lyrics:** The leftmost column of the displayed lyrics represents the source lyrics selected by the user, while the three columns on the right show the generated imitation lyrics. If the user has entered keywords, these are highlighted in red within the generated lyrics. This demo generates smooth, high-quality lyrics in a format and writing style similar to the source lyrics for each generation.
**Songs:** Figure 4 shows the results in Songs mode. As the real-time rendering of songs is a challenging task, we have performed offline rendering for this demo. A player is provided above the generated lyrics, which can be clicked on to hear the resulting song after rendering with the imitation lyrics. In the future, we aim to integrate real-time rendering of songs to create a true lyric imitation system that can take source lyrics and generate corresponding songs. More experiences are available in Sudowoodo.
## 5 Conclusion
In this paper, we describe _Sudowoodo_, a Chinese lyric imitation system that supports two modes: Lyrics and Songs. In Lyrics mode, users can input keywords to generate imitated lyrics based on existing lyrics. In Songs mode, _Sudowoodo_ uses intelligent composition techniques to generate music that accompanies the imitated lyrics, creating a complete song. To address the lack of a lyric-to-lyric alignment corpus, we propose a novel training framework to construct a parallel corpus for lyric imitation. Additionally, we apply a Chinese pre-trained GPT-2-style model for initialization. To improve the quality of the generated lyrics, we employ a post-processing module to sort the generated results and select the highest-quality ones. Finally, we audio-align some of the imitation lyrics to form songs.
|
2301.07101 | Distributed LSTM-Learning from Differentially Private Label Proportions | Data privacy and decentralised data collection has become more and more
popular in recent years. In order to solve issues with privacy, communication
bandwidth and learning from spatio-temporal data, we will propose two efficient
models which use Differential Privacy and decentralized LSTM-Learning: One, in
which a Long Short Term Memory (LSTM) model is learned for extracting local
temporal node constraints and feeding them into a Dense-Layer
(LabelProportionToLocal). The other approach extends the first one by fetching
histogram data from the neighbors and joining the information with the LSTM
output (LabelProportionToDense). For evaluation two popular datasets are used:
Pems-Bay and METR-LA. Additionally, we provide an own dataset, which is based
on LuST. The evaluation will show the tradeoff between performance and data
privacy. | Timon Sachweh, Daniel Boiar, Thomas Liebig | 2023-01-15T22:11:07Z | http://arxiv.org/abs/2301.07101v1 | # Distributed LSTM-Learning from Differentially Private Label Proportions
###### Abstract
Data privacy and decentralised data collection have become more and more popular in recent years. In order to solve issues with privacy, communication bandwidth and learning from spatio-temporal data, we propose two efficient models which use Differential Privacy and decentralized LSTM-Learning: one in which a Long Short Term Memory (LSTM) model is learned for extracting local temporal node constraints and feeding them into a Dense-Layer (LabelProportionToLocal), and another that extends the first one by fetching histogram data from the neighbors and joining the information with the LSTM output (LabelProportionToDense). For evaluation, two popular datasets are used: Pems-Bay and METR-LA. Additionally, we provide our own dataset, which is based on LuST. The evaluation shows the tradeoff between performance and data privacy.
Long-Short Term Memory (LSTM), Differential Privacy, Learning from Label Proportions (LLP), Distributed Learning, Spatio-Temporal, Traffic, IoT
## I Introduction
In the last few years, the increased popularity of the Internet of Things (IoT) has led to an increasing amount of decentralized data collection. Currently, the data is usually sent to a central instance, where it is processed. Centralized learning methods have several problems:
The first aspect that becomes clear is the lack of _data protection_. Especially with the introduction of the _General Data Protection Regulation (GDPR)_[1], a lot has changed in terms of _data privacy_, which must be implemented by everyone using sensitive data. Because of divergent goals between data protection and learning from data, this is an urgent topic.
Another aspect is the ever-increasing number of Internet participants. Taking into account that the maximum Internet traffic capacity is not increasing at the same rate, the bandwidth per device is shrinking. In the long term, this can result in bottlenecks for participants that need high bandwidth rates.
The two proposed _fully distributed deep learning_ algorithms will ensure flexible _data privacy_ by setting a hyperparameter to balance both aspects: privacy and prediction accuracy.
### Existing Approaches
Since there are already algorithms that cover parts of the challenges of traffic flow prediction with privacy aspects, we will briefly distinguish our approach from the existing ones.
Most algorithms usually focus on one of the two relevant properties, _privacy_ or _prediction accuracy_. For example, the _dp-LLP_ algorithm [2], introduced by Sachweh et al., is a fully distributed learning algorithm that uses only locally collected data, as well as data from the neighbor nodes, to predict traffic flow. The authors introduced a variant of _Label Proportions_, originally developed in [3, 4, 5], extended by _Differential Privacy_ to ensure that data transfer is protected. Key advantages of this approach are _less data traffic_ during execution and full _Data Privacy_. One negative aspect is the low complexity of the integrated \(k\)_-Means_ learning algorithm, which results in worse prediction accuracy. In addition, the authors do not use time-dependent features, which is an important feature space especially for traffic flow.
Another algorithm that uses _Label Proportions_ for data transfer was developed by Dulac et al. [6]. The authors use the original _Learning from Label Proportions (LLP)_[4] approach and change the learning model by using a _Long-Short Term Memory (LSTM)_ model. This more complex model results in higher prediction accuracy results with lower energy consumption, shown using the MNIST dataset. Since this approach lacks time-dependent features too, it is not perfectly suitable for vehicle traffic prediction.
In contrast, [7] makes full use of spatio-temporal data. One global _Graph Convolutional Neural Net (GraphCNN)_ model is learned, with decentralized sensors as nodes in the graph. Local sensor data trains the local neighborhood in the graph. With this approach original data labels are propagated. Therefore it is not _privacy preserving_ at all. This approach also has high _energy consumption_ and is not ready for _peer to peer_ scenarios.
Our approach builds on the experience of the papers presented, and will leverage the data protection properties from [2], as well as comparable performance to [6, 7].
## II Fundamental Work
In the following we provide basic knowledge of various privacy-preserving data transfer methods, as well as an explanation of _Differential Privacy_. _Long-Short Term Memory
(LSTM)_ models and the _\(k\) Nearest Neighbor (\(k\)NN)_ classifier will be described as well.
### _Privacy Preserving Data Transfer_
The distributed learning setting has led to more and more private data transfer methods. Concerning data transfer the following general concepts can ensure privacy:
* _Homomorphic Encryption_[8] ensures that encrypted data is transformed into another space with similar learnable features. Therefore models can be learned with encrypted data, as shown in [9]. One downside of this method is the high usage of computation resources.
* _Masking_ addresses this problem by inserting _Camouflage Values_. Those are fake values, that mask the original data points.
* An alternative to _Masking_ is data aggregation. This has the advantage of data compression and a lot of variety in search queries. Temporal aggregation of labels in so-called label proportions is the core idea of related works [2, 4, 5, 6]. Besides the possibility to hide individual data points in an aggregate, _Learning from Label Proportions (LLP)_ also reduces communication costs and energy consumption.
Because of the lack of a central authority in a fully distributed scenario, encryption does not directly provide a solution. Recent work that combines learning from label proportions with differential privacy (compare next section) [2] is promising, and we therefore utilize _Data Aggregation_ for our approach as well.
### _Differential Privacy_
_Data Aggregation_ methods applied directly to raw data are problematic, because the resulting histograms still reveal detailed information even though the individual data points were aggregated.
Therefore [2] extended _Data Aggregation_ (building histograms) by adding _Differential Privacy_ to the histograms. _Differential Privacy_ adds noise to each bin to ensure a specific privacy guarantee.
#### Iii-B1 Differential Privacy Definition
In general, an algorithm is \((\varepsilon,\delta)\)-differentially private, if for all \(S\subseteq R\) Equation 1 is valid [10]. \(M:D\to R\) denotes a randomized algorithm and \(D^{\prime},D^{\prime\prime}\in D\) are sets, which differ at most by one element (\(||D^{\prime}-D^{\prime\prime}||_{1}\leq 1\)).
\[Pr[M(D^{\prime})\in S]\leq e^{\varepsilon}Pr[M(D^{\prime\prime})\in S]+\delta \tag{1}\]
The definition states that the probability distributions, that the output of \(M\) is in \(S\) for different inputs \(D^{\prime}\) and \(D^{\prime\prime}\) differ at most by a factor of \(e^{\varepsilon}\) and a constant value of \(\delta\). In our approach the constant factor \(\delta\) is \(0\) and called \(\varepsilon\)-differentially private if it satisfies the Equation 1.
#### Iii-B2 Sensitivity
Sensitivity is needed to gain information about the maximum influence of a single data point in a dataset \(D\). Therefore \(l_{1}\)-sensitivity is defined as follows:
\[\Delta f=max_{\begin{subarray}{c}D^{\prime},D^{\prime\prime}\in D,\\ ||D^{\prime}-D^{\prime\prime}||_{1}=1\end{subarray}}||M(D^{\prime})-M(D^{ \prime\prime})||_{1} \tag{2}\]
It states that every two subsets \(D^{\prime},D^{\prime\prime}\in D\), which differ exactly by one element, are checked for the maximum \(l_{1}\) distance of the outputs of \(M\) with different inputs \(D^{\prime}\) and \(D^{\prime\prime}\).
#### Iii-B3 Laplacian Noise
To satisfy Equation 1, we need to apply noise to each bin. The noise calculation must be scaled by the privacy parameter \(\epsilon\), as well as the \(l_{1}\)_-sensitivity_. To achieve this, we introduce _Laplacian distribution_, defined as follows:
\[lap(x|\sigma,\mu)=\frac{1}{2\sigma}e^{-\frac{|x-\mu|}{\sigma}} \tag{3}\]
Parameter \(\mu\) sets the mean value; in our case, the mean is \(0\). By inserting \(\frac{\Delta f}{\epsilon}\) for \(\sigma\), the variance of the _Laplace distribution_ depends on the \(l_{1}\)_-sensitivity_ and the privacy factor \(\epsilon\). Theorem 3.6 in [11] proves that the _Laplace distribution_ ensures the \((\varepsilon,0)\)-Differential Privacy bound for \(\sigma=\frac{\Delta f}{\epsilon}\) and \(\mu=0\).
By varying \(\varepsilon\), the privacy guarantee can be changed. Experiments have shown that \(\epsilon=0.1\) is a good setting to ensure privacy while keeping enough information for learning on the _differentially private_ data.
### _Long-Short Term Memory (LSTM)_
Long short-term memory (LSTM) [12] is a model often used for time series prediction (e.g. traffic [13]). Feedforward networks can be enriched with temporal loops to process sequential data, yielding so-called recurrent neural networks [14], but this purely Markovian modeling approach has drawbacks. Recurrent neural networks process a sequence of information and, in some sense, serve as a memory that collects and stores information about what the system has computed so far, looking back a few steps to use previous information for the current prediction. However, plain recurrent networks struggle to relate inputs and outputs that are connected but lie further apart in the sequence, which poses challenges for many tasks where such context is needed to predict the future.
In LSTM, the data transfer process is the same as in standard recurrent neural networks. However, the operation to propagate the information differs. As the information passes through, the model selects which information to process further and which to let pass. The network structure consists of cells, each consisting of 3 gates (input, output, forget). Each of the gates themselves can be considered a feedforward network. However, they are connected by the state of the cell. The state of a cell acts as a path to transmit information. Cells are, therefore, memories.
### \(k\) _Nearest Neighbor (\(k\)NN)_
The \(k\) _Nearest Neighbor (\(k\)NN)_ classifier is a non-parametric supervised learning algorithm. It can be adapted to be well suited for traffic prediction [15]. It was originally developed by Fix and Hodges in 1951 [16].
The general concept is to store all measured data, for example, the traffic flow rate and the corresponding time as one feature vector. There is no training phase. To predict, the distances from the query to all stored feature vectors are computed, the labels of the closest \(k\) vectors are retrieved, and the final prediction is obtained by a majority vote over these \(k\) labels.
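A minimal sketch of this procedure; the Euclidean distance and the majority vote follow the description above, while the feature layout is illustrative:

```python
import numpy as np
from collections import Counter

def knn_predict(stored_X, stored_y, query, k=5):
    """Majority vote over the labels of the k stored feature vectors closest to the query."""
    dists = np.linalg.norm(stored_X - query, axis=1)   # distance to every stored vector
    nearest = np.argsort(dists)[:k]                    # indices of the k closest vectors
    return Counter(stored_y[nearest].tolist()).most_common(1)[0][0]
```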
## III Our Approach
Our approach combines the advantages of DP-LLP [2] and [6] to build a _fully decentralized_ learning algorithm which ensures _privacy_ and results in good forecasting performance. First, we will describe the general data exchange between the distributed nodes and the integration of _Differential Privacy_. Afterwards, we show the advanced _LSTM_ model, which uses differentially private neighbor information to learn.
### _Distributed Network_
A general distributed network setup is shown in Figure 1. We have a list of nodes and an adjacent matrix, which defines the edges between nodes.
We denote by \(j\) the node that is currently observed. For \(i\in\{1,\dots,|N_{j}|\}\), \(n_{i}(j)\) are the neighbors of \(j\), where \(|N_{j}|\) denotes the number of neighbors of \(j\), as shown in Figure 1. In our approach, data is only transferred between direct neighbors, illustrated with arrows between \(j\) and its neighbors. The histograms in Figure 1 indicate that no original data is sent via the network but only intervals aggregated over larger time frames, so-called _buckets_. All transferred data ensure \(\epsilon\)-Differential Privacy and are used by the neighbors to learn an _LSTM_ model.
### _Distributed Data Exchange_
As mentioned before, transferred data must be \(\epsilon\)-differentially private. To ensure this, data is first discretized and then sliced into fixed time windows of size \(w\). Each window contains the data points \(x_{l}\) for \(l\in[0,w]\). For each window, a histogram is calculated to aggregate the data. Additionally, Laplacian noise is added with \(\sigma=\frac{1}{\varepsilon}\) and \(\mu=0\); the sensitivity is fixed to \(1\) because the maximum influence of a single data point in a counting query is \(1\). Finally, we obtain an \(\epsilon\)-differentially private histogram, which is sent to the neighbors.
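A minimal sketch of this aggregation step, assuming NumPy and illustrative bin edges:

```python
import numpy as np

def private_histogram(window_values, bin_edges, epsilon=0.1, rng=None):
    """Aggregate one time window into a histogram and perturb each bin with
    Laplace noise of scale 1/epsilon (a counting query has l1-sensitivity 1)."""
    rng = rng or np.random.default_rng()
    counts, _ = np.histogram(window_values, bins=bin_edges)
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return noisy   # this noisy histogram, not the raw window, is sent to the neighbors
```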
However, to say that the whole algorithm will be \(\varepsilon\)-differentially private, one has to show that later processing on \(\varepsilon\)-differentially private data is at least \(\varepsilon\)-differentially private. Fortunately, Proposition 2.1 from [11] proves exactly this. The authors show, that every _Post-Processing_ on data that is \((\varepsilon,\delta)\)-differentially private will result in a \((\varepsilon,\delta)\)-differentially private outcome, too.
Using this Proposition, we can prove that applying noise at each transferred histogram is sufficient to ensure a fully \(\varepsilon\)-differentially private algorithm.
### _Model Architecture_
In principle, our model can be split into two parts as visualized in Figure 2. The first part comprises the node-wise learning block, where each node uses its data for training. The second block explicitly trains the last layer of the model by using aggregated data from the neighbors as additional information.
In Figure 2 it can be seen that we are using the local data of node \(j\), denoted as \(x\), as inputs for the _LSTM_ model. Parallelly, the aggregated spatial data from neighbors is built up and sent to node \(j\). Node \(j\) then uses the aggregated data to improve the learning of the network's last _Dense-Layer_. Finally, the outcome of the _Dense-Layer_ is the prediction \(\hat{y}\), from which the gradient could be determined, and all weights will be updated.
In the following, we will describe in more detail how the learning in both phases works.
#### Iii-C1 Local Node Learning
The local node learning phase consists of an _LSTM_[17, 18], which builds features containing temporal dependencies. Features are learned on fixed time windows of \(w=12\) time steps (around \(1\) hour in Pems-Bay). By replacing \(k\)-Means with an _LSTM_ model, we gain more learned information about time-dependent features. When using multiple features, e.g., speed and density, they are handled independently by the _LSTM_. The output is then limited to the \(R_{+}\) value range by a downstream _Rectified Linear Unit (ReLU)_ activation function. Outputs of the _ReLU_ are used to feed the _Local Linear Layer_, which is initialized with the _identity matrix_. The reason for this is to first let all information through and later, during training, modify the weights to gain more information. The process of feeding the outputs of the _ReLU_ into the initialized _Local Linear Layer_ is depicted in Figure 3.
Fig. 1: Network Architecture: Connected nodes specified by an adjacency matrix. Node \(j\) is the observed node and \(n_{1}(j),n_{2}(j),n_{3}(j)\) are the direct neighbor nodes, which send aggregated histograms to \(j\) for learning.
Fig. 2: Proposed distributed Label Proportion LSTM architecture. The LSTM Block contains an LSTM layer, followed by a ReLU and a local linear layer.
The first block of the model ends before feeding information to the _Local Linear Layer_. This first phase uses the local information for learning. However, because both blocks of the model are updated in a single _Backpropagation_, we need to describe the second part before determining the loss calculation.
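A simplified PyTorch sketch of the local block; for brevity it applies a single identity-initialized linear layer to the last LSTM output instead of the per-time-step matrices of Figure 3, and the feature count and hidden size are assumptions:

```python
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    """Per-node block: LSTM -> ReLU -> local linear layer initialized as identity."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.local = nn.Linear(hidden, hidden, bias=False)
        with torch.no_grad():
            self.local.weight.copy_(torch.eye(hidden))   # start by letting all information through

    def forward(self, x):                  # x: (batch, w, n_features), e.g. w = 12 time steps
        out, _ = self.lstm(x)
        h = torch.relu(out[:, -1, :])      # restrict the output to the R+ range
        return self.local(h)
```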
#### Iii-B2 Learning with Aggregated Spatial Information
In the second phase, we use the aggregated spatial neighbor information. To enable this, we use the transferred noisy histograms as described previously. Thus, for each time frame, we have \(|N_{j}|\) histograms, one from each direct neighbor.
These histograms are also used as inputs for the model - but only for the _Dense-Layer_. To fix the length of the additional input vector, we average the received histograms. This is needed because we cannot guarantee that the number of neighbors is always the same.
The resulting averaged histogram is concatenated with the outputs of the _LSTM + ReLU + Local Linear Layer_ and fed into the _Dense-Layer_ as inputs. The Dense-Layer is applied channelwise, such that the histograms of speed and density are concatenated with time features of speed and density individually. The arrows denote this data flow from the _LSTM_ and from the neighbor histogram in Figure 2.
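A sketch of this second block, which averages the neighbors' histograms and concatenates them with the local block output before the _Dense-Layer_; the channelwise application described above is omitted for brevity and all sizes are placeholders:

```python
import torch
import torch.nn as nn

class NeighborDenseHead(nn.Module):
    """Combine local temporal features with the averaged neighbor histogram."""
    def __init__(self, local_dim=32, n_bins=20):
        super().__init__()
        self.dense = nn.Linear(local_dim + n_bins, 1)

    def forward(self, local_features, neighbor_histograms):
        # neighbor_histograms: (batch, n_neighbors, n_bins); averaging keeps the
        # input length fixed regardless of how many neighbors actually responded
        avg_hist = neighbor_histograms.mean(dim=1)
        return self.dense(torch.cat([local_features, avg_hist], dim=-1))
```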
The main advantage of this approach is to use more information from which the model can be trained. Therefore we have updated weights with information from neighboring nodes, which eventually result in better prediction behaviors after the _Dense-Layer_.
Nevertheless, there are also some downsides to this approach. For example, when predicting for a specific node, one must gather the neighbors' histogram information, so a stable network connection is required. To work around this problem, one can use the neighbors' previous histogram values or node \(j\)'s own histograms. We consider that this architecture does not depend on a loss regularization term alone and assume it is more accurate because of the additionally learned information. A similar assumption was also made by the authors of [2, 4].
#### Iii-B3 Loss Calculation
To this point, we described the data flow from input \(x\) and the histograms to the prediction \(\hat{y}\). For development, _PyTorch_[19], _PyTorch Geometric_[20] and a modified Pytorch implementation of [18] were used. Therefore, all layers in the model architecture result in one _Gradient Graph_. Based on this graph, the loss can be backpropagated. We used the _Mean Squared Error (MSE)_ loss for calculating the deviation between prediction and actual data values:
\[loss_{MSE}=\frac{1}{|B|}\sum_{i}^{|B|}(\hat{y}_{i}-y_{i})^{2} \tag{5}\]
Because we are using batches with size \(|B|\), the squared error loss is calculated for every prediction in the batch, and afterwards, the total amount is normalized by \(|B|\). With this setup, we can propagate the error back to both, the _Dense-Layer_ with histograms from the neighbors and the _LSTM_ with inputs of \(x\).
## IV Experimental Evaluation
The experimental results compare our approach against the state-of-the-art _kNN_ in a centralized computation setting (compare subsection II-D). Performance results are measured on the real-world, large-scale traffic datasets _Pems-Bay_[21], _METR-LA_[22] and our own dataset generated from the _Luxembourg SUMO Traffic (LuST)_ scenario [23]. We aim to show the level of privacy that we can reach with our distributed learning approach by relating it to the prediction accuracy. Before we go into the experiment settings and results in detail, we briefly describe the datasets and introduce the relevant steps in the evaluation.
### _Datasets_
First we will give a detailed overview over the used datasets, especially how the _LuST_ dataset is built up.
#### Iv-A1 Pems-Bay
The _Pems-Bay_ dataset was collected by the California Transportation Agencies (CalTrans) utilizing the _Performance Measurement System (PeMS)_. The dataset is based on 325 Bay Area sensors collected from Jan 1st, 2017, to May 31st, 2017, in 5-minute intervals. Each data point contains information about the traffic density and a normalized time value. _Pems-Bay_ is mainly used to verify the prediction accuracy applicable to non-euclidean structural models.
#### Iv-A2 METR-LA
_METR-LA_ is a similar dataset to _Pems-Bay_. The contained data was collected from _Los Angeles County Highway_ loop detectors from March 1st, 2012, to June 30th, 2012. In total, 207 sensors were used to collect traffic density data.
#### Iv-A3 LuST
As the third dataset, we introduce a new one based on the _Luxembourg SUMO Traffic (LuST)_[24] scenario. This scenario is executed in the _Simulation of Urban Mobility (SUMO)_ environment [25], which was built in order to have a stable basis to develop and test data based mobility solutions.
We let the simulation run and extracted the traffic counts and the corresponding speeds for every street in 5-minute intervals. As metadata, we collected the graph information and built an adjacency matrix in which the relations between streets were stored. The resulting dataset contains traffic density and speed for the Luxembourg simulation covering an entire day.
Fig. 3: Local Linear Layer. The parameters of the matrices are not shared over time stamps, but are trainable and initialized with the identity matrix.
The big advantage of this dataset generation is that the same process can be executed on a different simulation or for longer intervals. In our setting, we use the short time frame of one day to analyze whether the model can also learn from short time periods.
Our prepared _LuST_ dataset can be downloaded from the following google drive: [https://drive.google.com/uc?export=download&id=1OjPkvptYb22010](https://drive.google.com/uc?export=download&id=1OjPkvptYb22010)
#### Iv-A1 MSE
An overview of the general test accuracy based on _MSE_ is shown in Table I.
The first column denotes the used dataset whereas the second and third column denote the model and \(\varepsilon\) privacy parameter. When no privacy degree \(\varepsilon\) can be specified, the value is shown as x. It stands out that the metric ranges from \(0.3\) to \(1.02\), lower being better. Obviously, the general performance is highly dependent on the datasets. For example, the _MSE_ of the _LuST_ dataset is around \(0.38\) to \(0.47\), whereas no value is below \(0.48\) on the _METR-LA_ dataset.
Looking at the figures, it is obvious that the _LabelProportionLocal_ or _LabelProportionToDense_ algorithm always achieves better results than the _kNN_ as a centralized approach. This is probably due to the fact that the integrated _LSTM_ in our approach is better able to represent temporal components. Another noticeable aspect is the increasing _MSE_ value for stronger _privacy guarantees_ obtained by reducing \(\varepsilon\). This is an expected degradation in accuracy, since adding noise reduces the information content of the neighbor data. It is, however, interesting to see that with noise the performance becomes worse than that of the _LabelProportionLocal_ algorithm, which does not use additional neighbor information. Only without noise is the performance of _LabelProportionToDense_ the best on _METR-LA_ and _Pems-Bay_ in comparison to the other algorithms.
Based on these general results, one can say that our approach of sending _histograms_ between direct neighbors improves the general prediction performance. By using the _LSTM_ as the central learning model for local data, we manage to outperform the centralized _kNN_ algorithm. When using \(\varepsilon\)_-Differential Privacy_ to ensure the privacy of the exchanged data, we measured a significant increase of the _MSE_ error, which results in worse performance than when using no neighbor information.
#### Iv-A2 Prediction Curve Pems-Bay - Overview
Because a single metric is not very meaningful, we plot the predicted values of the different algorithms on the _Pems-Bay_ dataset in Figure 4. The actual measured data is depicted by the solid grey line in the background. In the foreground, the three different algorithms are compared to each other. This chart does not include the privacy-preserving variants with privacy parameter \(\varepsilon\). For better visibility, the predictions of the algorithms are split into sections of the test set. Therefore, the prediction of the _kNN_, shown as the blue line, is only plotted for the first 3500 steps. Following this, the prediction of the _LabelProportionLocal_ approach is plotted in orange up to time step 7500. Finally, the predicted values of the _LabelProportionToDense_ approach are shown in black. As seen from the time scale, we have predictions for approximately 10500 time steps, which equal \(10500*5min=52500min=875h\approx 36\) days.
Therefore, only the general prediction shape can be seen, which is quite accurate. At most times, the _kNN_ and our approaches nearly meet the actual measurement curve. There are some exceptions where the data cannot be fitted very well; especially peaks in the actual car speed measurements are not recognized by the approaches. For example, the _kNN_ does not predict the sharp drop in speed at around time step 150. The same issue occurs when looking at the _LabelProportionLocal_ approach around time step 5800. The only algorithm that fits the curve nearly perfectly at this large scale is the _LabelProportionToDense_ approach.
Fig. 4: Test set prediction for one node in 5-minute prediction intervals on the Pems-Bay dataset. The solid gray line represents the ground truth, whereas the blue, orange and black dotted lines are the predictions of the different algorithms. For clarity, only parts of the test set were plotted for each algorithm.
In this overview, the cycles in which traffic speed goes down during rush hour can be seen clearly. During a normal day, the measured value oscillates around 70, which is the expected traffic speed curve. This can be the reason why our approaches reach better prediction accuracy: _time-dependent_ features are extracted by the LSTM.
#### V-A3 Prediction Curve Pems-Bay - Detailed
The plot over the entire test period only shows tendencies indicating the prediction accuracy. For more detail, we cut out a section of 200 time steps and display it in Figure 5. This plot represents \(200*5min=1000min\approx 17h\), covering part of a day and a night phase. Rush hour is visible in the time steps starting from 6535, and the night phase begins around time step 6610, where speed is nearly constant. We chose the slice by searching the dataset for the area where the _MSE_ was lowest, which yielded the range from time step 6468 to 6668.
The actual measurement values are visible in gray. The kNN, as well as the _LabelProportionLocal_ and _LabelProportionToDense_ are shown solid in the same color as the previous chart. Additionally there are two more variations plotted. Those are our approaches (_LabelProportionToDense_), where noise is applied by \(\varepsilon\)-Differential Privacy. Variations with noise are highlighted by dashed (\(\varepsilon=0.5\)) or dotted (\(\varepsilon=0.1\)) lines.
As the figure indicates, the prediction is not as accurate as Figure 4 suggests. Here, all small deviations in the prediction become noticeable.
Focusing on the _kNN_, it mostly underpredicts the real-world measurements. For steep drops, visible at around time step 6535, the prediction is far from the real curve. In general, the prediction of the _kNN_ looks somewhat like a step function; therefore, all the little variations are not predicted well.
Compared to this, our approach without histogram transfer (_LabelProportionLocal_) fits the real world measurements quite well. Especially the steep drop, which the _kNN_ could not handle, is fitted well. For most predictions it is just slightly above the real world data and at some peaks, like in time step 6590, it is shifted along the time axis. At those points, the peak is predicted a bit later. But in general, this prediction curve is quite close to the original measurements.
The only approach better than this is the _LabelProportionToDense_ approach. Deviations between its predictions and the real data are almost not visible. Only slight jumps of the original data, which are not relevant for the general traffic speed, are not predicted. Judging from Figure 5, this algorithm achieves the best results of all tested ones.
However, no privacy guarantee can be given for the histograms in this setting. Therefore, we added _differentially private_ variations. For \(\varepsilon=0.5\), the dashed curve shows the predictions. These predictions are also quite good and fit the real-world data well. Sometimes peaks are predicted where no real speed peak exists, as can be seen at time step 6485.
For _LabelProportionToDense_ with \(\varepsilon=0.1\), the dotted line shows the predictions. These are much worse than those of all other algorithms. When analyzing the curve, one can see that it mostly predicts values around 66 and varies only slightly. Therefore, it looks like the approach has learned to simply predict the mean value. The reason for this can be the noise applied to the histograms: the noise could have resulted in nearly uniformly distributed histograms that no longer contain usable information, in which case the model can only learn the mean distribution, or mean value. For this reason, it looks like a privacy setting of \(\varepsilon=0.1\) adds too much noise to retain useful information from the neighboring histograms, which are afterwards averaged again over all neighbors.
Fig. 5: Testset slice of 200 time stamps. Prediction for one node in 5 minute prediction intervals on the PEMS-BAY dataset. The solid gray line represents the ground truth, whereas the blue, orange and black solid lines are the predictions of different algorithms. For clarity, only parts of the test set were plotted for each algorithm. When \(\varepsilon\)-Differential Privacy is applied, the lines are dashed or dotted.
## V Conclusion
In conclusion, the general approach adopted from [6] results in very good prediction accuracy on spatio-temporal data, as used in our evaluation with _Pems-Bay_, _METR-LA_ and _LuST_. We could show that the local approach of using an _LSTM_ combined with a _ReLU_ and a _Dense-Layer_ results in very good predictions, because temporal information is extracted well by the _LSTM_ model. By adding neighbor information in the form of histograms, we could improve results by \(0.28\) on _METR-LA_ and \(0.19\) on _Pems-Bay_, as shown in Table I. However, adding _Differential Privacy_ to the neighboring histograms has a significant impact on the learning performance. As shown by the evaluation, it is sometimes better not to use noisy neighbor data to train the model.
For the future, it is conceivable that either the second-degree neighbors will be included or that an attempt will be made to transfer _differentially private_ data with less information loss or misinformation.
|
2303.09755 | On the Effect of Instrumentation on Test Flakiness | Test flakiness is a problem that affects testing and processes that rely on
it. Several factors cause or influence the flakiness of test outcomes. Test
execution order, randomness and concurrency are some of the more common and
well-studied causes. Some studies mention code instrumentation as a factor that
causes or affects test flakiness. However, evidence for this issue is scarce.
In this study, we attempt to systematically collect evidence for the effects of
instrumentation on test flakiness. We experiment with common types of
instrumentation for Java programs - namely, application performance monitoring,
coverage and profiling instrumentation. We then study the effects of
instrumentation on a set of nine programs obtained from an existing dataset
used to study test flakiness, consisting of popular GitHub projects written in
Java. We observe cases where real-world instrumentation causes flakiness in a
program. However, this effect is rare. We also discuss a related issue - how
instrumentation may interfere with flakiness detection and prevention. | Shawn Rasheed, Jens Dietrich, Amjed Tahir | 2023-03-17T03:39:32Z | http://arxiv.org/abs/2303.09755v1 | # On the Effect of Instrumentation on Test Flakiness
###### Abstract
Test flakiness is a problem that affects testing and processes that rely on it. Several factors cause or influence the flakiness of test outcomes. Test execution order, randomness and concurrency are some of the more common and well-studied causes. Some studies mention code instrumentation as a factor that causes or affects test flakiness. However, evidence for this issue is scarce. In this study, we attempt to systematically collect evidence for the effects of instrumentation on test flakiness. We experiment with common types of instrumentation for Java programs--namely, application performance monitoring, coverage and profiling instrumentation. We then study the effects of instrumentation on a set of nine programs obtained from an existing dataset used to study test flakiness, consisting of popular GitHub projects written in Java. We observe cases where real-world instrumentation causes flakiness in a program. However, this effect is rare. We also discuss a related issue--how instrumentation may interfere with flakiness detection and prevention.
Flaky Tests, Test Bugs, Instrumentation
## I Introduction
A test can only provide useful feedback if it consistently has the same outcome (either pass or fail) for every execution with the same code version. Flaky tests may pass in some runs and fail on others. Test flakiness has been gaining the attention of both academia and industry because of its negative impact on testing, testing-dependent processes (especially automated testing in CI/CD pipelines), and techniques that rely on executing tests [1, 2]. Several factors can cause test flakiness, including concurrency, test order dependency, network, shared state and platform dependencies. Most of these causes are common across programming languages and platforms [3, 4, 5].
One potential factor that may have an impact on test flakiness is instrumentation. Ideally, for common use cases of instrumentation, such as code coverage, the effect of instrumentation should be transparent to the application. Concerning test flakiness, this means that test outcomes should remain the same with or without instrumentation. This may well not be the case in practice, though. A study on code coverage at Google [6] describes flakiness as a cause for failed coverage computation. Lam et al. [7] explain how their instrumentation for root-causing flakiness interferes with program behaviour and leads to increased/decreased test flakiness.
However, there are gaps in these studies with respect to the question we are interested in. Those studies are not focused on the effect of instrumentation on flakiness, and the full datasets are not available as the studies are from the industry. This work is an attempt towards addressing this, in which we discuss the effect of instrumentation on test flakiness, and how it affects flaky test prevention/detection techniques. We propose evaluation metrics and perform a preliminary evaluation on a dataset used in a previous test flakiness study to determine whether instrumentation impacts flakiness. In this work, we address the research question:
**RQ:** Does instrumentation increase/decrease test flakiness?
## II Related work
### _Flaky test detection_
Several techniques have been proposed to detect flaky tests, most of which determine flakiness by observing transitions of test outcome across multiple runs [8, 9]. There are two main approaches used for detecting flaky tests: static techniques that rely only on analyzing the test code without actually executing tests or dynamic techniques that involve the execution of tests [2]. Most of the existing tools focus on specific causes of test flakiness, such as concurrency (e.g., Shaker [10]) and test order-dependency (e.g., iFixFlakies [11]). There have been recent attempts to build lightweight static approaches for flaky test prediction that aim to avoid or minimize test reruns [12, 13].
### _Instrumentation and flakiness_
Wing et al. [7] used instrumentation to record runtime properties for root-causing flaky tests. They reported that their instrumentation could change runtime behaviour and thus decrease or increase test flakiness. In their study, runtime overhead from instrumentation was observed to affect the reproducibility of flaky tests (by executing a random sample from 59 flaky tests, two tests were flaky only with instrumentation and three were flaky without instrumentation). Ivankovic et al. [6] reported that flakiness due to coverage instrumentation is a common reason for failed coverage computation, which is manifested by performance failure or increased flakiness of non-deterministic tests. Tengeri et al. [14] reported cases where coverage instrumentation changes the behaviour of tests and their results. Finally, Dietrich et al. [15] used instrumentation to intercept network errors to control flakiness caused by dependency on network connectivity.
## III Background
Instrumentation, the process of transparently adding functionality to a program, is often used in dynamic program
analyses. For instance, to capture the runtime properties of a program. Instrumentation in Java often uses bytecode manipulation, facilitated by the availability of bytecode engineering libraries like _asm_ and _javassist_. In addition, the Java Virtual Machine (JVM) supports instrumentation directly through agents, which can be deployed statically (via a JVM argument) or attached dynamically.
Instrumentation can cause interference in the behaviour of a program. By interference, we mean that execution is affected by the instrumentation of the program's code. Unavoidably, instrumentation does interfere with available resources as additional instructions use CPU/memory. Some cases of interference include: race conditions caused by timing issues introduced by instrumentation [6]; instrumentation interfering with shared resources; classpath issues where the agent uses a different version of classes that are also part of the classpath of the program and its tests1.
Footnote 1: In practice, those issues are often avoided by using dependency shading when building agents
Interference caused by instrumentation can change the outcome of tests, resulting in test flakiness. We use the following definition of test flakiness here: _tests having different results on multiple executions for the same version of program code; or transitions of test outcomes for the same test across runs_. Here is an example from MockServer2 that illustrates flakiness introduced by telemetry instrumentation. MockServer features functionality to mock HTTP/HTTPS, REST or RPC services for testing purposes. The test runs MockServer as a JUnit 5 extension, configured with the annotation @MockServerSettings to use port 8888 for the mocked service. The same port is used by the OpenTelemetry3 APM collector service. When the test is executed with OpenTelemetry instrumentation, it fails with an exception indicating that the port is already in use. A dependency on local network resources causes this flakiness.
Footnote 2: [https://www.mock-server.com/](https://www.mock-server.com/)
Footnote 3: [https://opentelemetry.io/](https://opentelemetry.io/)
## IV Study design
This section lists the set of programs used in our study, the types of instrumentation used in the experiments, and the evaluation metrics we use. Our experiment requires rerunning each program's tests 20 times with the baseline (without instrumentation) and then with each of the five instrumentation tools listed in Table II (six configurations in total).
### _Dataset_
We have used the set of programs from Cordeiro et al.'s study on manifesting flakiness by adding noise to the environment [16]. These are 11 GitHub projects written in Java that use Maven for build automation. For two of the programs (_ozone_ and _hbase_), running the experiments was not feasible due to their size, as their test suites take considerably longer than those of the other programs (\(>\)30 min per run). Thus, we have included only nine projects in our analysis (Table I).
### _Instrumentation_
While it is possible to craft an instrumentation that interferes with a particular test execution, we were interested in studying real-world instrumentation scenarios. To achieve this, we have identified several popular agents used for different purposes - coverage capture, monitoring and profiling.
The instrumentation tools used in the experiment are listed in Table II. This includes Elastic APM, an application performance monitoring system, OpenTelemetry, JaCoCo for Java code coverage, IntelliJ's code coverage, and Java Flight Recorder, which collects profiling data for Java applications.
### _Evaluation Metrics_
#### Iv-C1 Flaky test count
Flaky test count measures the number of tests over \(N\) runs that result in different states across runs for a given instrumentation configuration. If the set of outcomes (as in JUnit) is \(\{success,failure,error,skip\}\) and we consider a test run as a mapping from tests to states, then a test \(t\) is flaky across runs (or configurations) if for any two runs \(r_{1}\) and \(r_{2}\) there is a transition across test states, i.e. \(r_{1}(t)\neq r_{2}(t)\).
#### Iv-C2 Flakiness score
The flakiness score measures the variability between test runs. For instance, we may already observe flakiness if a test fails only in 1/20 runs. It is still interesting to see whether we can see this changing to a higher (or lower) value with instrumentation being used, such as 5/20.
For each configuration (baseline or one particular instrumentation) for a program in the dataset, we compute this as follows:
* \(FT\) is the set of tests that are flaky in all configurations. Note that this may not include tests that are flaky across configurations. Including those tests in the baseline would
be problematic as adding another configuration may affect the flakiness scores for existing configurations.
* For each run \(i\) compute a set, \(r_{i}\) consisting of pairs \((t,state)\in FT\times\{pass,fail,error,skip\}\)
* For each pair of runs, \(r_{i}\) and \(r_{j}\), compute the Jaccard distance, \(d(r_{i},r_{j})=1-\frac{|r_{i}\cap r_{j}|}{|r_{i}\cup r_{j}|}\), for \(N\) runs. This yields \(N(N-1)/2\) values (20 runs resulting in 190 values)
* The distances across runs can then be \(aggregated\) (e.g. mean or median) to obtain a flakiness score.
If the flakiness score is 0, there is no flakiness; it is greater than 0 if there is flakiness. For example, suppose the baseline for a program's test results has some flakiness (one test fails in a single run), but this same test fails a few more times with instrumentation. In this case, the flakiness score would go up even though the flaky count would remain the same.
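To make the metrics concrete, the following minimal Python sketch (illustrative only, not the tooling used in our experiments; test outcomes are represented as plain dictionaries) computes the flaky-test count and the Jaccard-based flakiness score for one configuration:

```python
from itertools import combinations
from statistics import mean

def flaky_tests(runs):
    """Tests whose outcome differs between at least two of the N runs."""
    tests = set().union(*(r.keys() for r in runs))
    return {t for t in tests if len({r.get(t) for r in runs}) > 1}

def flakiness_score(runs, flaky, aggregate=mean):
    """Aggregate pairwise Jaccard distance over (test, state) sets restricted to flaky tests."""
    outcome_sets = [{(t, r[t]) for t in flaky if t in r} for r in runs]
    distances = []
    for a, b in combinations(outcome_sets, 2):
        union = a | b
        distances.append(1 - len(a & b) / len(union) if union else 0.0)
    return aggregate(distances)

# Example: one test fails in a single run out of 20.
runs = [{"testA": "success", "testB": "success"} for _ in range(20)]
runs[3] = {"testA": "failure", "testB": "success"}
flaky = flaky_tests(runs)
print(len(flaky), flakiness_score(runs, flaky))  # 1 0.1
```

In this example, one test flips once in 20 runs, giving one flaky test and a mean flakiness score of 19/190 = 0.1 over the 190 run pairs.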
### _Experimental setup and process_
We run each program's tests 20 times in six different configurations (120 runs for each program), i.e., baseline and the listed five types of instrumentation. Experiments were run on a computer with a 3.2 GHz 6-Core Intel Core i7 CPU with Oracle's Java SE Development Kit 8u301. We made the data from the experiments available online [http://www.bitbucket.org/unshorn/inst-study](http://www.bitbucket.org/unshorn/inst-study).
## V Results and Discussion
### _Experimental Results_
Detailed results for the experiments are shown below. Table III shows the results for the experiments indicating flakiness counts. This includes the count of test outcomes (success, failure/error and flaky) and test runtimes (rt.) for each configuration (i.e., instrumentation). The numbers are averages for 20 runs. Table IV shows the flakiness scores discussed in Section IV-C2 (configurations with zero values are omitted for brevity).
In answering our research question of whether instrumentation changes the test outcome, we observe that, in general, there are no significant changes in test outcomes with or without the use of different instrumentation tools. However, there were cases in some of the programs we examined that showed variation in test outcomes. The two tests that are flaky across configurations, which are in the programs _flow_ and _ripme_, are due to external flakiness. The test FrontendToolsLocatorTest::toolLocated in _ripme_ fails due to an error in executing an external program, and BaraagRipperTest::testRip fails to access
Fig. 1: Variation of flakiness score for _CorfuDB_
network resources, which is incidental and not caused by instrumentation. The only confirmed case of flakiness introduced by instrumentation (OpenTelemetry) is the failure of tests in MockServerExtensionConstructorInjectionWithSettingsMultiplePortTest in _mockserver_ and FrontendToolsLocatorTest::toolLocated in _flow_.
On the related question of whether instrumentation increases test flakiness over multiple runs, again, instrumentation does not appear to have a discernible effect on the stability of flaky tests. However, this is difficult to measure for most programs in the dataset as the count of flaky tests, seen in Table III, is sparse for most configurations, with one flaky test per configuration being common. One program that does show noticeable results is _CorfuDB_. _CorfuDB_, as listed in Table IV, has relatively more flaky tests when compared to the other programs, varying from 10 to 75 flaky tests. As depicted in Figure 1 for _CorfuDB_, flakiness across runs varies for the different instrumentations.
### _Discussion of Findings_
Our experiment shows that while instrumentation may cause flakiness, this effect is rare. There are only a few cases that show an impact of instrumentation on the presence of flaky tests. There is no unified pattern that we could identify with regard to specific instrumentation tools or configurations that would introduce or increase flakiness across the programs we study.
A related question is whether instrumentation can interfere with existing flakiness detection and prevention techniques. Given the large number of such techniques [1, 2], this discussion is necessarily incomplete. We aim to investigate this question further in our future research.
For _detection techniques based on static analysis_ such as [13], interference is possible (but may still be unlikely) simply because the instrumentation is not part of the analyses, and this may lead to additional false positives and/or false negatives. This effect can be difficult to mitigate.
Dynamic prevention and detection techniques such as _RootFinder_[7] and _saflate_[15] (both use instrumentation) can be prone to instrumentation order dependencies. While it is possible to craft examples showing this, it is unlikely to occur in practice.
On the other hand, dynamic prevention and detection techniques that use a particular platform (such as _NonDex_[17] using a modified standard library or VVM [18]) may also be sensitive to instrumentation, as the instrumentation code itself is affected by those changes.
## VI Threats to Validity
Even though the results of our experiments indicate the effects of instrumentation on test flakiness, there are threats to generalising them. First, the set of programs may not be representative, which we address using a set of programs from a previous flakiness study. As flaky tests can be non-deterministic and caused by environmental factors, results may vary if the experiment is repeated. To account for such variance, we have executed the experiment 20 times and fixed the factors we control, such as the hardware, OS and JVM. Nonetheless, the results could change with more reruns of the experiment.
## VII Conclusion and Future Work
In this paper, we discuss the possible impact of instrumentation on the presence of flakiness in test suites. We hypothesised that instrumentation does have an impact on the presence or frequency of flaky tests. To investigate this, we conducted an experiment using five Java instrumentation tools representing three different instrumentation types (APM, coverage and profiling). Our results from studying nine open-source programs show that instrumentation has little to no effect on the presence or frequency of flaky tests (i.e., it does not increase flakiness). These are preliminary results, and a more comprehensive study is needed. In the future, we plan to extend this study to include a more significant number of instrumentation tools and to study programs with a larger number of reruns.
## Acknowledgment
This work is funded by Science for Technological Innovation (SfTI) National Science Challenge (NSC) of New Zealand, grant number MAUX2004. |
2309.02883 | All sky archival search for FRB high energy counterparts with Swift and
Fermi | Fast radio bursts (FRBs) are millisecond-duration radio signals from unknown
cosmic origin. Many models associate FRBs with high-energy astrophysical
objects such as magnetars. In this attempt to find counterparts to FRBs, we
explore gamma-ray bursts (GRBs) from the Swift and Fermi missions. We first
search for spatial correlations between FRB and GRB populations as a whole and
then search for a one-by-one correlation between each of the FRBs and GRBs
investigated. Temporal coincidences are not considered. To evaluate the
significance of any correlation found, we generate background realizations that
take into account instrumentally induced anisotropies in the distribution of
the sources. Neither study yields any significant counterpart detection. We
estimate that less than 4\% of the FRBs are associated with GRBs in the studied
samples | Halim Ashkar, Mehdi El Bouhaddouti, Stephen Fegan, Fabian Schüssler | 2023-09-06T10:19:47Z | http://arxiv.org/abs/2309.02883v1 | # All sky archival search for FRB high energy counterparts with Swift and Fermi
###### Abstract:
Fast radio bursts (FRBs) are millisecond-duration radio signals from unknown cosmic origin. Many models associate FRBs with high-energy astrophysical objects such as magnetars. In this attempt to find counterparts to FRBs, we explore gamma-ray bursts (GRBs) from the Swift and Fermi missions. We first search for spatial correlations between FRB and GRB populations as a whole and then search for a one-by-one correlation between each of the FRBs and GRBs investigated. Temporal coincidences are not considered. To evaluate the significance of any correlation found, we generate background realizations that take into account instrumentally induced anisotropies in the distribution of the sources. Neither study yields any significant counterpart detection. We estimate that less than 4% of the FRBs are associated with GRBs in the studied samples.
Introduction
Fast Radio Bursts (FRBs) are flashes of radio emission of astrophysical origin that last from a fraction of a millisecond to a few milliseconds. The Lorimer Burst [1] was the first FRB discovered using the archival data of the Parkes radio telescope. Since then, FRB detections have increased and reached more than 600 detections by July 2022. There are two types of FRBs: repeaters and non-repeaters. In 2020, for the first time, an FRB of galactic origin, FRB 20200428A, was detected [2, 3, 4], and it was subsequently seen to repeat [5, 6]. The FRB is associated with a soft gamma-ray repeater (SGR), SGR 1935+2154 [7], which is a magnetar. FRB 20200428A was detected during an active outburst phase of the magnetar that lasted several weeks. Coincident with the FRB, X-ray outbursts were detected by several instruments [8, 9, 10, 11]. A hint of a gamma-ray transient was discovered in the Swift satellite data contemporaneously with FRB 20131104 [12]. However, the 2020 outburst remains the only confirmed electromagnetic counterpart to an FRB until now (October 2022).
The short emission time and the high-temperature brightness of FRBs imply small emission regions and coherent processes [13]. Some of the most prominent sources linked to FRBs, and repeating FRBs in particular, are magnetars [14, 15]. For example, the interaction of a magnetar with a surrounding nebula could produce FRBs through synchrotron maser emission [16, 17, 18]. Some of these models suggest the FRB could be accompanied by a gamma-ray outburst. FRB 20200428A, in spite of having a low energy budget on the FRB scale, is the first observational evidence of this association. For this specific FRB, it has been suggested that the unusually hard spectrum implies a common origin for the radio and X-ray emission [9]. The hard X-ray spectrum points to a non-thermal nature, which can lead to the production of gamma rays [10]. Other prominent sources linked to FRBs are neutron star interactions, hyperflares and giant flares from magnetars [19, 20], binary white dwarf mergers [21], black hole interactions [22, 23] and neutron star interactions [24, 25, 26, 27, 28]. Whether the sources of FRBs are magnetars or compact object interactions, they are undoubtedly very energetic sources that would be capable of producing gamma rays. Magnetars for example are linked to short gamma-ray bursts [GRBs, 29] as well as mergers involving at least one neutron star [30, 31].
Several attempts have been made to search for FRB counterparts either by looking in archival data or by actively observing or following-up FRBs with no clear success until now. From recent archival searches no clear relations were found between IceCube neutrinos and CHIME FRBs [32] and no association between gravitational waves and FRBs was found [33, 34]. An optical counterpart was possibly found for FRB 20180916b [35]. On the gamma-ray transients side, a possible counterpart for FRB 20171209 was found [36] from a search including 110 FRB and 1440 GRBs with an afterglow detection. Moreover, a gamma-ray transient has been reported as a possible counterpart to FRB 20121104 [37]. A search including CHIME FRBs and GRBs (short and long) detected between July 2018 and July 2019 did not find any GRB counterparts to FRBs [38]. A similar result was found for Insight-HXMT gamma-ray transients and FRBs [39].
Our work aims to independently establish a spatial link between FRBs and gamma-ray transients, notably GRBs, by looking at archival data of the last two decades from active GRB observatories such as Swift and Fermi. Several models expect different timescales for FRB emission from GRB progenitors. For example, in some models of FRBs from young magnetars, the time difference between the initial cataclysmic event (GRB progenitor) and the FRB emission can be
years or even decades [40, 41]. On the other hand, in the case of FRB 20200428A the high energy emission arrived simultaneously with the FRB. In a merger-driven explosion scenario, it is possible that the FRB emission precedes the high energy emission from a long GRB [42]. For these reasons, we do not consider any temporal constraints on the emission in our study. Since both neutron star mergers and the core collapse of massive stars could lead to the production of a magnetar, we do not differentiate between short and long GRBs. Finally, given that a fraction of non-repeater FRBs might be undetected repeaters [43] and that the difference between repeaters and non-repeaters is not perfectly clear, we do not differentiate between these two types either.
## 2 Data and background
For this study, we consider all FRBs since the first detection until July 2022. This sample of FRBs is taken from the Transient Name Server (TNS)1. For GRBs, we consider those detected by the Burst Alert Telescope (BAT) on board the Swift observatory [44] for their precise localisation [45]. To avoid duplication, when the X-ray Telescope (XRT) position is available we only consider it. We also enlarge our sample by adding Fermi GRB detections. The Fermi GRBs are taken from the Gamma-ray Burst Monitor (GBM) [46] GRB catalog [47, 48, 49, 50]. Moreover, we include GRBs from the Large Area Telescope [LAT, 51, 52] on the Fermi gamma-ray space telescope. We remove all entries from the Fermi catalogs that are also detected by Swift. Fermi-GBM GRBs are poorly localized with localization regions that span several square degrees in the sky. Therefore, we consider Fermi-GBM GRBs with localization uncertainties smaller than 1 degree separately. These are mainly detected and localized by other instruments. The three datasets represent GRBs detected in different energy ranges with different instrumental detection biases. They are first treated separately to highlight these differences, then combined into one dataset including all selected GRBs.
Footnote 1: [https://www.wis-tns.org](https://www.wis-tns.org)
The data is compared to a background generated from the data itself. While the underlying distribution of GRBs can be approximated as isotropic, the distribution of detected positions can be affected by instrumental biases. Therefore, to simulate the background we use the positions of the sources in the dataset. The goal is to generate 1000 simulated GRB catalogues with positions that follow the distribution of the Swift-BAT GRBs in the sky. The separations squared between FRBs and GRBs are calculated for each simulated catalogue, binned and plotted in a histogram after averaging the bins. Equidistant elements from the Swift-BAT declination cosines are taken to generate a function describing their distribution. The function is then applied to a set of random numbers between 0 and 1 having the same length as the initial sample to generate the cosine of the simulated declinations. The right ascensions are generated by subdividing the declination cosine distribution into 20 intervals and applying the same method as above for each of the sub-intervals.
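As an illustration of this sampling procedure, the sketch below re-implements it with numpy (this is not the code used in the analysis; we interpret the "declination cosine" as cos(90° − dec) = sin(dec), which is uniform for an isotropic sky, and the band boundaries are taken as quantiles of that quantity):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_quantile(samples):
    """Map u in [0, 1] to the empirical quantile of `samples` by linear interpolation."""
    s = np.sort(np.asarray(samples, dtype=float))
    return lambda u: np.interp(u, np.linspace(0.0, 1.0, s.size), s)

def simulate_catalogue(ra_deg, dec_deg, n_bands=20):
    """Draw one synthetic catalogue matching the size and sky distribution of the input GRBs."""
    ra_deg, dec_deg = np.asarray(ra_deg, float), np.asarray(dec_deg, float)
    z = np.sin(np.radians(dec_deg))                       # uniform for an isotropic sky
    sim_z = empirical_quantile(z)(rng.random(z.size))
    sim_dec = np.degrees(np.arcsin(np.clip(sim_z, -1.0, 1.0)))
    # RA is quantile-sampled separately inside each band of z, preserving any RA-dec coupling
    edges = np.quantile(z, np.linspace(0.0, 1.0, n_bands + 1))
    data_band = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_bands - 1)
    sim_band = np.clip(np.searchsorted(edges, sim_z, side="right") - 1, 0, n_bands - 1)
    sim_ra = np.empty_like(sim_dec)
    for b in range(n_bands):
        sel = sim_band == b
        if not sel.any():
            continue
        pool = ra_deg[data_band == b] if (data_band == b).sum() > 1 else ra_deg
        sim_ra[sel] = empirical_quantile(pool)(rng.random(sel.sum()))
    return sim_ra, sim_dec

# realisations = [simulate_catalogue(bat_ra, bat_dec) for _ in range(1000)]
```

The commented last line corresponds to the 1000 background realisations used in the comparison.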
## 3 Searching for gamma-ray counterparts for FRB and GRB populations
Populations of FRBs and gamma-ray transients can be studied to test spatial correlations between them [32]. The idea here is to calculate separations squared (sep\({}^{2}\)) between FRBs and other transients, plot the histograms of separations squared and compare them to a background. If
there is any spatial correlation between FRBs and other transients populations, we would expect to see a significant excess in the first few histogram bins.
The background is generated as explained in Sec. 2. The separation squared between FRBs and GRBs is calculated. To determine the bin size, the average positional uncertainty is considered for FRBs and GRBs. The average error squared is: \(\langle\delta^{2}\rangle=\langle\delta^{2}_{\rm FRB}\rangle+\langle\delta^{2}_{ \rm GRB}\rangle\)
We generate 1000 realizations. The separations squared are binned and the average of the bins is calculated. The resulting histograms are shown in Fig. 1. The excess and the significance of the excess in each bin are:
\[{\rm Excess}={\rm N_{On}}-\alpha{\rm N_{Off}}\quad{\rm and}\quad\sigma=\frac{{ \rm N_{On}}-\alpha{\rm N_{Off}}}{\sqrt{{\rm N_{On}}+\alpha^{2}{\rm N_{Off}}}} \tag{1}\]
where \({\rm N_{On}}\) is the number of entries for each signal bin, \({\rm N_{Off}}\) is the sum of entries of the background bins and \(\alpha\) is 1/1000.
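A direct transcription of this per-bin computation might look as follows (an illustrative numpy sketch, not the analysis code; the bin width and the summed background counts are supplied explicitly):

```python
import numpy as np

def sep2_histogram(sep2_deg2, bin_width, n_bins):
    """Bin the FRB-GRB separations squared into fixed-width bins."""
    counts, _ = np.histogram(sep2_deg2, bins=np.arange(n_bins + 1) * bin_width)
    return counts

def excess_and_significance(n_on, n_off_total, alpha=1.0 / 1000):
    """Eq. (1): n_on are the signal counts per bin, n_off_total the summed background counts."""
    n_on = np.asarray(n_on, dtype=float)
    n_off = np.asarray(n_off_total, dtype=float)
    excess = n_on - alpha * n_off
    sigma = excess / np.sqrt(n_on + alpha**2 * n_off)  # bins empty in both should be masked
    return excess, sigma
```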
This method is used to establish correlations between FRBs and Swift-BAT, Fermi-GBM (\(<\) 1 deg uncertainties) and Fermi-LAT GRBs. The study is also repeated combining all three GRB catalogues. The bin size depends on the localization uncertainties. We find that for the comparison with Swift-BAT and Fermi-GBM, a bin size of 0.5 deg\({}^{2}\) is suitable, allowing any excess to appear in the first bins if any significant correlation exists. Due to a small number of GRBs detected by Fermi-LAT, the bin size is slightly larger in order to increase the statistics in each bin. The simulated background follows the signal distribution which is a good indication of the reliability of the background generation method. No significant excess can be extracted from the first few bins.
Figure 1: sep\({}^{2}\) between FRBs and Swift-BAT (upper left), Fermi-GBM (\(<\) 1 deg uncertainty, upper right), Fermi-LAT (lower left), and the combination of all three GRB catalogs (lower right) distribution. The blue dots show the number of matches in each bin and their uncertainty (signal) and the red dots show the same for the averaged generated background (background). The bottom plots show the significance of the excess computed for each bin.
## 4 Searching for gamma-ray counterparts for FRBs case by case
As a second attempt to establish a link between FRBs and GRBs we look at spatial coincidences between all well-localized GRBs and FRBs. Following [36] and [35], for each FRB, a 10-degree radius region around it is considered. The effective expected number of GRBs inside each of these areas is: \(\lambda=\rho_{\rm i}\)S, where \(\rho_{\rm i}\) is the effective density of GRBs in the region around each of the FRBs and \(S\) is the surface of the region. Taking into consideration the whole sky: S = \(41252.96(1-\cos({\rm D}+\delta_{\rm FRB}+\delta_{\rm GRB}))\) deg\({}^{2}\) where D is the angle between the FRB and the GRB, \(\delta_{\rm FRB}=\sqrt{\delta_{\rm RA,FRB}\delta_{\rm Dec,FRB}}\) is the error radius of the FRB and \(\delta_{\rm GRB}\) is the error radius of the GRB. The GRBs inside the 10-degree radius region follow a Poisson distribution. The chance probability of finding one or more GRBs in the FRB test region is P\({}_{1,{\rm i}}\) and the post-trial chance probability P are:
\[{\rm P}_{1,{\rm i}}=1-\exp(-\lambda)\quad\mbox{and}\quad{\rm P}=1-\prod_{{\rm i }=1}^{\rm N}(1-{\rm P}_{1,{\rm i}}) \tag{2}\]
where N = 627 is the total number of FRBs in our study. P\({}_{1,{\rm i}}\) and P values are computed for all the matches between FRBs and GRBs.
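The chance probabilities of Eq. (2) can be evaluated in a few lines; the sketch below (illustrative only, with the effective local GRB density \(\rho_{\rm i}\) supplied by the caller) follows the definition of S given above:

```python
import numpy as np

FULL_SKY_DEG2 = 41252.96

def p1(rho_eff_deg2, D_deg, err_frb_deg, err_grb_deg):
    """Chance probability of finding one or more GRBs for a single FRB match."""
    S = FULL_SKY_DEG2 * (1.0 - np.cos(np.radians(D_deg + err_frb_deg + err_grb_deg)))
    lam = rho_eff_deg2 * S
    return 1.0 - np.exp(-lam)

def post_trial(p1_values):
    """Post-trial probability over the N = 627 FRBs considered."""
    return 1.0 - np.prod(1.0 - np.asarray(p1_values, dtype=float))
```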
We search for coincidences between FRBs and Swift-BAT, Fermi-GBM (\(<\) 1 deg uncertainties) and Fermi-LAT GRBs making sure to remove all duplicate detections. We also combine all three catalogs together. The most significant results are shown in Tab. 1. We find several matches with small separations and small chance probabilities \(P_{1}\). One of them is the pair FRB 20171209A - GRB 110715A found in [36]. The fact that we used a bigger sample and a slightly different approach yields slightly different chance probabilities.
## 5 Discussion and conclusions
For the population study, where we considered the FRB and GRB populations as a whole, the histograms show that the signal in the first few bins does not exceed the background by a significant amount that would allow us to claim an association between the two populations. In order to assess the sensitivity of the study in Sec. 3, we inject fake GRBs that are spatially associated with FRBs. For that, we consider the position of a random FRB with its localization uncertainty. We generate a new position based on this information and assign it to a random GRB in the GRB catalog. The new
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Catalogues & FRB & GRB & Separation & \(P_{1}\) & P \\ \hline FRB - BAT & FRB 20190304C & GRB 201128A & 0.057 & 0.028 & 1 \\ FRB - BAT & FRB 20171209A & GRB 110715A & 0.084 & 0.005 & 1 \\ FRB - GBM & FRB 20181218A & GRB 20306761 & 0.152 & 0.1734 & 1 \\ FRB - GBM & FRB 20181018C & GRB 210308276 & 0.429 & 0.0132 & 1 \\ FRB - LAT & FRB 20190612A & GRB 151006413 & 0.237 & 0.041 & 1 \\ FRB - LAT & FRB 20190201A & GRB 211023546 & 0.805 & 0.033 & 1 \\ FRB - ALL & FRB 20190304C & GRB 201128A & 0.057 & 0.044 & 1 \\ FRB - ALL & FRB 20171209A & GRB 110715A & 0.084 & 0.007 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Smallest separations and chance probabilities obtained for FRB and GRB pairs.
GRB position is also generated taking into consideration the localization uncertainties of the GRB instruments. The number of fake GRB injections is increased progressively and the analysis in Sec. 3 is repeated each time. The entire study is then repeated 10 times. Fig.2 shows the sensitivity curves for each of the GRB catalogs, apart and combined. The x-axis shows the percentage of GRBs needed to be associated with FRBs to claim a significant population association and the y-axis shows the confidence level. From these plots, we conclude that less than 3%, 5%, 48% and 3% of the FRBs are associated with Swift-BAT, Fermi-GBM, Fermi-LAT, and all GRBs combined respectively with a 95% confidence level. It is also worth mentioning that we applied this study to the Fermi-GBM GRBs that have localization uncertainties larger than 1 degree. The average uncertainty for these GRBs is larger than 6 degrees. We also applied this study to the Fermi-GBM trigger catalog that consists of gamma-ray transients that are not classified as GRBs. Neither study yielded any significant detection in its current form. Large uncertainties in these catalogs significantly decrease the sensitivity of the study and prevent us from claiming any detection. This discourages us from investigating further catalogs with large uncertainties, such as the BATSE GRB catalogue [53], with the current methods.
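A schematic version of the injection step is given below (an assumed re-implementation for illustration; the exact way the injected positions are scattered by the localisation uncertainties may differ in detail from what was done here):

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_fake_grbs(grb_ra, grb_dec, frb_ra, frb_dec, frb_err, grb_err, n_inject):
    """Replace n_inject GRB positions with positions drawn around randomly chosen FRBs."""
    ra, dec = np.array(grb_ra, float), np.array(grb_dec, float)
    grb_idx = rng.choice(ra.size, size=n_inject, replace=False)
    frb_idx = rng.integers(0, len(frb_ra), size=n_inject)
    scatter = np.hypot(np.asarray(frb_err)[frb_idx], np.asarray(grb_err)[grb_idx])
    new_dec = np.asarray(frb_dec)[frb_idx] + rng.normal(0.0, scatter)
    new_ra = np.asarray(frb_ra)[frb_idx] + rng.normal(0.0, scatter) / np.cos(np.radians(new_dec))
    ra[grb_idx], dec[grb_idx] = new_ra, new_dec
    return ra, dec
```

The modified catalogue is then passed through the same sep\({}^{2}\) analysis as the real data, for an increasing injected fraction.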
In the case-by-case search in Sec. 4, several matches between FRBs and GRBs with small chance probabilities were found. However, taking into account the number of trials increases these chance probabilities to 1 for nearly all matches. We therefore conclude that no significant association can be claimed for any of the FRB - GRB pairs. The sensitivity of radio telescopes like CHIME to increasing declination is a caveat that should be taken into consideration. The localization precision of CHIME seems to deteriorate with increasing altitudes. However, the inclusion of this caveat is
Figure 2: Sensitivity curves showing the significance vs the percentage of injected fake GRBs with FRB positions for the Swift-BAT (upper left), Fermi-GBM (\(<\)1 deg uncertainty, upper right), Fermi-LAT (lower left) and the combination of all three GRB catalogues (lower right). The black curves show the result of individual sensitivity studies repeated 10 times for each case. The red curves are the average of the black curves for each case.
beyond the scope of this study.
Finally, considering a scenario where the GRB is generated in the initial cataclysmic event that created the magnetar and the FRB later on by the interactions of the magnetar and the surrounding nebula, it is possible that the delay between the GRB and the FRB might be longer than a few years. In the case of the repeater FRB 20121102A, the system is believed to be between 20 and 50 years old, while observations of FRB 20180916B suggest a system between 200 and 500 years old. The contemporaneity of the GRB and the FRB data used here might be an explanation for the lack of significant association. In that case, the use of older GRB data might be beneficial to test this scenario.
|
2303.06466 | Bound on the distance between controlled quantum state and target state
under decoherence | To implement quantum information technologies, carefully designed control for
preparing a desired state plays a key role. However, in realistic situation,
the actual performance of those methodologies is severely limited by
decoherence. Therefore, it is important to evaluate how close we can steer the
controlled state to a desired target state under decoherence. In this paper, we
provide an upper bound of the distance between the two controlled quantum
systems in the presence and absence of decoherence. The bound quantifies the
degree of achievement of the control for a given target state under
decoherence, and can be straightforwardly calculated without solving any
equation. Moreover, the upper bound is applied to derive a theoretical limit of
the probability for obtaining the target state under decoherence. | Kohei Kobayashi | 2023-03-11T17:31:45Z | http://arxiv.org/abs/2303.06466v3 | # General bound for any quantum control method under decoherence
###### Abstract
To realize quantum information technologies, quantum control techniques for preparing a desired state play a key role. However, in realistic situations, the actual performance of those methodologies is severely limited by decoherence. Therefore, the following question arises: how close can we steer the controlled state under decoherence to the ideal target state? To evaluate this problem, we provide an upper bound on the distance between the two controlled open quantum systems in the presence and absence of decoherence. The bound is straightforward to calculate and can be applied to several types of control methods, as demonstrated via numerical simulation.
## I Introduction
Control of quantum dynamics is of great importance for preparing a desired target state in quantum science and technology. The control of closed quantum systems has been well formulated; for instance, the open-loop control (OLC) offers powerful means for efficiently implementing a target gate [1; 2; 3; 4]. Efforts to extend these studies to open quantum systems, where the system interacts with an environment, are underway. The measurement-based feedback control (MFC) [5; 6; 7; 8] and reservoir engineering such as coherent feedback control (CFC) [9; 10; 11; 12; 13] have also been well established with the aim of deterministic preparation of a desired quantum state. In particular, CFC has attracted much attention as a powerful control method surpassing OLC or MFC; because a CF loop does not involve any classical component, it does not suffer from drawbacks such as signal loss or time delays, unlike the other methods. In recent years, notable experimental progress has been reported in the research fields of MFC [14; 15; 16] and CFC [17; 18; 19].
However, the biggest obstacle for control of open quantum systems is of course decoherence, i.e., the loss of quantum information of a system caused by the interaction with its environment or heat bath [20]. In the presence of decoherence, the actual control performance is sometimes far from the ideal one. Therefore, given a control system and decoherence, it is important to clarify and characterize how close one can steer the quantum state to the target state. This problem is known as _reachability analysis_.
For closed quantum systems, detailed reachability analyses for various cases have been carried out [21; 22; 23]. In recent years, several approaches for evaluating the reachable set of open quantum systems have been extensively developed [24; 25; 26; 27; 28; 29; 30]. However, these are restricted to specific cases, and there has been no general and rigorous approach for reachability analysis. A few exceptions are found for several types of control and decoherence [28], but their actual effectiveness is weak and impractical.
Motivated by this background, in this paper we present a bound for the reachability of a general Markovian open quantum system driven by the decoherence process and several types of control. More specifically, we give an upper bound on the norm-distance between the ideal final state and the state obtained under decoherence. The derived bound can be straightforwardly computed using only information on the decoherence and the control time. Also, it in particular gives a tight estimate on the realistic performance of MFC, as demonstrated in several examples.
## II Main result
### Dynamics of open quantum system under control and decoherence
First, let us explain a control setting of OLC and reservoir engineering. In the ideal case without decoherence, the quantum state \(\rho(t)\) evolves from the initial state \(\rho(0)=\rho_{0}\) obeying the Markovian master equation:
\[\frac{d\rho(t)}{dt}=-i[u(t)H,\rho(t)]+\mathcal{D}[L]\rho(t), \tag{1}\]
where \(\mathcal{D}[A]\rho=A\rho A^{\dagger}-A^{\dagger}A\rho/2-\rho A^{\dagger}A/2\) is the Lindblad super operator. \(H\) is a system Hamiltonian and \(u(t)\) is the time-dependent control sequence. \(L\) is the controllable coupling operator representing the interaction between the system and the environment. The standard reservoir engineering problem aims to design the control system \((u,H,L)\) appropriately, so that \(\rho(t)\) autonomously converges to a target state. In this paper, we assume that \(\rho(t)\) reaches the target state at the final time \(t=T\).
However, in practical situation, there is decoherence affecting the control, and thus the controlled state \(\sigma(t)\) obeys the following:
\[\frac{d\sigma(t)}{dt}=-i[u(t)H,\sigma(t)]+\mathcal{D}[L]\sigma(t)+\mathcal{D}[ M]\sigma(t), \tag{2}\]
where \(M\) represents the uncontrollable operator corresponding to decoherence and determines the dissipative component of the system's evolution. Therefore, there is a gap between
\(\rho(t)\) and \(\sigma(t)\) for \(t\in(0,T]\). Here we define this gap as the Frobenius norm:
\[D(t)=\|\rho(t)-\sigma(t)\|_{\mathrm{F}},\quad(0\leq D(t)\leq\sqrt{2}), \tag{3}\]
where \(\|A\|_{\mathrm{F}}:=\sqrt{\mathrm{Tr}[A^{\dagger}A]}\). In general, it is impossible to achieve \(D(t)=0\) at \(t=T>0\) under \(M\). If \(\rho(t)\) is orthogonal to \(\sigma(t)\), \(D(t)\) takes the maximum value \(\sqrt{2}\). Hence, \(D(t)\) represents the cost function between the controlled two states in the absence and presence of decoherence. This is why we aim to explore an upper bound of \(D(T)\).
As another class of control method, we explain the setting of the standard MFC. The quantum state \(\rho_{c}(t)\) conditioned on the measurement record \(y(t)\) obeys the stochastic master equation:
\[d\rho_{c}(t) =-i[u(t)H,\rho_{c}(t)]dt+\mathcal{D}[L]\rho_{c}(t)dt\] \[\quad+\mathcal{H}[L]\rho_{c}(t)dW(t), \tag{4}\]
where \(dW(t)\) is the Wiener process representing the innovation process based on \(y(t)\). \(\mathcal{H}[A]\rho=A\rho+\rho A^{\dagger}-\mathrm{Tr}[(A+A^{\dagger})\rho]\rho\) represents the change of the quantum state due to the acquisition of information during the measurement process. The goal of MFC is to design the control input \(u\) as a function of \(\rho_{c}(t)\) to accomplish a certain goal. In particular, when \(L=L^{\dagger}\), we can steer the state to an arbitrary eigenstate of \(L\). The unconditional state \(\mathbb{E}[\rho_{c}(t)]=\rho(t)\), which is the ensemble average over all measurement results, obeys the master equation
\[\frac{d\rho(t)}{dt}=-i[H,\mathbb{E}[u(t)\rho_{c}(t)]]+\mathcal{D}[L]\mathbb{E }[\rho_{c}(t)], \tag{5}\]
due to \(\mathbb{E}[dW(t)]=0\). On the other hand, the time evolution of a state \(\sigma_{c}(t)\) subjected to \(M\) is given by
\[d\sigma_{c}(t) =-i[u(t)H,\sigma_{c}(t)]dt+\mathcal{D}[L]\sigma_{c}(t)dt\] \[\quad+\mathcal{D}[M]\sigma_{c}(t)dt+\mathcal{H}[L]\sigma_{c}(t)dW(t). \tag{6}\]
In this paper, our interest is the average distance
\[\mathbb{E}[D_{c}(t)]=\mathbb{E}[\|\rho_{c}(t)-\sigma_{c}(t)\|_{\mathrm{F}}]. \tag{7}\]
The next subsection will present an upper bound applicable to both \(D(t)\) and \(\mathbb{E}[D_{c}(t)]\).
### Main result
The main contribution of this paper is to present an upper bound of the cost \(D(t)\) and \(\mathbb{E}[D_{c}(t)]\) in an explicit form:
_Theorem 1._ The distance between \(\rho(t)\) and \(\sigma(t)\) (3) has the following upper bound
\[D(T)\leq\delta:=\sqrt{2(1-e^{-\alpha T})}, \tag{8}\]
where \(\alpha=(1/T)\int_{0}^{T}\|M(t)\|_{\mathrm{F}}^{2}dt\). In addition, \(\mathbb{E}[D_{c}(t)]\leq\delta\) also holds (the proof is given in Appendix A).
\(\delta\) gives a control limit on how close the controlled state can be steered to a target state or preserved at around an initial state under decoherence. Here we list some notable properties of \(\delta\):
(i) \(\delta\) is generally applicable to several types of control techniques, e.g., OLC, CFC, and MFC, as long as the dynamics of the quantum system can be described by the master equation or stochastic master equation. Moreover, when the system is subjected to multiple decoherence channels, \(\alpha\) can be straightforwardly extended as follows:
\[\alpha=(1/T)\int_{0}^{T}\sum_{j}\|M_{j}(t)\|_{\mathrm{F}}^{2}dt. \tag{9}\]
(ii) \(\delta\) can be calculated using only the information about \(M\) and \(T\). Thus, it does not require solving any dynamical equations (a short numerical sketch illustrating this is given after this list).
(iii) \(\delta\) is monotonically increasing with respect to \(\alpha\), meaning that, as the decoherence or the control time becomes larger, the bound on the distance between \(\rho(t)\) and \(\sigma(t)\) becomes larger. For instance, in a quantum annealing protocol, the computational time \(T\) is given by a polynomial of the system size \(N\) [31; 32]:
\[T=\mathcal{O}(N^{a}). \tag{10}\]
Thus, in order to avoid the worst-case \(D(T)=\sqrt{2}\), we must suppress \(\alpha\) as small as \(\alpha=\mathcal{O}(N^{-a})\).
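As a numerical illustration of property (ii), \(\delta\) can be evaluated directly from the decoherence operators and the control time. The following minimal sketch (written for illustration and assuming time-independent \(M_{j}\), so that the time average in \(\alpha\) reduces to a sum of squared Frobenius norms) is given for reference:

```python
import numpy as np

def delta_bound(M_list, T):
    """Theorem 1 with the multi-channel alpha of Eq. (9), for constant decoherence operators."""
    alpha = sum(np.linalg.norm(M, "fro") ** 2 for M in M_list)
    return np.sqrt(2.0 * (1.0 - np.exp(-alpha * T)))

# Example: single-qubit spontaneous emission, M = sqrt(gamma) * S_-
gamma, Omega = 0.1, 1.0
S_minus = np.array([[0.0, 0.0], [1.0, 0.0]])
print(delta_bound([np.sqrt(gamma) * S_minus], T=np.pi / (2 * Omega)))
```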
In what follows, we examine the effectiveness of \(\delta\) via some examples.
## III Hamiltonian control
### Qubit
We begin with a simple example, a single qubit system such as a two-level atom consisting of the excited state \(|e\rangle=[1,0]^{\top}\) and the ground state \(|g\rangle=[0,1]^{\top}\), to evaluate how effectively the derived bound works.
We consider the system operators
\[H=\Omega S_{y},\ \ M=\sqrt{\gamma}S_{-}, \tag{11}\]
where \(S_{x}=|e\rangle\langle g|+|g\rangle\langle e|\), \(S_{y}=i(|g\rangle\langle e|-|e\rangle\langle g|)\), \(S_{z}=|e\rangle\langle e|-|g\rangle\langle g|\), and \(S_{-}=|g\rangle\langle e|\). \(H\) rotates the state vector along the \(y\) axis with frequency \(\Omega>0\). \(M\) represents the decoherence process with \(\gamma>0\), which can be interpreted as a spontaneous emission process for a two-level atom. We here aim to make the qubit state excited \(\rho(T)=|e\rangle\langle e|\) from the ground state \(\rho_{0}=|g\rangle\langle g|\). Ideally, i.e., \(\gamma=0\), solving the equation \(d\rho(t)/dt=-i\Omega[S_{y},\rho(t)]\) yields the exact control time \(T=\pi/(2\Omega)\). Then, we have
\[\delta=\sqrt{2\left(1-\exp\left(\frac{\pi\gamma}{2\Omega}\right)\right)}. \tag{12}\]
Figure 1(a) shows \(\delta\) and simulated values \(D(T)\) for several \(\gamma\), in unit of \(\Omega=1\). The gap between \(\delta\) and \(D(T)\)
gradually becomes small as \(\gamma\) increases, because \(\delta\) is convex upward. From this result, the performance of \(\delta\) tends to be weak when the decoherence is small. However, even if \(\gamma\) is small, if the control time \(T\) is large, then \(\delta\) works as a tight bound, as will be seen later.
Moreover, it is worth comparing \(\delta\) to the analytical value of \(D(T)\). For this purpose, we set \(\Omega=0\) and choose the initial state as \(\rho_{0}=|e\rangle\langle e|\). In this case, because \(d\rho(t)/dt=0\), \(\rho_{0}\) is preserved. While, by solving the master equation \(d\sigma(t)/dt=\mathcal{D}[\sqrt{\gamma}S_{-}]\sigma(t)\), the \(x\) and \(z\) component of the Bloch coordinates
\[\sigma(t)=\frac{1}{2}\left(I+x(t)S_{x}+y(t)S_{y}+z(t)S_{z}\right), \tag{13}\]
are given by \(x(t)=0\) and \(z(t)=2e^{-\gamma T}-1\). Using these, \(D(T)\) is calculated as
\[D(T) =\sqrt{\text{Tr}[(\rho_{0}-\sigma(T))^{2}]}\] \[=\sqrt{1-2\text{Tr}[\rho_{0}\sigma(T)]+\text{Tr}[\sigma^{2}(T)]}\] \[=\sqrt{2(1-e^{-\gamma T})}. \tag{14}\]
Notably, this \(D(T)\) is identical to \(\delta\), and hence \(\delta\) is the achievable bound. However, setting the operators \(H=0\) and \(M=\sqrt{\gamma}S_{z}\) (corresponding to the phase damping channel), and choose the initial state as the superposition \(\rho_{0}=|+\rangle\langle+|\) where \(|+\rangle:=(|e\rangle+|g\rangle)/\sqrt{2}\), \(\delta\) and \(D(T)\) are calculated as follows:
\[\delta=\sqrt{2(1-e^{-2\gamma T})},\ \ D(T)=\frac{1-e^{-2\gamma T}}{2}. \tag{15}\]
Hence, \(\delta=D(T)\) does not hold for every type of decoherence. Establishing the equality condition of our bound is thus one of our future works.
### Two-qubits
Here we extend the above example to a two-qubit system.
\[H=\Omega S_{y}\otimes S_{y},\ \ M=\frac{\sqrt{\gamma}}{2}(S_{-}\otimes I+I \otimes S_{-}), \tag{16}\]
and \(\rho_{0}=|g,g\rangle\langle g,g|\), where \(|a,a\rangle=|a\rangle\otimes|a\rangle\). \(M\) is the global decay process which acts on the two atoms spontaneously. Also, let the final state be \(\rho(T)=|e,e\rangle\langle e,e|\) with \(T\) given by \(T=\pi/(2\Omega)\). In this setup, the upper bound has the same expression \(\delta=\sqrt{2(1-e^{-\frac{\pi\gamma}{2\Omega}})}\) as Eq. (12). As shown in Fig. 1(b), \(\delta\) is slightly looser relative to \(D(T)\) than in the qubit case. From this result, it may seem that \(\delta\) is more effective for small systems.
## IV Coherent Feedback
As a notable example of reservoir engineering, we in particular take the CFC proposed in [13]. The quantum system is controlled by the two different open control systems \(G_{1}=(H_{1},L_{1})\) and \(G_{2}=(H_{2},L_{2})\), which are connected through a single probe field. Under the assumption that the propagation time from \(G_{1}\) to \(G_{2}\) is negligible, the total control system \(G=(H,L)\) is characterized as follows [13]:
\[G(H,L)=\left(H_{1}+H_{2}+\frac{1}{2i}(L_{2}^{\dagger}L_{1}-L_{1}^{\dagger}L_{ 2}),\ L_{1}+L_{2}\right). \tag{17}\]
\(H_{1}\), \(H_{2}\), \(L_{1}\), and \(L_{2}\) are operators living in the same Hilbert space. Here \(L_{1}\) is Hermitian, \(L_{1}=L_{1}^{\dagger}\). This coupling induces a dispersive state change and set a superposition of the eigenstates of \(L_{1}\) to be the target state. The output field generated by the first coupling \(L_{1}\) is fed back to the quantum system through the second coupling \(L_{2}\). Then, by choosing \(L_{2}\) as a dissipative coupling operator, the master equation (1) with \(G(L,H)\) deterministically produces the target state.
We consider the case where the qubit is deterministically stabilized at any state on the \(xz\)-plane of the Bloch sphere. Here let us take \(L_{1}=\sqrt{\kappa_{1}}S_{z}\) and \(L_{2}=\sqrt{\kappa_{2}}S_{-}\). Note that \(\sqrt{\kappa_{2}}S_{-}\) is the introduced decay process that autonomously compensates the dispersive effect caused by \(\sqrt{\kappa_{1}}S_{z}\). Also we set \(H_{1}=H_{2}=0\). Then, from (17), the whole control sytem is given by
\[H =\frac{\sqrt{\kappa_{1}\kappa_{2}}}{2i}(S_{-}^{\dagger}S_{z}-S_{ z}S_{-})=-\frac{\sqrt{\kappa_{1}\kappa_{2}}}{2}S_{y},\] \[L =\sqrt{\kappa_{1}}S_{z}+\sqrt{\kappa_{2}}S_{-}=\begin{bmatrix} \sqrt{\kappa_{1}}&0\\ \sqrt{\kappa_{2}}&\sqrt{\kappa_{1}}\end{bmatrix}. \tag{18}\]
The state of the CF controlled system is obtained by solving the corresponding master equation (1). Then, we find that any initial state converges to the steady state \(\rho(\infty)=|\psi(\infty)\rangle\langle\psi(\infty)|\) with
\[|\psi(\infty)\rangle=\begin{bmatrix}2\sqrt{\kappa_{1}}\\ \sqrt{\kappa_{2}}\end{bmatrix}. \tag{19}\]
Note that an arbitrary pure state (except for \(|e\rangle\)) can be prepared, up to normalization of (19), by suitably choosing the control parameters \(\kappa_{1}\) and \(\kappa_{2}\). \(|e\rangle\) can be approximately generated by setting \(\kappa_{2}\ll\kappa_{1}\) (if \(\kappa_{2}=0\), the master equation does not have a unique steady solution).
Figure 1: Plots of \(\delta\) (solid red line) and \(D(T)\) (blue dot) for (a) single qubit and (b) two qubits, in unit of \(\Omega=1\).
Here let us consider the stabilization of the qubit at the superposition \(|\psi(\infty)\rangle=|+\rangle\), which can be realized by choosing \(\kappa_{2}=4\kappa_{1}\) [Fig. 2(a)]. In fact, the fidelity \(\langle\psi(\infty)|\rho(t)|\psi(\infty)\rangle\) reaches \(0.99\) at \(t=2\). Thus, we regard this time as the final time \(T\). Figure 2 (b) shows \(D(T)\) under the undesirable dissipative coupling \(M=\sqrt{\gamma}S_{-}\) with \(|\psi_{0}\rangle=|e\rangle\) and \(T=2\). Remarkably, \(\delta\) is looser here than in the previous results. This means that the CFC is robust against the dissipation in this case, as shown in [13].
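For reference, the composition rule (17) and the stationarity of the target state can be checked numerically. The sketch below (written only for this illustration, with \(\kappa_{2}=4\kappa_{1}\) so that the steady state is proportional to \(|+\rangle\)) verifies that the Lindblad generator annihilates \(|\psi(\infty)\rangle\langle\psi(\infty)|\):

```python
import numpy as np

Sz = np.diag([1.0, -1.0]).astype(complex)
Sm = np.array([[0, 0], [1, 0]], dtype=complex)
k1, k2 = 1.0, 4.0

def series_connection(H1, L1, H2, L2):
    """Eq. (17): composite (H, L) of two systems connected through one probe field."""
    H = H1 + H2 + (L2.conj().T @ L1 - L1.conj().T @ L2) / 2j
    return H, L1 + L2

H, L = series_connection(np.zeros((2, 2)), np.sqrt(k1) * Sz,
                         np.zeros((2, 2)), np.sqrt(k2) * Sm)

def lindblad_rhs(rho):
    d = -1j * (H @ rho - rho @ H)
    d += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return d

psi = np.array([2 * np.sqrt(k1), np.sqrt(k2)], dtype=complex)
psi /= np.linalg.norm(psi)                       # proportional to |+> when k2 = 4 k1
rho_ss = np.outer(psi, psi.conj())
print(np.linalg.norm(lindblad_rhs(rho_ss)))      # ~0: stationary state of the CF loop
```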
## V Measurement-based feedback control
Finally, we study the MFC for stabilizing a qubit state at the excited state, which is characterized by the following system operators:
\[H=S_{y},\ \ L=\sqrt{\kappa}S_{z},\ \ M=\sqrt{\gamma}S_{-}. \tag{20}\]
This is a typical setup of MFC [5; 6; 7; 8]. \(L\) represents the dispersive coupling between the qubit and the probe, which enables us to continuously monitor the qubit state by measuring the output field and perform a feedback by the control Hamiltonian \(u(t)H\). We now employ the feedback method proposed in [8] and demonstrate its effectiveness [Fig. 3 (a)]. When \(\gamma=0\), this strategy realizes deterministic convergence \(z_{c}(t)=\mathrm{Tr}[S_{z}\rho_{c}(t)]\rightarrow+1\).
Figure 3 (b) shows the difference between \(\delta\) and \(\mathbb{E}[D_{c}(T)]\) for the case \(\kappa=1\) and \(T=3\) as a function of \(\gamma\). Remarkably, we observe that, compared with the CF case, the gap between \(\delta\) and \(\mathbb{E}[D_{c}(T)]\) is drastically smaller, even when \(\gamma\) is small. Therefore, we can say that \(\delta\) is relatively effective for classical control methods, such as Hamiltonian engineering or MFC. Actually, it has been shown that CFC is more robust against decoherence than MFC [13]; hence our intuition is reasonable.
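To illustrate how \(\mathbb{E}[D_{c}(T)]\) can be estimated, a simplified simulation is sketched below. It uses an Euler-Maruyama integration of Eqs. (4) and (6) with a shared innovation \(dW\) and a simple proportional feedback \(u(t)\propto\mathrm{Tr}[S_{x}\rho_{c}(t)]\) as a placeholder; the actual feedback law of [8] used for Fig. 3 is not reproduced here, and the initial state \(|+\rangle\) is chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]])
Sz = np.diag([1.0, -1.0]).astype(complex)
Sm = np.array([[0, 0], [1, 0]], dtype=complex)
kappa, gamma, T, dt, c_fb = 1.0, 0.2, 3.0, 1e-3, 2.0
H, L, M = Sy, np.sqrt(kappa) * Sz, np.sqrt(gamma) * Sm

def D_super(A, r):   # Lindblad dissipator
    return A @ r @ A.conj().T - 0.5 * (A.conj().T @ A @ r + r @ A.conj().T @ A)

def H_super(A, r):   # measurement back-action superoperator
    return A @ r + r @ A.conj().T - np.trace((A + A.conj().T) @ r) * r

def trajectory(steps=int(T / dt)):
    rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+| (illustrative choice)
    sig = rho.copy()
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        u = c_fb * np.real(np.trace(Sx @ rho))                # placeholder feedback law
        drho = (-1j * u * (H @ rho - rho @ H) + D_super(L, rho)) * dt + H_super(L, rho) * dW
        dsig = (-1j * u * (H @ sig - sig @ H) + D_super(L, sig) + D_super(M, sig)) * dt \
               + H_super(L, sig) * dW
        rho, sig = rho + drho, sig + dsig
        rho, sig = rho / np.trace(rho).real, sig / np.trace(sig).real   # renormalise
    return np.linalg.norm(rho - sig, "fro")

D_samples = [trajectory() for _ in range(200)]
delta = np.sqrt(2 * (1 - np.exp(-np.linalg.norm(M, "fro") ** 2 * T)))
print(np.mean(D_samples), delta)   # the sample mean of D_c(T) should lie below delta
```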
## VI Conclusions
In this paper, we have derived the upper bound \(\delta\) of the distance between the two controlled quantum states in the absence and presence of decoherence. Note that \(\delta\) shows better performance for classical control methodologies such as Hamiltonian engineering or the MFC method, and it can also be easily computed without solving any equation. However, the drawback of \(\delta\) is that it cannot take information on the initial state into account; for example, if \(\rho(t)\) is fixed at \(\rho_{0}\) under \(H=0\), and moreover \(\rho_{0}\) is a dark state of \(M\) (i.e., \(\mathcal{D}[M]\rho_{0}=0\)), then \(D(t)=0\) for \(t>0\), but \(\delta\) is greater than zero. Therefore, there is room for improving our bound. Moreover, it is worth mentioning that \(\delta\) is achievable, which means that it sometimes becomes the best bound. However, the equality condition of our theorem is unclear, and thus finding the class of systems achieving \(\delta\) is also interesting future work. We hope that the result shown in this paper will provide meaningful insight into quantum control.
## Appendix A Proof of the theorem 1
We begin with a case where the dynamics of quantum systems are given by the master equation. We introduce the operator \(\zeta(t)=\rho(t)-\sigma(t)\) and take the time derivative:
\[\frac{d\zeta(t)}{dt}=-i[u(t)H,\zeta(t)]+\mathcal{D}[L]\zeta(t)- \mathcal{D}[M]\sigma(t). \tag{21}\]
Next we consider
\[\frac{dD^{2}(t)}{dt}=2\mathrm{Tr}\left[\zeta(t)\frac{d\zeta(t)}{ dt}\right]\] \[=2\left(\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[L]\zeta(t)\right\} -\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[M]\sigma(t)\right\}\right). \tag{22}\]
From now on, we calculate an upper bound of (22). We first focus on the first term \(\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[L]\zeta(t)\right\}\):
\[\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[L]\zeta(t)\right\}\] \[=\mathrm{Tr}\left[\zeta(t)\left(L\zeta(t)L^{\dagger}-\frac{1}{2}L^{\dagger}L\zeta(t)-\frac{1}{2}\zeta(t)L^{\dagger}L\right)\right]\] \[=\mathrm{Tr}\left[\zeta(t)L\zeta(t)L^{\dagger}\right]-\mathrm{Tr}\left[\zeta^{2}(t)L^{\dagger}L\right]. \tag{23}\]
Figure 3: (a) Time evolution of \(\mathbb{E}[z_{c}(t)]\), with a special type of MBF control input \(u(t)\) and \(\kappa=1\). (b) Plots of \(\delta\) (solid red line) and simulated values of \(\mathbb{E}[D_{c}(T)]\) (blue dot) as a function of \(\gamma\).
Figure 2: (a) Time evolution of \(z(t)\), with parameter value satisfying \(\kappa_{2}=4\kappa_{1}\). (b) Plots of \(\delta\) (solid red line) and simulated values of \(D(T)\) (blue dot) as a function of \(\gamma\).
We can easily check that \(\mathrm{Tr}\left[\zeta^{2}(t)L^{\dagger}L\right]\) is greater than or equal to zero, because
\[\mathrm{Tr}\left[\zeta^{2}(t)L^{\dagger}L\right]=\|L\zeta(t)\|_{\mathrm{F}}^{2}\geq 0. \tag{10}\]
If \(\zeta(t)\) is positive semidefinite, \(\zeta(t)\geq 0\), \(\mathrm{Tr}[L\zeta(t)L^{\dagger}]=\|L\zeta^{\frac{1}{2}}(t)\|_{\mathrm{F}}^{2}\geq 0\). Then, using the relation \(\mathrm{Tr}(AB)\leq\mathrm{Tr}(A)\mathrm{Tr}(B)\) (\(A\), \(B\geq 0\)), \(\mathrm{Tr}\left[\zeta(t)L\zeta(t)L^{\dagger}\right]\) is bounded as follows:
\[\mathrm{Tr}\left[\zeta(t)L\zeta(t)L^{\dagger}\right] \leq\mathrm{Tr}[\zeta(t)]\mathrm{Tr}[L\zeta(t)L^{\dagger}]\] \[=0. \tag{11}\]
Likewise, if \(\zeta(t)\leq 0\), the inequality (11) also holds. Therefore, we find
\[\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[L]\zeta(t)\right\}\leq 0. \tag{12}\]
Next the second term \(-\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[M]\sigma(t)\right\}\) on the righthand side of Eq. (10) can be upper bounded as follows:
\[-\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[M]\sigma(t)\right\} =-\mathrm{Tr}\left[\zeta(t)\left(M\sigma(t)M^{\dagger}-\frac{1}{2} M^{\dagger}M\sigma(t)-\frac{1}{2}\sigma(t)M^{\dagger}M\right)\right]\] \[=\frac{1}{2}\mathrm{Tr}\left[M^{\dagger}M\sigma(t)\rho(t)\right] +\frac{1}{2}\mathrm{Tr}\left[M^{\dagger}M\rho(t)\sigma(t)\right]-\mathrm{Tr} \left[M^{\dagger}\rho(t)M\sigma(t)\right]\] \[\quad+\mathrm{Tr}\left\{\sigma(t)\mathcal{D}[M]\sigma(t)\right\}\] \[\leq\frac{1}{2}\mathrm{Tr}[M^{\dagger}M]\mathrm{Tr}[\sigma(t) \rho(t)]+\frac{1}{2}\mathrm{Tr}[M^{\dagger}M]\mathrm{Tr}[\rho(t)\sigma(t)]- \mathrm{Tr}\left[M^{\dagger}\rho(t)M\sigma(t)\right]\] \[\leq\|M\|_{\mathrm{F}}^{2}\left(1-\frac{1}{2}D^{2}(t)\right)- \mathrm{Tr}\left[M^{\dagger}\rho(t)M\sigma(t)\right]. \tag{13}\]
In the first inequality, the result \(\mathrm{Tr}\left\{\sigma(t)\mathcal{D}[M]\sigma(t)\right\}\leq 0\), obtained analogously to (12), and \(\mathrm{Tr}(AB)\leq\mathrm{Tr}(A)\mathrm{Tr}(B)\) were used. In the second inequality, Schwarz's inequality \(\mathrm{Tr}(XY)\leq\|X\|_{\mathrm{F}}\|Y\|_{\mathrm{F}}\) for arbitrary matrices \(X\) and \(Y\) and \(\mathrm{Tr}[\rho(t)\sigma(t)]=1-D^{2}(t)/2\) were used. Moreover, by introducing the unitary matrices \(U\) and \(V\), \(\rho(t)\) and \(\sigma(t)\) can be decomposed as follows:
\[\rho(t)=U\Lambda_{\rho}U^{\dagger},\ \ \sigma(t)=V\Lambda_{\sigma}V^{\dagger}, \tag{14}\]
where
\[\Lambda_{\rho} =\mathrm{diag}\{\lambda_{\rho,1},\cdots,\lambda_{\rho,N}\}\] \[\Lambda_{\sigma} =\mathrm{diag}\{\lambda_{\sigma,1},\cdots,\lambda_{\sigma,N}\},\]
are the diagonal matrices with positive eigenvalues \(\lambda_{\rho,j}\) and \(\lambda_{\sigma,j}\) (\(1\leq j\leq N\)) of \(\rho(t)\) and \(\sigma(t)\). Using these, we find \(\mathrm{Tr}\left[M^{\dagger}\rho(t)M\sigma(t)\right]\geq 0\), because
\[\mathrm{Tr}\left[M^{\dagger}\rho(t)M\sigma(t)\right] =\mathrm{Tr}\left[M^{\dagger}U\Lambda_{\rho}U^{\dagger}MV\Lambda_{\sigma}V^{\dagger}\right]\] \[=\|\Lambda_{\rho}^{\frac{1}{2}}U^{\dagger}MV\Lambda_{\sigma}^{\frac{1}{2}}\|_{\mathrm{F}}^{2}\geq 0. \tag{15}\]
Hence, we obtain the following upper bound:
\[-\mathrm{Tr}\left\{\zeta(t)\mathcal{D}[M]\sigma(t)\right\}\leq\|M\|_{ \mathrm{F}}^{2}\left(1-\frac{1}{2}D^{2}(t)\right). \tag{16}\]
Combining Eqs. (10)-(16), we have
\[\frac{dD^{2}(t)}{dt}\leq 2\|M\|_{\mathrm{F}}^{2}\left(1-\frac{1}{2}D^{2}(t) \right). \tag{17}\]
On the other hand,
\[\frac{dD^{2}(t)}{dt}=2D(t)\frac{dD(t)}{dt}. \tag{18}\]
Thus, from Eqs. (17) and (18),
\[\frac{dD(t)}{dt}\leq\|M\|_{\mathrm{F}}^{2}\left(\frac{2-D^{2}(t)}{2D(t)} \right). \tag{19}\]
Finally, by integrating both sides of Eq. (16) from \(0\) to \(T\), we end up with the upper bound:
\[D(T)\leq\delta:=\sqrt{2\left(1-e^{-\alpha T}\right)}, \tag{20}\]
where \(\alpha=(1/T)\int_{0}^{T}\|M(t)\|_{\mathrm{F}}^{2}dt\).
Next, we show that Theorem 1 also holds in the MFC case. Introducing \(\zeta_{c}(t)=\rho_{c}(t)-\sigma_{c}(t)\), we find the infinitesimal change of \(D_{c}^{2}(t)\):
\[dD_{c}^{2}(t) =2\mathrm{Tr}\left[\zeta_{c}(t)d\zeta_{c}(t)\right]+\mathrm{Tr} \left[d\zeta_{c}(t)\cdot d\zeta_{c}(t)\right]\] \[=2\left(\mathrm{Tr}\left\{\zeta_{c}(t)\mathcal{D}[L]\zeta_{c}(t) \right\}-\mathrm{Tr}\left\{\zeta_{c}(t)\mathcal{D}[M]\sigma(t)\right\}\right)dt\] \[\quad+\mathrm{Tr}\left[\left(\mathcal{H}[L]\rho_{c}(t)\right)^{2} \right]dt+\mathrm{Tr}\left[\left(\mathcal{H}[L]\sigma_{c}(t)\right)^{2} \right]dt\] \[\quad+(\cdots)dW(t), \tag{21}\]
where we used the Ito rule \(dW(t)dt=dtdW(t)=(dt)^{2}=0\). In the same manner shown above, the righthand side of (A) is bounded as follows:
\[dD_{c}^{2}(t) \leq 2\|M\|_{\rm F}^{2}\left(1-\frac{1}{2}D_{c}^{2}(t)\right)+{\rm Tr }\left[\left({\cal H}[L]\rho_{c}(t)\right)^{2}\right]dt\] \[+{\rm Tr}\left[\left({\cal H}[L]\sigma_{c}(t)\right)^{2}\right]. \tag{10}\]
Here, because of \({\rm Tr}(X^{2})\leq{\rm Tr}(X)^{2}\) and \({\rm Tr}[{\cal H}[L]\rho]=0\), we have
\[{\rm Tr}\left[\left({\cal H}[L]\rho(t)\right)^{2}\right]\leq{\rm Tr}\left[{ \cal H}[L]\rho(t)\right]^{2}=0, \tag{11}\]
and \({\rm Tr}\left[\left({\cal H}[L]\sigma_{c}(t)\right)^{2}\right]=0\). Thus, Eq. (16) becomes
\[dD_{c}^{2}(t)\leq 2\|M\|_{\rm F}^{2}\left(1-\frac{1}{2}D_{c}^{2}(t)\right) dt+(\cdots)dW(t). \tag{12}\]
Taking the ensemble average of the both sides of (12),
\[\frac{d\mathbb{E}[D_{c}^{2}(t)]}{dt}\leq 2\|M(t)\|_{\rm F}^{2}\left(1-\frac{1} {2}\mathbb{E}[D_{c}^{2}(t)]\right). \tag{13}\]
Then, by integrating this inequality, we obtain
\[\mathbb{E}[D_{c}^{2}(T)]\leq 2(1-e^{-\alpha T}). \tag{14}\]
Using the relation \(\mathbb{E}(x^{2})\geq\mathbb{E}(x)^{2}\), we consequently obtain the same \(\delta\):
\[\mathbb{E}[D_{c}(T)]\leq\delta=\sqrt{2(1-e^{-\alpha T})}. \tag{15}\]
|
2310.01231 | Jupiter Mass Binary Objects in the Trapezium Cluster | A key outstanding question in star and planet formation is how far the
initial mass function of stars and sub-stellar objects extends, and whether or
not there is a cut-off at the very lowest masses. Isolated objects in the
planetary-mass domain below 13 Jupiter masses, where not even deuterium can
fuse, are very challenging to observe as these objects are inherently faint.
Nearby star-forming regions provide the best opportunity to search for them
though: while they are young, they are still relatively warm and luminous at
infrared wavelengths. Previous surveys have discovered a handful of such
sources down to 3--5 Jupiter masses, around the minimum mass limit established
for formation via the fragmentation of molecular clouds, but does the mass
function extend further? In a new James Webb Space Telescope near-infrared
survey of the inner Orion Nebula and Trapezium Cluster, we have discovered and
characterised a sample of 540 planetary-mass candidates with masses down to 0.6
Jupiter masses, demonstrating that there is indeed no sharp cut-off in the mass
function. Furthermore, we find that 9\% of the planetary-mass objects are in
wide binaries, a result that is highly unexpected and which challenges current
theories of both star and planet formation. | Samuel G Pearson, Mark J McCaughrean | 2023-10-02T14:22:34Z | http://arxiv.org/abs/2310.01231v1 | # Jupiter Mass Binary Objects in the Trapezium Cluster
###### Abstract
A key outstanding question in star and planet formation is how far the initial mass function of stars and sub-stellar objects extends, and whether or not there is a cut-off at the very lowest masses. Isolated objects in the planetary-mass domain below 13 Jupiter masses, where not even deuterium can fuse, are very challenging to observe as these objects are inherently faint. Nearby star-forming regions provide the best opportunity to search for them though: while they are young, they are still relatively warm and luminous at infrared wavelengths. Previous surveys have discovered a handful of such sources down to 3-5 Jupiter masses, around the minimum mass limit established for formation via the fragmentation of molecular clouds, but does the mass function extend further? In a new James Webb Space Telescope near-infrared survey of the inner Orion Nebula and Trapezium Cluster, we have discovered and characterised a sample of 540 planetary-mass candidates with masses down to 0.6 Jupiter masses, demonstrating that there is indeed no sharp cut-off in the mass function. Furthermore, we find that 9% of the planetary-mass objects are in wide binaries, a result that is highly unexpected and which challenges current theories of both star and planet formation.
**Keywords:** surveys, (stars:) binaries: visual, (stars:) brown dwarfs, stars: low-mass
## 1 Main
The Orion Nebula is arguably the most famous and well-studied H ii region in the sky. It is the nearest site of recent massive star formation, producing stars spanning
the full spectral range from massive O-types to M dwarfs, a rich population of sub-stellar brown dwarfs, and many planetary-mass objects. Collectively, these objects are known as the Orion Nebula Cluster and the densest inner core, within 0.5 parsec of the eponymous Trapezium stars, is called the Trapezium Cluster, with a core density reaching \(5\times 10^{4}\) stars pc\({}^{-3}\)[1]. Due to its large population of \(\sim 2000\) members [2], young age (0.5-2 Myr) [3], low foreground extinction (A\({}_{\rm v}\sim 1\)) [4], and close proximity to the Sun [\(390\pm 2\) pc; 5], the Trapezium Cluster provides an ideal laboratory for studies of star and planet formation [6; 7].
Sub-stellar objects below the hydrogen-burning limit [0.075 \(M_{\odot}\); 8; 9; 10] never reach the main sequence and continually cool, becoming fainter as they age. However, when young, sub-stellar sources remain relatively luminous and easy to detect as they shed gravitational energy while contracting: brown dwarfs also undergo a period of deuterium fusion, while sources below \(13\,M_{\rm Jup}\), the planetary-mass objects (henceforth PMOs) do not. The Trapezium Cluster is a particularly advantageous location to study such sources: it is young and has a large enough sample size for robust population statistics, while its relative proximity, location out of the galactic plane, and the dense molecular cloud behind it help minimise contamination due to foreground or background field stars.
Past ground- and space-based surveys of the Trapezium Cluster have revealed a rich population of brown dwarfs and PMOs down to \(\sim 3\,M_{\rm Jup}\)[11; 12; 13; 14; 15; 16; 17; 18; 19; 20], but reaching masses below that is challenging, partly because lower-mass objects are cooler and thus emit most of their energy in the thermal infrared, and partly due to the bright background of the Orion Nebula. Similarly, spectroscopically-confirmed objects below the deuterium burning limit remain relatively rare due to their faintness [21; 22; 23; 24; 25; 26; 27].
As a large, diffraction-limited, cryogenic space telescope, however, the JWST is ideally suited to pushing further into the planetary-mass domain than previously possible, and the wide range of filters allow us to search for tell-tale atmospheric features which can help distinguish between bona fide PMOs and distant field stars. Imaging surveys with NIRCam over wide areas can discover many new candidates, while multi-object follow-up spectroscopy is possible with NIRSpec.
An \(11\times 7.5\) arcminute (or \(1.2\times 0.8\) parsec) region of the inner Orion Nebula and Trapezium Cluster was observed using the Near Infrared Camera (NIRCam) on the NASA/ESA/CSA James Webb Space Telescope (JWST), as part of Cycle 1 GTO programme 1256. A total of 34.9 hours of observing were carried out between 26 September and 2 October 2022, split across 12 filters: F115W, F140M, F162M, F182M, F187N, F212N, F277W, F300M, F335M, F360M, F444W, and F470N (see McCaughrean & Pearson 2023, submitted, for full details).
Young (1 Myr) PMOs with masses between 1-13 \(M_{\rm Jup}\) have effective temperatures of 890-2520 K [28], which means that their spectral energy distributions (SED) peak in the range 1-3.3 \(\mu\)m. These SEDs are not blackbodies, but are dominated by broad molecular absorption features as seen in Figure 1. The upper panel shows a model spectrum of a young PMO with \(T_{\rm eff}=900\) K and log(g) = 5.0, taken from the ATMO 2020 chemical equilibrium model set [28]. The molecular absorption bands due to H\({}_{2}\)O, CH\({}_{4}\), and CO are shown in blue, red, and black, respectively, and are seen to radically alter the SED, confining the spectrum to a series of narrow peaks and troughs. Our selection of NIRCam filters was designed to target these peaks and troughs, in order to robustly distinguish PMOs from more massive and reddened background objects. We have used photometry in the F115W, F140M, F162M, F182M, and F277W filters to measure the depth of the 1.4 and 1.9 \(\mu\)m H\({}_{2}\)O absorption features, classifying sources according to Equation 1. We have also quantified the level of H\({}_{2}\)O absorption using the W-index, defined in Equation 2. As this index utilises the short-wavelength filters, it is susceptible to reddening and so can only be treated as a reliable indicator for low-extinction sources. To identify lower-mass, cooler sources, we use F300M, F335M, and F360M photometry to measure the 3.35 \(\mu\)m CH\({}_{4}\) absorption feature, classifying sources according to Equation 3.
The power of medium-band near-infrared photometry to identify PMOs using these absorption features is demonstrated in Figure 1. The blue curve in the middle panel shows JWST NIRCam photometry of a candidate PMO, while the black curve shows synthetic photometry derived from evolutionary models of a 1 Myr old, 1 \(M_{\rm Jup}\) object at the distance of Orion [10], calculated using a new equation-of-state for dense hydrogen-helium mixtures [29], combined with the atmospheric models from ATMO 2020 [28]. The model fluxes have been adjusted with the further addition of A\({}_{\rm V}=20\) of reddening to best fit the candidate PMO. The strong molecular absorption dips can clearly be seen in the F140M and F182M filters due to the presence of H\({}_{2}\)O, as well as a strong dip in F335M due to CH\({}_{4}\). In contrast, the black curve in the bottom panel shows synthetic photometry of a model 1 Myr, 2 \(M_{\rm Jup}\) object adjusted by A\({}_{\rm V}=14\). This clearly provides a bad fit to the JWST photometry of a candidate reddened background star, seen in red: a much better solution to its smooth SED is arrived at by assuming a reddened blackbody at \(T_{\rm eff}=4042\) K, with reddening of A\({}_{\rm V}=19.9\).
Figure 1: The upper panel shows a model spectrum of a young PMO with \(T_{\text{eff}}=900\,\text{K}\) and \(\log(\text{g})=5.0\) from the ATMO 2020 chemical equilibrium model set [28]. The molecular absorption bands with line intensities greater than \(5\times 10^{-23}\) cm\({}^{-1}\)/(molecule cm\({}^{-2}\)) at T\({}_{\text{ref}}=900\,\text{K}\) for H\({}_{2}\)O, CH\({}_{4}\), and CO are shown in blue, red, and black, respectively [30]. Due to the low temperatures of PMOs, these molecules are present in their atmospheres and their absorption radically alters the spectral energy distribution into a series of narrow peaks and troughs. Using medium- and wide-band photometry, these peaks and troughs can be readily identified and provide a robust method for distinguishing PMOs from more massive background objects. In the middle panel, the black line shows synthetic photometry of a 1 Myr, 1 \(M_{\text{Jup}}\) PMO, using the atmospheric models from ATMO 2020 [28] combined with the new equation of state from Chabrier & Debras (2021) [10; 29]. The model photometry has been reddened by A\({}_{\text{V}}=20\). The blue line shows our JWST NIRCam photometry of a candidate \(\sim 1\,M_{\text{Jup}}\) PMO: the nominal errorbars are smaller than the markers. The strong molecular absorption dips can clearly be seen in the F140M and F182M filters due to the presence of H\({}_{2}\)O, as well as a strong dip in F335M due to CH\({}_{4}\). The match between the data and model SEDs is excellent. The bottom panel shows synthetic photometry of a 1 Myr, 2 \(M_{\text{Jup}}\) PMO with A\({}_{\text{V}}=14\) in black, alongside NIRCam photometry of a candidate reddened background star shown in red. The candidate reddened background star does not show any molecular absorption features, and instead has a much smoother spectral energy distribution: it is well fit by a blackbody with \(T_{\text{eff}}=4042\,\text{K}\) and reddening of A\({}_{\text{V}}=19.9\). For reference, the bandpasses of the nine NIRCam filters (F115W, F140M, F162M, F182M, F277W, F300M, F335M, F360M, and F444W) used to classify the sources are shown along the bottom of the plot.
For all unsaturated stars in our JWST survey (Pearson & McCaughrean 2023, in prep), we have fit the medium- and wide-band filter SED to evolutionary models by varying the mass and extinction, assuming a constant age of 1 Myr and a distance of 390 pc. We use three grids of models, one using equilibrium chemistry (CEQ), and two using non-equilibrium chemistry (NEQ\({}_{\rm weak}\) & NEQ\({}_{\rm strong}\)) [10]. These models cover PMOs with masses in the range 0.0004 \(M_{\odot}\) to 0.015 \(M_{\odot}\) (0.42 \(M_{\rm Jup}\)to 15.7 \(M_{\rm Jup}\)). We also use the models which cover brown dwarfs and low mass stars from 0.01 \(M_{\odot}\) to 1.4 \(M_{\odot}\)[8]. The extinction is allowed to vary from A\({}_{\rm V}\) = 1-100. For each combination of model mass and extinction, we reddened the model SED using the reddening law from [31] with R\({}_{\rm V}\) = 3.1, and calculated the \(\chi^{2}\) goodness of fit. The lowest \(\chi^{2}\) value was taken as the best fit. This process was also repeated using a blackbody model with \(T_{\rm eff}\) = 500-50,000 K and A\({}_{\rm V}\) = 0-100.
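The grid search described above can be summarised with a short sketch. This is a minimal illustration only: the array and function names, the pre-computed model flux grid, and the per-filter extinction curve are placeholders rather than the actual pipeline used in this work, and the [31] reddening law with R\({}_{\rm V}=3.1\) is assumed to be supplied as a set of A\({}_{\lambda}\)/A\({}_{\rm V}\) values for the fitted filters.

```python
import numpy as np

def best_fit_sed(obs_flux, obs_err, model_grid, ext_curve,
                 av_grid=np.arange(1.0, 100.5, 0.5)):
    """Brute-force chi-square fit of a filter SED to a reddened model grid.

    obs_flux, obs_err : per-filter fluxes and uncertainties of one source
    model_grid        : dict mapping model mass (M_sun) -> unreddened model
                        fluxes in the same filters (hypothetical grid,
                        pre-computed for 1 Myr and 390 pc)
    ext_curve         : A_lambda / A_V for each filter (R_V = 3.1 law)
    """
    best_chi2, best_mass, best_av = np.inf, None, None
    for mass, model_flux in model_grid.items():
        for av in av_grid:
            # redden the model SED: F_red = F * 10^(-0.4 * A_lambda)
            reddened = model_flux * 10.0 ** (-0.4 * av * ext_curve)
            chi2 = np.sum(((obs_flux - reddened) / obs_err) ** 2)
            if chi2 < best_chi2:
                best_chi2, best_mass, best_av = chi2, mass, av
    return best_mass, best_av, best_chi2
```

The same loop can be run over blackbody models by replacing the model grid with Planck-curve photometry over the quoted temperature range.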
Figure 2 shows dereddened NIRCam photometry for a sample of candidate brown dwarfs and PMOs in blue. In each case, the light grey curve shows the best-fitting unreddened model assuming an age of 1 Myr and a distance of 390 pc [8, 10]. This plot demonstrates how the SED sampled by our nine NIRCam medium- and wide-band filters evolves with decreasing mass. The H\({}_{2}\)O absorption features at 1.4 and 1.9 \(\mu\)m are already present for brown dwarfs and strengthen with decreasing effective temperature. The CH\({}_{4}\) absorption at 3.35 \(\mu\)m emerges at temperatures below \(T_{\rm eff}\sim\) 1500 K, making it sensitive to PMOs below 5 \(M_{\rm Jup}\). It also strengthens as the effective temperature decreases.
\[\mathrm{F140M-F162M}\geq 1.605\,(\mathrm{F162M-F182M})+0.565 \tag{1}\]
\[\mathrm{W\text{-}index=F115W+2\times(F140M-F162M+F182M)-F277W} \tag{2}\]
\[\mathrm{F335M-F360M}\geq 0.488\,(\mathrm{F300M-F335M})+0.206 \tag{3}\]
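Expressed in code, the three selection criteria above reduce to simple colour cuts on the per-filter Vega magnitudes. The sketch below is a minimal illustration; the magnitude dictionary and the function name are hypothetical, and the W-index threshold of 0.47 applied to low-extinction sources (see below) appears only as a comment.

```python
def classify_pmo_candidate(m):
    """Apply the photometric selection criteria of Equations 1-3.

    `m` maps NIRCam filter names to Vega magnitudes for one source.
    Returns the H2O flag (Eq. 1), the W-index (Eq. 2), and the CH4 flag (Eq. 3).
    """
    h2o = (m["F140M"] - m["F162M"]) >= 1.605 * (m["F162M"] - m["F182M"]) + 0.565
    w_index = m["F115W"] + 2.0 * (m["F140M"] - m["F162M"] + m["F182M"]) - m["F277W"]
    ch4 = (m["F335M"] - m["F360M"]) >= 0.488 * (m["F300M"] - m["F335M"]) + 0.206
    # For low-extinction sources (A_V < 10), a W-index >= 0.47 is additionally
    # required in the selection described later in the text.
    return h2o, w_index, ch4
```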
Background field stars are ruled out on the basis of their smooth SED, but older, cooler, foreground brown dwarfs could be identified as false positives. However, given the relative proximity of the Orion Nebula and its location out of the galactic plane, such contamination is expected to be minimal [32]. Another potential contaminant is distant galaxies. Towards the centre of the region where the extinction due to the background OMC-1 core is high [A\({}_{\mathrm{V}}\geq\) 50-100, 33], this is not a concern, but at the edges of our survey, a population of galaxies is evident (McCaughrean & Pearson 2023, submitted). The aperture photometry technique described in the Supplementary Material detects extended sources like galaxies and, inter alia, circumstellar disks and outflow nebulosities, but also close binaries, so such sources are marked and checked manually.
In this way, we have identified 540 candidate PMOs in the Trapezium Cluster with SEDs best fit by evolutionary models with masses of 13 \(M_{\mathrm{Jup}}\) or lower. For low
Figure 2: The blue lines show dereddened NIRCam photometry for a sample of candidate brown dwarfs and PMOs in the Trapezium Cluster. This plot demonstrates how the SED sampled by our nine NIRCam medium- and wide-band filters evolves with decreasing mass for young brown dwarfs and PMOs. The light grey lines show the best-fitting unreddened model assuming an age of 1 Myr and distance of 390 pc [8, 10]. The H\({}_{2}\)O absorption features at 1.4 and 1.9 \(\mu\)m are present for brown dwarfs and strengthen with decreasing mass, while below 5 \(M_{\mathrm{Jup}}\), CH\({}_{4}\) absorption at 3.35 \(\mu\)m becomes prominent and also strengthens with decreasing effective temperature.
extinction sources (\(\mathrm{A_{V}}<10\)), we also include candidates which show \(\mathrm{H_{2}O}\) absorption according to Equation 1 and which have a W-index of \(\geq 0.47\) (Equation 2). A total of 168 of these PMO candidates also show \(\mathrm{CH_{4}}\) absorption and are best fit by models with masses of \(5\,M_{\mathrm{Jup}}\) or less. The most extreme candidate PMO in our sample has a mass of \(0.6\,M_{\mathrm{Jup}}\) or 2 Saturn masses. Our PMO candidates show a smooth continuation of the IMF to low masses, with no evidence for a sharp cutoff. We find no evidence for a large population of marginally-detected sources, which indicates that it is very unlikely that the mass function rises significantly below \(1\,M_{\mathrm{Jup}}\). We cautiously note a moderate increase in the number of objects in the 1-3 \(M_{\mathrm{Jup}}\) range, which might be consistent with an overdensity of low-mass PMOs formed through ejection [32, 34]. However, as these are currently unconfirmed PMO candidates and the masses are not well constrained, we will leave a full analysis of the IMF to future work, which will greatly benefit from scheduled follow-up spectroscopy (JWST cycle 2 programme 2770).
We find that the chemical equilibrium models (CEQ) give the best fit to the data for PMOs down to \(2\,M_{\mathrm{Jup}}\). Below this mass, we see that the \(\mathrm{NEQ}_{\mathrm{weak}}\) models are preferred, with CEQ second best, and \(\mathrm{NEQ}_{\mathrm{strong}}\) worst. For the 0.6-2 \(M_{\mathrm{Jup}}\) PMOs, we find that the F360M, F162M, and F444W filters are generally the most poorly fit by the CEQ models. This could be an indication that vertical mixing is affecting the nitrogen and carbon non-equilibrium chemistry [_cf._ 35], causing an increased abundance and absorption of CO and \(\mathrm{CO_{2}}\), suppressing the flux in the 3.5-5 \(\mu\)m range. We will obtain R\(\sim\)100 NIRSpec prism spectroscopy for many of these PMO candidates as part of JWST cycle 2 programme 2770, which should help further investigate this tentative finding.
A remarkable finding is that a significant fraction of our candidate PMOs are in binaries. Across the initial mass function, the multiplicity fraction, defined as the fraction of primaries that have at least one companion, is seen to decrease with mass. For massive O- and B-type stars, the multiplicity fraction is close to 100%; this fraction decreases to 50-60% for solar type stars [36], and drops to 15% for higher-mass brown dwarfs.
to find \(<2\) field brown dwarfs across the full NIRCam mosaic [32]. Furthermore, as is immediately evident just visually, we can exclude the possibility that many if any of these are chance alignments: based on the density of sources in our survey, we can calculate that we would expect to find 3.1 chance alignments within 1 arcsec across the whole region.
Assuming then that the JuMBOs are real binary PMOs, we can compare their statistical properties (see Table 3) with higher-mass systems. The JuMBOs span the full mass range of our PMO candidates, from \(13\,M_{\rm Jup}\) down to \(0.7\,M_{\rm Jup}\). They have evenly distributed separations between \(\sim\)25-390 au, which is significantly wider than the average separation of brown dwarf-brown dwarf binaries, which peaks at \(\sim 4\) au [42, 43]. However, as our imaging survey is only sensitive to visual binaries with separations \(>25\) au, we cannot rule out an additional population of JuMBOs with closer orbits. For this reason we take 9% as a lower bound for the PMO multiplicity fraction. The average mass ratio of the JuMBOs is \(q=0.66\). While there are a significant number of roughly equal-mass JuMBOs, only 40% of them have \(q\geq 0.8\). This is much lower than the typical mass ratios for brown dwarfs, which very strongly favour equal masses [42, 43].
Figure 4 shows the wide binary fraction (WBF) as a function of primary mass, where wide is defined as projected separations \(\geq\)100 au, equivalent to 0.26 arcsec at the distance of the Trapezium Cluster. Each data point is illustrated with a cross: the horizontal bar indicates the mass interval, while the height of the vertical bar shows the statistical uncertainty in the WBF. The blue points show a compilation of multiplicity surveys of the stellar neighbourhood [43]. The green points show the WBF for stars and brown dwarfs in the Trapezium Cluster calculated by compiling known binaries from the literature [44, 45, 46, 47, 48, 49, 50, 51, 52, 53]. The red points are from this work and show the WBF for PMOs in the Trapezium Cluster. The WBF starts at \(50-60\%\) for massive stars and decreases monotonically across three orders of magnitude in mass, down to \(\sim\)2% in the brown dwarf regime in the Trapezium Cluster. This is consistent with the current predictions of star formation models and the consensus view that the more massive brown dwarfs (\(>30\)M\({}_{\rm Jup}\)) share the same formation mechanisms as stars [54, 55, 56].
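The red points in Figure 4 follow from a straightforward binned calculation. The sketch below is a minimal illustration with hypothetical input arrays; the paper does not specify its uncertainty estimator, so a simple binomial error is assumed here.

```python
import numpy as np

def wide_binary_fraction(primary_mass, has_wide_companion, bin_edges):
    """Wide binary fraction (companion at >= 100 au projected) per mass bin."""
    wbf, err = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (primary_mass >= lo) & (primary_mass < hi)
        n = sel.sum()
        if n == 0:
            wbf.append(np.nan)
            err.append(np.nan)
            continue
        p = has_wide_companion[sel].mean()
        wbf.append(p)
        err.append(np.sqrt(p * (1.0 - p) / n))  # binomial uncertainty (assumption)
    return np.array(wbf), np.array(err)
```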
However, the PMOs clearly break that trend and prediction, rising back up to at least 9%. The sudden divergence and increased WBF at planetary masses suggests that new formation mechanisms must come into play at such masses. Broadly speaking, there are two key formation scenarios to consider. If the JuMBOs formed via a "star-like" mechanism, _i.e.,_ via core collapse and turbulent fragmentation, then there must be some fundamental extra ingredient involved at these very low masses. Indeed, the JuMBOs in our sample cover the whole range of PMO masses, down to \(0.7\,M_{\rm Jup}\), well below the minimum mass that is thought to be able to form via 3D fragmentation or 2D shocks [57, 58, 55, 59]: the formation of such low mass objects raises significant questions in itself.
Alternatively, perhaps the JuMBOs formed through a "planet-like" mechanism in a circumstellar disk around a host star and were violently ejected. Ejections can be caused through planet-planet scattering in the disk [60] or by dynamical interactions between stars [61]. The latter are relatively common in dense star-forming regions
like the Trapezium Cluster. In either case, however, how _pairs_ of young planets can be ejected simultaneously and remain bound, albeit weakly at relatively wide separations, remains quite unclear. The ensemble of PMOs and JuMBOs that we see in the Trapezium Cluster might arise from a mix of both of these "classical" scenarios, even if both have significant caveats, or perhaps a new, quite separate formation mechanism, such as a fragmentation of a star-less disk is required [62, 63].
Figure 3: A subsection of the full JWST NIRCam short-wavelength colour composite image of the Orion Nebula, located to the east of the Trapezium and south of the Dark Bay. It is centred at 05h35m27.0s, \(-05^{\circ}23\)’27” (J2000.0) and covers \(52.3\times 35.3\) arcsec or \(0.10\times 0.067\) pc assuming a distance of 390 pc. The image has been rotated with N left and E down to show this E-W strip of JuMBOs more effectively. Five JuMBOs are highlighted with zoomed cutouts: all ten of these PMOs have masses \(<7\,M_{\rm Jup}\).
The advent of JWST marks an exciting milestone for the field of star and planet formation, where observations of isolated objects down to and below \(1\,M_{\rm Jup}\) will soon become routine. Imaging with NIRCam will reveal many candidates through filter-based SEDs as shown here, while follow-up spectroscopy with NIRSpec (and in lower-density regions with NIRISS) will allow us to place much tighter constraints on their effective temperatures, spectral types, and chemical compositions. We will obtain NIRSpec prism spectra of many of the Trapezium Cluster PMO and JuMBO candidates as part of JWST programme 2770 in spring 2024. It would be particularly beneficial to see how the demographics of PMOs change as a function of environmental parameters such as cluster density and how they evolve with age, as this may provide crucial insights that allow us to differentiate between the "star-like" and "planet-like"
Figure 4: The wide binary fraction (WBF) as a function of primary mass, where wide is defined as projected separations \(\geq\)100 au, equivalent to 0.26 arcsec at the distance of the Trapezium Cluster. For each point, the horizontal bar indicates the mass interval and the height of the vertical bar indicates the statistical uncertainty in the WBF. Blue points show a compilation of multiplicity surveys for the solar neighbourhood [43]. Green points show the WBF for stars and brown dwarfs in the Trapezium Cluster [44, 45, 46, 47, 48, 49, 50, 51, 52, 20]. Red points are for PMOs in the Trapezium Cluster from this work.
formation scenarios for PMOs and JuMBOs alike. It is also clear that further simulations and modelling will be needed to understand how a substantial population of objects can form below \(5\,M_{\rm Jup}\) and how a significant fraction of them can end up in multiple systems.
## 2 Supplementary Materials
### Observations
The data presented in this paper were obtained with the Near Infrared Camera (NIRCam) onboard the James Webb Space Telescope (JWST), as part of Cycle 1 GTO programme 1256 (P.I. M. McCaughrean). The observations cover an \(11^{\prime}\times 7.5^{\prime}\) area focused on the inner region of the Trapezium Cluster. A total of 34.9 hours of observing were carried out between September 26th and October 2nd 2022, and split between 12 filters: F115W, F140M, F162M, F182M, F187N, F212N, F277W, F300M, F335M, F360M, F444W and F470N. The observations are split into two mosaic patterns. For two wide-band filters (F115W and F444W), a 7 mosaic covers the \(11^{\prime}\times 7.5^{\prime}\) field with considerable overlap between the rows and columns. Due to the mosaic pattern, the exposure time is not uniform across the field. This was chosen to ensure accurate registration of the full mosaic using stars in the overlapping regions to yield a good astrometric base for future proper motion studies. These observations used the INTRAMODULEX dither pattern, with four primary dithers, the BRIGHT1 (NGROUPS = 6, NINT = 1) readout pattern and a total exposure time of 515 seconds per visit. This readout pattern was selected to maximise the dynamic range. As the Trapezium Cluster contains a significant number of bright, massive stars, these sources will inevitably be saturated. However, maximising the dynamic range ensures a solid overlap between JWST and existing ground- and space-based photometry for intermediate-bright sources, in order to bootstrap-calibrate the faintest sources in the JWST data. For the remaining five pairs of filters, a \(5\times 2\) mosaic covers the same region but with only marginal overlap in rows, allowing for a more efficient use of observing time. These observations used the INTRAMODULEX dither pattern with six primary dithers, the SHALLOW2 (NGROUPS = 3, NINT = 1) readout pattern to reduce data volumes and a total exposure time of 773 seconds per visit. For further details on the observations, JWST and its instruments, and the Trapezium Cluster, we direct the reader to Paper I: Observations & Overview (McCaughrean & Pearson 2023).
### Data reduction
To reduce the observations, we retrieved the stage 0 data products from the Barbara A. Mikulski Archive for Space Telescopes (MAST) and re-ran the stage 1, 2, and 3 reduction steps using a custom version of the 1.11.3 pipeline and Calibration Reference Data System mapping jwst_1100.pmap. Stage 1 was run using the optional step argument det1.ramp_fit.suppress_one_group = False. Stage 2 was run using the default reduction pipeline. A custom version of the stage 3 pipeline was used to align the individual images to Gaia Data Release 3 (GDR3) [64, 65] and combine the images into the final full mosaics. A brief summary of this process is given below.
The WCS of visit 2 for (F140M, F162M, F182M, F187N, F212N, F277W, F300M, F335M and F360M) and visit 7 for (F115W and F444W) were found to be offset by \(\sim\)15 arcseconds. This was corrected by manually adding an offset to the WCS data stored in the data model in the ASDF tree in the FITS header of each _cal.fits file. This was an approximate correction that does not take into account distortion effects, but it significantly reduced the search radius needed for later fine alignment.
We first aligned the F470N data to GDR3, as this filter had the largest overlap between the faintest Gaia sources and unsaturated JWST sources. We compiled an absolute reference catalog of \(\sim 650\) high quality GDR3 sources that excluded flagged binaries, close pairs, extended galaxies and knots of nebulosity; the latter being a large source of contamination for H ii regions such as the Trapezium Cluster. As this catalogue forms the basis of the alignment, care should be taken to remove spurious sources in order to achieve an accurate registration.
For each of the stage 2 _cal.fits images we compiled an individual source catalogue. The x, y coordinates of the centre of the corresponding GDR3 sources were determined using a non-pipeline recentring routine. Each source was also weighted depending on the quality of the fit and whether it was found to be saturated in the _cal.fits data. The stage 3 TweakReg routine was then run on each of the _cal.fits files individually. The absolute reference catalogue was passed to the TweakReg routine using the tweakreg.abs_refcat = path_to_file step argument. The source catalogues were saved as .ecsv files and were passed to the TweakReg routine by updating the association (asn) file with the file path. This process was repeated for each _cal.fits file individually, as the pipeline defaults to expanding the absolute reference catalogue, which causes alignment errors. The individually aligned files were then resampled into a full combined mosaic using the step arguments: tweakreg.skip = True, skymatch.skip = True, resample.fillval = 'nan'.
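The step arguments quoted above can also be passed to the pipeline programmatically. The following sketch assumes the Python interface of the jwst package (version 1.11.3, as used here); the file and association names are placeholders, and keyword spellings should be verified against the installed pipeline version.

```python
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline

# Stage 1: keep single-group ramps so bright Trapezium sources are not discarded
Detector1Pipeline.call("jw01256_example_uncal.fits",
                       steps={"ramp_fit": {"suppress_one_group": False}},
                       save_results=True)

# Stage 2: default calibration to *_cal.fits
Image2Pipeline.call("jw01256_example_rate.fits", save_results=True)

# Stage 3, per-exposure pass: align one _cal.fits to the curated reference catalogue
Image3Pipeline.call("single_cal_asn.json",
                    steps={"tweakreg": {"abs_refcat": "f470n_refcat.ecsv"}},
                    save_results=True)

# Stage 3, mosaic pass: combine the pre-aligned frames without re-fitting the WCS
Image3Pipeline.call("full_mosaic_asn.json",
                    steps={"tweakreg": {"skip": True},
                           "skymatch": {"skip": True},
                           "resample": {"fillval": "nan"}},
                    save_results=True)
```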
From this F470N mosaic a new absolute reference catalogue of \(\sim\)1500 sources was constructed. The F470N absolute reference catalogue had significantly more overlap with non-saturated sources than GDR3 for the remaining filters, which improved the alignment. This catalogue was used to repeat the above process for the remaining 11 filters, aligning the individual _cal.fits files to the F470N absolute reference catalogue and then combining and resampling the full mosaics.
### Source detection
Sources were detected in the level 3 mosaics produced by stage 3 of the pipeline. First, the two-dimensional background of each image was estimated and subtracted using the DAOPHOT MMM algorithm as implemented in Astropy [66, 67], with a \(30\times 30\) pixel box and a \(5\times 5\) pixel filter. We used the MMMBackground algorithm to divide the input data into a grid of \(30\times 30\) pixel boxes and then used its mode estimator of the form (3 \(\times\) median) - (2 \(\times\) mean) to calculate the background level of each box, thus creating a low-resolution background map. This image was then median filtered to suppress local under- or over-estimations, with a window of size \(5\times 5\) pixels. The final background map was calculated by interpolating the low-resolution background map. Sources were then identified using DAOStarFinder with a threshold of \(2\sigma\) and a model PSF for each of the 12 JWST filters employed [68]. Sources that were detected in \(\geq 3\) filters were then added to a preliminary source catalogue, which was checked by eye against the images to remove spurious sources, such as bad pixels, knots of nebulosity, diffraction spikes, and persistence spots that had been erroneously flagged as point sources. The by-eye examination was also used to visually classify
other sources including proplyds, outflows, and galaxies. The final catalogue contains 3092 sources.
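A condensed version of the background-subtraction and detection step, using the photutils implementations named above, is sketched below. Note that DAOStarFinder approximates the PSF with a Gaussian kernel of a given FWHM, so the per-filter model PSFs used in practice are reduced here to a per-filter FWHM value; taking the 2σ threshold relative to the residual background RMS is likewise an assumption of this sketch.

```python
from astropy.stats import SigmaClip
from photutils.background import Background2D, MMMBackground
from photutils.detection import DAOStarFinder

def detect_point_sources(image, fwhm_pix):
    """Background-subtract one mosaic and return a table of point sources."""
    bkg = Background2D(image, box_size=(30, 30), filter_size=(5, 5),
                       bkg_estimator=MMMBackground(),
                       sigma_clip=SigmaClip(sigma=3.0))
    data = image - bkg.background
    finder = DAOStarFinder(threshold=2.0 * bkg.background_rms_median,
                           fwhm=fwhm_pix)
    return finder(data)
```

Sources returned for each filter would then be cross-matched, keeping only those detected in three or more filters, before the by-eye cleaning described above.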
### Aperture photometry
Aperture photometry was performed using Photutils [66], a package of Astropy [67]. We used the aperture_photometry routine to obtain fluxes for all of the sources in our catalogue, using apertures of 2.5 and 4.5 pixels radius for the sources, while the background was measured in an annulus with inner and outer radii of 5 and 10 pixels, respectively, using a sigma-clipped median. The PIXAR_SR header keyword was used to convert from surface brightness (MJy sr\({}^{-1}\)) to point source flux (Jy) and then to Vega magnitudes using the zero-points provided by the Spanish Virtual Observatory (SVO) filter profile service [69]. To convert the aperture magnitudes to total magnitudes, we used the aperture corrections provided by the JWST reference files for the respective filter, interpolated to the corresponding aperture radius.
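The conversion chain from surface brightness to Vega magnitudes described above can be summarised as follows. This is a minimal sketch: the zero point (from the SVO filter profile service) and the multiplicative aperture correction (interpolated from the JWST reference files) are passed in as plain numbers, and error propagation is omitted.

```python
import numpy as np
from astropy.stats import SigmaClip
from photutils.aperture import (ApertureStats, CircularAnnulus,
                                CircularAperture, aperture_photometry)

def vega_magnitudes(image, positions, pixar_sr, zero_point_jy, ap_corr,
                    r_src=2.5, r_in=5.0, r_out=10.0):
    """Aperture photometry on an MJy/sr mosaic, returned as Vega magnitudes."""
    src_ap = CircularAperture(positions, r=r_src)
    bkg_ann = CircularAnnulus(positions, r_in=r_in, r_out=r_out)

    # sigma-clipped median background per pixel, scaled to the source aperture area
    bkg = ApertureStats(image, bkg_ann, sigma_clip=SigmaClip(sigma=3.0))
    phot = aperture_photometry(image, src_ap)
    net = np.asarray(phot["aperture_sum"]) - bkg.median * src_ap.area

    # MJy/sr * sr/pixel -> Jy, then aperture correction (>1) and Vega zero point
    flux_jy = net * pixar_sr * 1.0e6 * ap_corr
    return -2.5 * np.log10(flux_jy / zero_point_jy)
```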
### Extended sources
Extended sources were identified using aperture photometry and comparing the apparent magnitudes that are calculated with inner apertures of 2.5 and 4.5 pixels. Unresolved point sources will have the same apparent magnitude independent of the choice of inner aperture, whereas extended sources, such as background galaxies and nebular knots, will appear brighter with larger apertures. Sources where the median difference across all 12 filters between 2.5 and 4.5 pixel apertures was greater than 0.1 mag were classified as extended. Sources with a neighbour within 1 arcsec were excluded from this automated classification and checked manually. As well as galaxies, this selection has the potential to flag highly embedded objects, objects with resolved disks and outflows, unresolved binaries and objects in highly featured areas of gas and dust. For this reason our sample of young PMO candidates may not be fully complete, but will be a clean sample of reliable candidates.
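The dual-aperture test reduces to a median magnitude difference across the twelve filters, as sketched below with hypothetical array names; sources with a neighbour within 1 arcsec are routed to manual inspection rather than being classified automatically.

```python
import numpy as np

def flag_extended(mag_r25, mag_r45, nn_dist_arcsec, threshold=0.1):
    """Classify sources as 'point', 'extended', or 'manual' (close pairs).

    mag_r25, mag_r45 : (n_sources, n_filters) magnitudes from the 2.5- and
                       4.5-pixel radius apertures
    nn_dist_arcsec   : distance to each source's nearest catalogue neighbour
    """
    # extended sources are brighter (smaller magnitude) in the larger aperture
    median_diff = np.nanmedian(mag_r25 - mag_r45, axis=1)
    flags = np.where(median_diff > threshold, "extended", "point").astype("<U8")
    flags[nn_dist_arcsec < 1.0] = "manual"
    return flags
```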
### Evolutionary models
Throughout our analysis we have utilised the CBPD22 evolutionary models [10], which combine the atmospheric models from ATMO 2020 [28] with a new equation of state for dense hydrogen-helium mixtures [29]. In earlier models the equation of state was based on the so-called additive volume law [70, 71], which does not take into account the interactions between hydrogen and helium species. This updated equation of state takes these interactions into account and modifies the thermodynamic properties of the H/He mixture. This primarily affects the entropy profiles, which in turn alters the development of degeneracy and internal structure. The ATMO 2020 models use a 1D radiative-convective equilibrium code to generate three grids of atmospheric models, one using equilibrium chemistry (CEQ) and two using non-equilibrium chemistry (NEQ_weak & NEQ_strong). The non-equilibrium models use a _weak_ and a _strong_ scaling relation for the eddy diffusion coefficient with surface gravity, which alter the vertical mixing relationships. These models cover the planetary mass regime from
\((0.0004{\rm M}_{\odot}-0.015{\rm M}_{\odot})\). In order to cover the brown dwarf and stellar mass ranges \((0.01{\rm M}_{\odot}-1.4{\rm M}_{\odot})\) we have used the BHAC15 evolutionary models [8].
## 3 JuMBO Catalogue
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline Name & RA (deg) & DEC (deg) & M\_Pri & Av\_Pri & M\_Sec & Av\_Sec & Proj\_Sep & M\_Ter & Av\_Ter \\ \hline JuMBO 1 & 83.716375 & -5.374688 & 0.001 & 6.3 & 0.001 & 4.3 & 357.7 & - & - \\ JuMBO 2 & 83.718439 & -5.391585 & 0.002 & 16.4 & 0.002 & 13.1 & 114.7 & - & - \\ JuMBO 3 & 83.720854 & -5.379591 & 0.003 & 19.7 & 0.003 & 10.8 & 52.3 & - & - \\ JuMBO 4 & 83.727380 & -5.444921 & 0.002 & 23.7 & 0.001 & 10.6 & 324.4 & - & - \\ JuMBO 5 & 83.727997 & -5.389459 & 0.003 & 10 & 0.002 & 32.8 & 384.3 & - & - \\ JuMBO 6 & 83.734156 & -5.368803 & 0.003 & 46.6 & 0.003 & 56.5 & 70.2 & - & - \\ JuMBO 7 & 83.735012 & -5.387694 & 0.001 & 17.4 & 0.001 & 17.3 & 119 & - & - \\ JuMBO 8 & 83.736001 & -5.445662 & 0.002 & 21 & 0.002 & 15.9 & 101.2 & - & - \\ JuMBO 9 & 83.736884 & -5.332175 & 0.001 & 13.1 & 0.0007 & 8.8 & 211.5 & - & - \\ JuMBO 10 & 83.748149 & -5.445690 & 0.001 & 6.9 & 0.001 & 8.9 & 342.5 & - & - \\ JuMBO 11 & 83.753378 & -5.431788 & 0.0008 & 10.4 & 0.0007 & 15.9 & 192.2 & - & - \\ JuMBO 12 & 83.753580 & -5.354639 & 0.003 & 20.1 & 0.001 & 19.8 & 366.2 & - & - \\ JuMBO 13 & 83.760064 & -5.393619 & 0.001 & 20.5 & 0.001 & 26.5 & 192.6 & - & - \\ JuMBO 14 & 83.767052 & -5.406016 & 0.009 & 39.5 & 0.008 & 36 & 55.6 & - & - \\ JuMBO 15 & 83.768695 & -5.440258 & 0.003 & 39.8 & 0.002 & 26.5 & 329.8 & - & - \\ JuMBO 16 & 83.769429 & -5.415209 & 0.001 & 5.3 & 0.001 & 6.5 & 273.9 & - & - \\ JuMBO 17 & 83.775698 & -5.432976 & 0.001 & 24.5 & 0.0006 & 10.7 & 194.9 & - & - \\ JuMBO 18 & 83.779749 & -5.424113 & 0.003 & 11.7 & 0.002 & 6.6 & 150.6 & - & - \\ JuMBO 19 & 83.785686 & -5.345893 & 0.003 & 22.6 & 0.002 & 31.5 & 273.6 & - & - \\ JuMBO 20 & 83.786364 & -5.411568 & 0.003 & 19.1 & 0.002 & 11.3 & 149.4 & - & - \\ JuMBO 21 & 83.788762 & -5.398635 & 0.007 & 74.2 & 0.002 & 26.1 & 200.5 & - & - \\ JuMBO 22 & 83.801462 & -5.342754 & 0.004 & 51.6 & 0.003 & 29.4 & 127.4 & - & - \\ JuMBO 23 & 83.829058 & -5.446920 & 0.004 & 35.2 & 0.002 & 11.3 & 314.7 & - & - \\ JuMBO 24 & 83.831262 & -5.3934369 & 0.011 & 3.6 & 0.011 & 3.5 & 28 & - & - \\ JuMBO 25 & 83.836455 & -5.371124 & 0.005 & 14.2 & 0.004 & 16.4 & 46.1 & 0.004 & 6.1 \\ JuMBO 26 & 83.838007 & -5.366544 & 0.008 & 12.5 & 0.003 & 9.1 & 267.1 & - & - \\ JuMBO 27 & 83.846621 & -5.399533 & 0.009 & 2.4 & 0.002 & 2.8 & 333.1 & - & - \\ JuMBO 28 & 83.846940 & -5.392726 & 0.011 & 8.7 & 0.009 & 20.1 & 58.9 & - & - \\ JuMBO 29 & 83.847252 & -5.346677 & 0.012 & 11.9 & 0.003 & 14.4 & 135 & - & - \\ JuMBO 30 & 83.848540 & -5.405963 & 0.005 & 33.1 & 0.002 & 2.2 & 374.1 & - & - \\ JuMBO 31 & 83.856732 & -5.387897 & 0.007 & 12.8 & 0.003 & 15.2 & 206.7 & - & - \\ JuMBO 32 & 83.860453 & -5.388966 & 0.004 & 14.4 & 0.003 & 11.9 & 118 & - & - \\ JuMBO 33 & 83.863086 & -5.388234 & 0.004 & 17.8 & 0.004 & 23.1 & 73.7 & - & - \\ JuMBO 34 & 83.867221 & -5.388611 & 0.005 & 15.4 & 0.005 & 13.9 & 66.4 & - & - \\ JuMBO 35 & 83.868427 & -5.390019 & 0.004 & 10.1 & 0.003 & 10.3 & 84.5 & - & - \\ JuMBO 36 & 83.878803 & -5.340274 & 0.013 & 32.3 & 0.004 & 36 & 363 & - & - \\ JuMBO 37 & 83.882254 & -5.330745 & 0.003 & 18.3 & 0.002 & 32.2 & 317.6 & - & - \\ JuMBO 38 & 83.883267 & -5.351932 & 0.004 & 27.8 & 0.002 & 24.4 & 213.6 & - & - \\ JuMBO 39 & 83.886789 & -5.372932 & 0.004 & 41.9 & 0.002 & 32.9 & 251 & - & - \\ JuMBO 40 & 83.886856 & -5.364031 & 0.005 & 18.1 & 0.005 & 23 & 164.3 & - & - \\ JuMBO 41 & 83.887251 & -5.375283 & 0.011 & 31.7 & 0.0008 & 17.2 & 287.2 & - & - \\ JuMBO 42 & 83.897548 & -5.333713 & 0.003 & 17.8 & 0.0007 & 15.2 & 123.3 & 0.0007 & 
10.8 \\ \hline \end{tabular}
\end{table}
Table 1: A short summary of the key JuMBO properties. All masses are in units of \(M_{\odot}\), and projected separations are given in au. For an extended version of this table that includes photometry, see the supplementary catalogue.
## 4 Data Availability
The data presented in this paper were obtained with the Near Infrared Camera (NIRCam) onboard the NASA/ESA/CSA James Webb Space Telescope (JWST), as part of Cycle 1 GTO programme 1256, (P.I. M. McCaughrean). They are available on the Barbara A. Mikulski Archive for Space Telescopes (MAST): [http://dx.doi.org/10.17909/vjys-x251](http://dx.doi.org/10.17909/vjys-x251).
## 5 Acknowledgements
SGP acknowledges support through the ESA research fellowship programme. The time used to make these JWST observations come from the Guaranteed Time Observation allocation made to MJM upon selection as one of two ESA Interdisciplinary Scientists on the JWST Science Working Group (SWG) in response to NASA AO-01-OSS-05 issued in 2001. SGP would like to thank Victor See for helpful discussions and Katja Fahrion for valuable insights on the JWST calibration pipeline. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of the Spanish Virtual Observatory ([https://svo.cab.inta-csic.es](https://svo.cab.inta-csic.es)) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. This work made use of Astropy:2 a community-developed core Python package and an ecosystem of tools and resources for astronomy [67]. This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources [66].
Footnote 2: [http://www.astropy.org](http://www.astropy.org)
|
2302.01020 | Meta Learning in Decentralized Neural Networks: Towards More General AI | Meta-learning usually refers to a learning algorithm that learns from other
learning algorithms. The problem of uncertainty in the predictions of neural
networks shows that the world is only partially predictable and a learned
neural network cannot generalize to its ever-changing surrounding environments.
Therefore, the question is how a predictive model can represent multiple
predictions simultaneously. We aim to provide a fundamental understanding of
learning to learn in the contents of Decentralized Neural Networks
(Decentralized NNs) and we believe this is one of the most important questions
and prerequisites to building an autonomous intelligence machine. To this end,
we shall demonstrate several pieces of evidence for tackling the problems above
with Meta Learning in Decentralized NNs. In particular, we will present three
different approaches to building such a decentralized learning system: (1)
learning from many replica neural networks, (2) building the hierarchy of
neural networks for different functions, and (3) leveraging different modality
experts to learn cross-modal representations. | Yuwei Sun | 2023-02-02T11:15:07Z | http://arxiv.org/abs/2302.01020v2 | # Meta Learning in Decentralized Neural Networks:
###### Abstract
Meta-learning usually refers to a learning algorithm that learns from other learning algorithms. The problem of uncertainty in the predictions of neural networks shows that the world is only partially predictable and a learned neural network cannot generalize to its ever-changing surrounding environments. Therefore, the question is how a predictive model can represent multiple predictions simultaneously. We aim to provide a fundamental understanding of learning to learn in the contents of Decentralized Neural Networks (Decentralized NNs) and we believe this is one of the most important questions and prerequisites to building an autonomous intelligence machine. To this end, we shall demonstrate several pieces of evidence for tackling the problems above with Meta Learning in Decentralized NNs. In particular, we will present three different approaches to building such a decentralized learning system: (1) learning from many replica neural networks, (2) building the hierarchy of neural networks for different functions, and (3) leveraging different modality experts to learn cross-modal representations.
## Progress to Date
Common sense is not just facts but a collection of models of the world. The global workspace theory [1] demonstrated that in the human brain, multiple neural network models cooperate and compete in solving problems via a shared feature space for common knowledge sharing, which is called the global workspace (GW). Within such a learning framework, different kinds of metadata about individual neural networks, such as measured performance and learned representations, can be used to learn, select, or combine different learning algorithms to efficiently solve a new task. The learned knowledge or representations from different neural network areas are leveraged for reasoning and planning. Therefore, we term this research direction Meta Learning in Decentralized Neural Networks, which studies how a meta agent can solve novel tasks by observing and leveraging the world models built by these individual neural networks. We present three different approaches to building such a decentralized learning system: (1) learning from many replica neural networks, (2) building the hierarchy of neural networks, and (3) leveraging different modality experts.
## Learning from Many Replica Neural Networks
The proliferation of AI applications is reshaping the contours of the future knowledge graph of neural networks. Decentralized NNs is the study of knowledge transfer from different individual neural networks trained on separate local tasks to a global model. In a learning system comprising many replica neural networks with similar architecture and functions, the goal is to learn a global model that can generalize to unseen tasks without large-scale training [2]. In particular, we studied two practical problems in Decentralized NNs, i.e., learning with non-independent and identically distributed (non-iid) data and multi-domain data.
Notably, non-iid refers to the situation where data samples across local models are not drawn from the same distribution, which hinders knowledge transfer between local models. To tackle the non-iid problem, we proposed Segmented-Federated Learning (Segmented-FL) [2], which employs periodic local model performance evaluation and learning group segmentation to bring together neural networks training over similar data distributions. Then, for each group, we train a different global model by transferring knowledge from the local models in the group. The global model can only passively observe the local model performance without access to the local data. We showed that the proposed method achieved better performance in tackling non-iid data of intrusion detection tasks compared to traditional federated learning [10].
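As a rough illustration of the grouping-and-aggregation idea (not the exact Segmented-FL algorithm of [2], whose segmentation rule is only summarised above), one round might look like the following, where local models are segmented by an evaluation score and each group is aggregated by plain federated averaging:

```python
import numpy as np

def segmented_round(local_weights, local_scores, n_groups=2):
    """One illustrative round: segment local models, then average within groups.

    local_weights : list of flattened parameter vectors, one per local model
    local_scores  : per-model evaluation scores used for segmentation
    """
    order = np.argsort(local_scores)
    groups = np.array_split(order, n_groups)   # crude score-based segmentation
    group_models = [np.mean([local_weights[i] for i in g], axis=0) for g in groups]
    return groups, group_models
```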
On the other hand, multi-domain refers to the situation where data samples across local models come from different domains with domain-specific features. For example, an autonomous vehicle that learns to drive in a new city might leverage the driving data of other cities learned by different vehicles. Since different cities have different street views and weather conditions, it would be difficult to directly learn a new model based on the knowledge of the models trained on multi-domain data. This problem is closely related to multi-source domain adaptation, which studies the distribution shift in features inherent to specific domains that brings in negative transfer degrading a model's generality to unseen tasks. To this end, we proposed a new domain adaptation method that reduces feature discrepancy between local models and improves the global model's generality to unseen tasks [22]. We devised two components, an embedding matching module and a global feature disentangler, to align the learned features of different local models such that the global model can learn better-refined domain-invariant features. Moreover, we found that a simple voting strategy that produces multiple predictions and generates pseudo-labels based on the consensus of local models could further improve the global model performance. The results of both image classification tasks and a natural language sentiment classification task showed that the proposed domain adaptation method could greatly improve the transfer learning of local models.
### Building the Hierarchy of Neural Networks
Hierarchical neural networks consist of multiple neural networks concreted in a form of an acyclic graph. An early theory of the global workspace theory (GWT) [1] refers to multiple neural network models cooperating and competing in solving problems via a shared feature space for common knowledge sharing. Built upon the GWT, the conscious prior theory [1] demonstrated the sparse factor graphs in space of high-level semantic variables and simple mapping between high-level semantic variables. To study the hierarchy of neural networks, we proposed homogeneous learning for self-attention decentralized deep learning [22]. In particular, we devised a self-attention mechanism where a local model is selected as the meta for each training round and leverages reinforcement learning to recursively update a globally shared learning policy. The meta observes the states of local models and its surrounding environment, computing the expected rewards for taking different actions based on the observation. As mentioned in [1], with a model of external reality and an agent's possible actions, it can try out various alternatives and conclude which is the best action using the knowledge of past events. The goal is to learn an optimized learning policy such that the Decentralized NNs systems can quickly solve a problem by planning and leveraging different local models' knowledge more efficiently. The results showed that the learning of a learning policy greatly reduced the total training time for an image classification task by 50.8%.
### Leveraging Different Modality Experts
Information in the real world usually comes in different modalities. The degeneracy [23] in neural structure refers to the fact that any single function can be carried out by more than one configuration of neural signals and that different neural clusters participate in several different functions. Intelligence systems build models of the world with different modalities, where spatial concepts are generated via modality models. We demonstrate cross-modal learning in multimodal models [1]. Notably, we studied the Visual Question Answering (VQA) problem based on self-supervised learning [24]. By leveraging the contrastive learning of different model components, we aimed to align the modality representations, encouraging the similarity of the relevant component outputs while discouraging the irrelevant outputs, such that the learning framework learns better-refined cross-modal representations for unseen VQA tasks based on the knowledge learned from different VQA tasks of local models.
## Anticipated Progress
The vast majority of current neural networks lack sophisticated logical reasoning and action-planning modules. We aim to study a neuro-symbolic approach to improving the explainability and robustness of knowledge sharing in the Global Workspace (GW) of Decentralized NNs. Furthermore, we consider there are several necessary components for building such a neuro-symbolic learning framework, i.e., causal models and probabilistic Bayesian neural networks [10], and associative memory like Hopfield Network [12]. In particular, we aim to tackle the tasks of visual grounding such as visual question answering and image captioning. In this regard, we will revisit and reintegrate the classical symbolic methods into the decentralized neural networks theory to improve the hierarchical reasoning of the meta agent for leveraging different modality expert models. The anticipated contribution is establishing a new learning framework to perform efficient causal discovery and inferences based on decentralized neural networks for improving generality in visual language modeling.
|
2307.15853 | Improving Realistic Worst-Case Performance of NVCiM DNN Accelerators
through Training with Right-Censored Gaussian Noise | Compute-in-Memory (CiM), built upon non-volatile memory (NVM) devices, is
promising for accelerating deep neural networks (DNNs) owing to its in-situ
data processing capability and superior energy efficiency. Unfortunately, the
well-trained model parameters, after being mapped to NVM devices, can often
exhibit large deviations from their intended values due to device variations,
resulting in notable performance degradation in these CiM-based DNN
accelerators. There exists a long list of solutions to address this issue.
However, they mainly focus on improving the mean performance of CiM DNN
accelerators. How to guarantee the worst-case performance under the impact of
device variations, which is crucial for many safety-critical applications such
as self-driving cars, has been far less explored. In this work, we propose to
use the k-th percentile performance (KPP) to capture the realistic worst-case
performance of DNN models executing on CiM accelerators. Through a formal
analysis of the properties of KPP and the noise injection-based DNN training,
we demonstrate that injecting a novel right-censored Gaussian noise, as opposed
to the conventional Gaussian noise, significantly improves the KPP of DNNs. We
further propose an automated method to determine the optimal hyperparameters
for injecting this right-censored Gaussian noise during the training process.
Our method achieves up to a 26% improvement in KPP compared to the
state-of-the-art methods employed to enhance DNN robustness under the impact of
device variations. | Zheyu Yan, Yifan Qin, Wujie Wen, Xiaobo Sharon Hu, Yiyu Shi | 2023-07-29T01:06:37Z | http://arxiv.org/abs/2307.15853v1 | Improving Realistic Worst-Case Performance of NVCiM DNN Accelerators through Training with Right-Censored Gaussian Noise
###### Abstract
Compute-in-Memory (CiM), built upon non-volatile memory (NVM) devices, is promising for accelerating deep neural networks (DNNs) owing to its in-situ data processing capability and superior energy efficiency. Unfortunately, the well-trained model parameters, after being mapped to NVM devices, can often exhibit large deviations from their intended values due to device variations, resulting in notable performance degradation in these CiM-based DNN accelerators. There exists a long list of solutions to address this issue. However, they mainly focus on improving the mean performance of CiM DNN accelerators. How to guarantee the worst-case performance under the impact of device variations, which is crucial for many safety-critical applications such as self-driving cars, has been far less explored. In this work, we propose to use the k-th percentile performance (KPP) to capture the realistic worst-case performance of DNN models executing on CiM accelerators. Through a formal analysis of the properties of KPP and the noise injection-based DNN training, we demonstrate that injecting a novel right-censored Gaussian noise, as opposed to the conventional Gaussian noise, significantly improves the KPP of DNNs. We further propose an automated method to determine the optimal hyperparameters for injecting this right-censored Gaussian noise during the training process. Our method achieves up to a 26% improvement in KPP compared to the state-of-the-art methods employed to enhance DNN robustness under the impact of device variations.
## I Introduction
Deep neural networks (DNNs) have demonstrated remarkable advancements, surpassing human performance in a wide range of perception tasks. The recent emergence of deep learning-based generation models, such as DALL-E [1] and the GPT family [2], has further reshaped our workflows. To date, the trend of incorporating on-device intelligence across edge platforms such as mobile phones, watches, and cars has become evident [3, 4, 5], transforming every walk of life. However, the limited computational resources and strict power constraints of these edge platforms present challenges. These circumstances necessitate more energy-efficient DNN hardware beyond the general-purpose CPUs and GPUs.
Compute-in-Memory (CiM) DNN accelerators [6], on the other hand, are competitive alternatives to replace CPUs and GPUs in accelerating DNN inference on edge. In contrast to the traditional von Neumann architecture platforms, which involve frequent data movements between memory and computation components, CiM DNN accelerators reduce energy consumption by enabling in-situ computation directly at the storage location of weight data. Moreover, emerging non-volatile memory (NVM) devices, such as ferroelectric field-effect transistors (FeFETs) and resistive random-access memories (RRAMs), allow NVCiM accelerators to achieve higher memory density and improved energy efficiency compared to conventional MOSFET-based designs [4]. However, the reliability of NVM devices can be a concern due to device-to-device (D2D) variations incurred by fabrication defects and cycle-to-cycle (C2C) variations due to thermal, radiation, and other physical impacts. These variations can have a notable negative impact on NVCiM DNN accelerators' inference accuracy, as they may introduce significant differences between the weight values read out from NVM devices during inference and their intended values.
Various strategies have been proposed to mitigate the impact of device variations. These strategies can be broadly categorized into two categories: reducing device value deviations and enhancing the robustness of DNNs in the presence of device variations. Device value deviations can be reduced through methods such as write-verify [7], which iteratively applies programming pulses to reduce device value deviation from the desired value after each write. On the other hand, there exist various approaches that enhance DNN robustness in the presence of device variations. One direction is to identify novel DNN topologies that are more robust in the presence of device variations. This can be achieved through techniques such as neural architecture search [8, 9] or by leveraging Bayesian Neural Networks [10] which use variational training to improve DNN robustness. Another line of methods focuses on training more robust DNN weights using noise injection training [3, 11, 12]. In this approach, randomly sampled noise is injected into DNN weights during the forward and backpropagation phases of DNN training. After the gradient is calculated through backpropagation, the noise is then removed and the weight value without noise is updated by gradient descent. By simulating a noisy inference environment, the noise injection training methods significantly enhance the robustness of DNN models across various DNN topologies.
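A minimal PyTorch sketch of one such noise-injection training step is shown below. The relative noise scale (a fraction of each tensor's maximum absolute weight) is an assumption made for illustration; the key point is that the noise is present during the forward and backward passes but is removed before the clean weights are updated.

```python
import torch

def noise_injection_step(model, loss_fn, x, y, optimizer, sigma=0.02):
    """One training step with weight-noise injection and a clean-weight update."""
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = torch.randn_like(p) * sigma * p.abs().max()  # relative scale (assumption)
            p.add_(n)
            noises.append(n)

    loss = loss_fn(model(x), y)        # forward/backward with noisy weights
    optimizer.zero_grad()
    loss.backward()

    with torch.no_grad():              # strip the noise, keep the gradients
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)
    optimizer.step()                   # gradient descent on the noise-free weights
    return loss.item()
```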
However, all aforementioned methods merely focus on improving the average accuracy of CiM DNN accelerators in the presence of device variations, which may be acceptable for non-safety-critical applications. In safety-critical applications like airplanes, autonomous driving, and medical devices, even a prediction failure that happens with an extremely low probability (namely, the worst-case scenario) is not affordable because it may result in loss of life, as has been demonstrated in the recent work [13]. The worst-case performance of a DNN model in the presence of device variations can be determined by carefully calibrating the perturbation injected on each weight value to reach the lowest possible DNN performance. Recent work [13] has demonstrated that even a weight value perturbation of less than 3% can degrade a DNN model's performance to the level of random guessing. However, the likelihood of such a worst-case scenario occurring is extremely low (\(<10^{-100}\)), which can be safely ignored in common natural environments [13]. Consequently, a more suitable metric to depict the realistic worst-case performance of DNNs in the presence of device variations is needed.
To capture realistic worst-case scenarios precisely in the presence of device variations, in this work, we propose to use the k-th percentile performance (KPP) metric, instead of the average or absolute worst-case performance. With a predetermined \(K\) value, the KPP metric identifies a performance score such that the model's performance is greater than this score in all but k% of cases.1 For example, if a model has a KPP of 0.912 when \(K=1\), the likelihood of the model's performance being greater than 0.912 is 99%, i.e., it falls below 0.912 in only 1% of cases. When a realistically small \(K\) value is given, such as \(K=1\), KPP can capture a realistic worst-case performance of a DNN model because it (1) guarantees a lower bound of the model's performance and (2) filters out extreme corner cases.
Given the same \(K\) value, a higher KPP for a DNN model is desirable as it signifies that the model can consistently deliver high performance within a certain probability threshold.
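Concretely, KPP can be estimated from Monte Carlo samples of a model's accuracy under randomly drawn device-variation instances; the sketch below is a minimal illustration using a plain percentile over such samples.

```python
import numpy as np

def kth_percentile_performance(mc_accuracies, k=1):
    """K-th percentile performance (KPP) from Monte Carlo accuracy samples.

    With k = 1, the returned value is exceeded in 99% of the sampled
    device-variation instances.
    """
    return float(np.percentile(np.asarray(mc_accuracies), k))

# Example: accuracies from 10,000 simulated deployments of the same trained model
# kpp = kth_percentile_performance(accuracies, k=1)   # e.g. 0.912
```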
Since improving KPP guarantees higher realistic worst-case performance of a DNN model, we revisited the state-of-the-art (SOTA) Gaussian noise injection training method to analyze its effectiveness in improving KPP. Gaussian noise injection training is widely used simply because it injects noises that statistically mirror the noises in the inference environment. Although it is empirically valid to state that a precise simulation of the inference environment during training would yield optimal results, there is no theoretical proof for it. Thus, to prove the effectiveness of Gaussian noise injection training, we thoroughly analyze the relationship between KPP of a DNN model and the properties of DNN weights to show what kind of models would provide higher KPP. Surprisingly, our analysis shows that Gaussian noise injection training is far from optimal in generating robust DNN models in the presence of device variations.
Specifically, our key observation is that achieving a higher KPP in the presence of device variation needs to satisfy the following three requirements simultaneously: (1) higher DNN accuracy under no device variation; (2) smaller \(2^{nd}\) derivatives _w.r.t._ DNN weights, and (3) larger \(1^{st}\) derivatives _w.r.t._ DNN weights. However, our analysis (see Section III-C) shows that the conventional Gaussian noise-injected training approaches can only fulfill the first two requirements, but not the third, making them ineffective for KPP improvement. Specifically, the third requirement necessitates distributions with non-zero expected values, a condition that the Gaussian distribution fails to satisfy.
To this end, we develop TRICE, a method that injects adaptively optimized right-censored Gaussian (RC-Gaussian) noise in the training process to address all three aforementioned requirements simultaneously; the name of the method is derived from Training with Right-Censored Gaussian Noise (TRICE). TRICE differs from existing approaches in several aspects: (1) rather than using the general Gaussian noise, TRICE uses RC-Gaussian noise, which exhibits a unique feature: for all sampled values greater than a designated threshold, the sample value is fixed (_i.e._, censored) to the threshold. This results in a negative expected value for the injected noise, thus meeting the third requirement (a minimal sampling sketch of this noise is given after the contribution list below). (2) TRICE requires additional hyperparameter tuning, _e.g._, via a dedicated adaptive training method that identifies the optimal noise hyperparameters within a single run of DNN training, which is different from the conventional Gaussian noise-based approaches that use the same noise hyperparameters in training and inference. The main contributions of this work are multi-fold:
* We analytically derive the relationship between KPP and the gradients of weights and demonstrate how noise injection training can improve KPP.
* We propose to inject right-censored Gaussian noise during DNN training to improve the KPP in the presence of device variations. An adaptive training method that can automatically identify optimal noise hyperparameters in the training process is developed accordingly.
* Extensive experimental results show that TRICE improves the \(1^{st}\) percentile performance (in terms of top-1 accuracy) in the presence of device variations by up to 15.42%, 25.09%, and 26.01% in LeNet for MNIST, VGG-8 for CIFAR-10, and ResNet-18 for CIFAR-10, respectively, compared with SOTA baselines.
* We also demonstrate the scalability of our proposed TRICE. That is, in addition to evaluations on uniform RRAM devices, TRICE also improves the \(1^{st}\) percentile accuracy by up to 15.61% and 12.34% in two different types of FeFET devices, respectively.
* To the best of our knowledge, this is the first work that advocates improving KPP in NVCiM DNN accelerators with device variations specifically for safety-critical applications.
## II Related Works
### _Crossbar-based Computing Engine_
The computation engine driving NVCiM DNN accelerators is the crossbar array structure, which can perform matrix-vector multiplication in a single clock cycle. Crossbar arrays store matrix values (_e.g._, weights in DNNs) at the intersection of vertical and horizontal lines using NVM devices (_e.g._, RRAMs and FeFETs), while vector values (_e.g._, inputs for DNNs) are fed through horizontal data lines (word lines) in the form of voltage. The output is then transmitted through vertical lines (bit lines) in the form of current. While the crossbar array performs calculations in the analog domain according to Kirchhoff's laws, peripheral digital circuits are needed for other key DNN operations such as shift & add, pooling, and non-linear activation. Additional buffers are also needed to store intermediate data. Digital-to-analog and analog-to-digital conversions are also needed between components in different domains.
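To make the analog dataflow concrete, the following minimal sketch (our own illustration, not the accelerator's actual circuit model; array sizes and values are assumptions) treats a crossbar as an ideal conductance matrix performing a matrix-vector product and shows how a conductance perturbation propagates to the outputs:

```
import numpy as np

# Idealized crossbar: weights stored as conductances G (one per cross point),
# inputs applied as word-line voltages v; each bit line sums currents I_j = sum_i v_i * G_ij.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1e-4, size=(128, 64))   # 128 word lines x 64 bit lines (assumed sizes)
v = rng.uniform(0.0, 0.2, size=128)          # input voltages on the word lines

I_out = v @ G                                # analog matrix-vector product (Ohm's law + Kirchhoff's current law)

# Device variations perturb the programmed conductances and hence the bit-line outputs.
G_noisy = G + rng.normal(0.0, 1e-6, size=G.shape)
print(np.max(np.abs(v @ G_noisy - I_out)))
```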
Crossbar arrays based on NVM devices are subject to a number of sources of variations and noise, including spatial and temporal variations. Spatial variations arise from defects that occur during fabrication and can be both local and global in nature. In addition, NVM devices are susceptible to temporal variations that result from stochastic fluctuations in the device material. These variations in conductance can occur when the device is programmed at different times. Unlike spatial variations, temporal variations are usually independent of the device but may depend on the programmed value [14]. For the purpose of this study, we consider the non-idealities to be uncorrelated among the NVM devices. However, our framework can be adapted to account for other sources of variations with appropriate modifications.
### _Evaluating DNN Robustness in the Presence of Device Variations_
Most existing research uses Monte Carlo (MC) simulations to assess the robustness of NVCiM DNN accelerators in the presence of device variations. This process typically involves extracting a device variation model and a circuit model from physical measurements. The DNN to be evaluated is then mapped onto the circuit model, and the desired value for each NVM device is calculated. In each MC run, one instance of a non-ideal device is randomly sampled from the device variation model, and the actual conductance value of each NVM device is determined. The DNN performance (_e.g._, classification accuracy) in this non-ideal accelerator is then recorded. This process is repeated numerous times until the collected DNN performance distribution converges. Existing practices [11, 15] generally include around 300 MC runs. This number of MC runs is empirically sufficient according to the central limit theorem [8].
Fig. 1: Illustration of the NVCiM DNN accelerator architecture for (a) architecture overview and (b) crossbar (XBar) array. In a crossbar array, the input is fed horizontally and multiplied by weights stored in the NVM devices at each cross point. The multiplication results are summed up vertically and the sum serves as an output. The outputs are converted to the digital domain and further processed using digital units such as non-linear activation and pooling.
Only a few researchers focus on the worst-case scenarios of NVCiM DNN accelerators in the presence of device variations. A line of research [13, 16, 17] focuses on determining the worst-case performance by identifying weight perturbation patterns that cause the most significant decrease in DNN inference performance while still adhering to the physical bounds of device value deviations. One representative work [13] shows that DNN classification accuracy can drop to the level of random guessing when a perturbation of less than 3% is added to the weights. However, the likelihood of such a worst-case scenario occurring is lower than \(10^{-100}\), so it can be safely ignored in common natural environments [13]. Thus, such worst-case analyses are impractical for assessing the robustness of an NVCiM DNN accelerator.
Thus, in this work, we advocate using k-th percentile performance, a metric that is both practical and precise, for capturing the worst-case performances of a DNN model.
### _Addressing Device Variations_
Various approaches have been proposed to deal with the issue of device variations in NVCiM DNN accelerators. Here we briefly review the two most common types: enhancing DNN robustness and reducing device variations.
A common method used to enhance DNN robustness in the presence of device variations is variation-aware training [18, 3, 11, 12]. Also known as noise injection training, the method injects variation into the DNN weights during the training process, which yields a DNN model that is statistically robust in the presence of device variations. In each iteration, in addition to traditional gradient descent, an instance of variation is sampled from a variation distribution and added to the weights in the forward pass. In the backpropagation pass, the same noisy weights and noisy feature maps are used to calculate the gradients of the weights in a deterministic and noise-free manner. Once the gradients are collected, this variation is cleared and the variation-free weights are updated according to the previously collected gradients. The details of noise injection training are shown in Alg. 1. Another approach to training more robust DNN weights is CorrectNet [19], which uses a modified Lipschitz constant regularization during DNN training so that the regularized weights are less prone to the impact of device variations. Other approaches include designing more robust DNN architectures [10, 8, 3] and pruning [20].
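As a concrete illustration of the generic noise injection step described above (a minimal PyTorch sketch of our own, not the authors' implementation; the noise scale `sigma` is a placeholder), one training iteration can be written as follows, mirroring the structure of Alg. 1:

```
import torch

def noise_injection_step(model, x, target, optimizer, sigma,
                         loss_fn=torch.nn.CrossEntropyLoss()):
    """One training step that perturbs the weights only for the forward/backward pass."""
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = torch.randn_like(p) * sigma   # sample one variation instance
            p.add_(n)                         # temporarily apply it to the weights
            noises.append(n)

    loss = loss_fn(model(x), target)          # forward with noisy weights
    optimizer.zero_grad()
    loss.backward()                           # gradients w.r.t. the noisy weights

    with torch.no_grad():
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)                         # restore the clean weights
    optimizer.step()                          # update the clean weights with the collected gradients
    return loss.item()
```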
To reduce device variations induced device value deviation, write-verify [7, 21] is commonly used during the programming process. An NVM device is first programmed to an initial state using a pre-defined pulse pattern. Then the value of the device is read out to verify if its conductance falls within a certain margin from the desired value (_i.e._, if its value is precise). If not, an additional update pulse is applied, aiming to bring the device conductance closer to the desired one. This process is repeated until the difference between the value programmed into the device and the desired value is acceptable. This approach is highly effective in reducing the device value deviations, but the process typically requires a few iterations, which is time-consuming. There are also various circuit design efforts [22, 23] that try to mitigate the device variations.
## III Proposed Method
In this section, we introduce a novel variant of the noise injection training method designed to improve the k-th percentile performance (KPP) of a DNN model. Conventional noise injection training injects Gaussian noise during training simply because it mirrors the impact of device variations occurring in inference, but there is no theoretical proof that such a practice yields the most robust DNN models. We show through mathematical analysis that Gaussian noise injection training is far from optimal in improving KPP. Specifically, this section begins with a formal definition of KPP and an analysis of its relationship with DNN weights. Next, we analyze the noise injection training framework, identify the requirements for the noise injected during training, and show that Gaussian noise does not satisfy all of them.
Thus, we propose several candidate noise types and select right-censored Gaussian noise through experimentation. Moreover, we develop an adaptive training method that automatically determines the optimal hyperparameters for the right-censored Gaussian noise injection. The resulting framework is called Training with Right-Censored Gaussian NoisE (TRICE).
### _K-th Percentile Performance_
The KPP of a DNN model is derived from the k-th percentile of a distribution, which is the value \(z_{pk}\) that separates the lowest k% of the observations from the highest (100-k)% of the observations. Formally speaking, given a random variable \(Z\) following a distribution \(\mathcal{D}ist\), there exists a value \(z_{pk}\) such that a value \(z_{i}\) sampled from \(Z\) satisfies \(z_{i}\leq z_{pk}\) with probability k%. This is equivalent to:
\[k/100=\mathit{cdf}_{\mathcal{D}ist}(z_{pk}) \tag{1}\]
where \(\mathit{cdf}_{\mathcal{D}ist}\) is the cumulative distribution function of \(\mathcal{D}ist\).
In the context of a DNN model's performance in the presence of device variations, the KPP represents the minimum performance level that the model achieves with a probability of at least (100-k)%. For example, as shown in Fig. 2, the \(5^{th}\) percentile performance in terms of top-1 accuracy (_i.e._, the k-th percentile accuracy with \(k=5\)) of this DNN model in the presence of device variations is 0.4623, which means that in 5% of the cases the DNN accuracy will be lower than 0.4623, and in 95% of the cases it is greater than 0.4623.
KPP of a DNN model can be easily evaluated through Monte-Carlo simulation. Specifically, with \(N_{sample}\) Monte Carlo runs, \(N_{sample}\) performance values are collected. These performance values are then sorted in ascending order and the \((N_{sample}\times k\%)^{th}\) element of this sorted array is the estimation of KPP. The overall process is shown in Algorithm 2.
```
1://INPUT: DNN topology \(\mathcal{M}\), DNN weight \(\mathbf{w}\), noise distribution \(\mathcal{D}ist\), # of training epochs \(ep\), dataset \(\mathbf{D}\), learning rate \(\alpha\);
2:for\((i=0;\)\(i<ep;\)\(i++)\)do
3:for\(x\), \(GT\) in \(\mathbf{D}\)do
4: Sample \(\Delta\mathbf{w}_{i}\) from \(Dist\);
5:\(loss=\text{CrossEntropyLoss}(\mathcal{M}(\mathbf{w}+\Delta\mathbf{w}_{i},x),\,GT)\);
6:\(\mathbf{w}=\mathbf{w}-\alpha\frac{\partial loss}{\partial(\mathbf{w}+\Delta\mathbf{w}_{i})}\)
7:endfor
8:endfor
```
**Algorithm 1** NoiseTrain (\(\mathcal{M}\), \(\mathbf{w}\), \(\mathcal{D}ist\), \(ep\), \(\mathbf{D}\), \(\alpha\))
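For concreteness, the Monte-Carlo KPP estimation procedure described above (referred to as Algorithm 2 in the text) can be sketched as follows; `evaluate_once` is a placeholder of our own that returns the model's performance under one sampled instance of device variations:

```
import numpy as np

def estimate_kpp(evaluate_once, k=1.0, n_samples=10000, seed=0):
    """Estimate the k-th percentile performance (KPP) by Monte Carlo sampling.

    evaluate_once(rng) -> float: performance of the DNN under one randomly
    sampled instance of device variations (e.g., top-1 accuracy).
    """
    rng = np.random.default_rng(seed)
    scores = np.sort([evaluate_once(rng) for _ in range(n_samples)])
    index = max(int(n_samples * k / 100.0) - 1, 0)   # the (n_samples * k%)-th smallest value
    return scores[index]

# Toy usage: a made-up "accuracy" model where clean accuracy 0.95 is degraded by noise.
kpp = estimate_kpp(lambda rng: 0.95 - abs(rng.normal(0.0, 0.02)), k=1, n_samples=10000)
print(kpp)
```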
After establishing the definition of the KPP, we proceed to analyze how it relates to the trained weights of the DNN model. We use the loss function as the metric for assessing the performance throughout this analysis.
Given a neural network model \(\mathcal{M}\) and its trained weight vector \(\mathbf{w}\), the output \(\mathbf{out}\) of this model from the input \(\mathbf{x}\) can be described as \(\mathbf{out}=\mathcal{M}(\mathbf{w},\mathbf{x})\). Further given the ground truth label \(\mathbf{GT}\) and the loss function \(f\), its loss can be described as \(loss=f(\mathcal{M}(\mathbf{w},\mathbf{x}),\mathbf{GT})\). Because the values of \(\mathbf{x}\) and \(\mathbf{GT}\) are fixed when inferencing on a given dataset, the loss expression can be simplified as a function of \(\mathbf{w}\), _i.e._, \(loss=f(\mathbf{w})\).
Here we study the impact of perturbing one element \(w_{0}\) in the weight vector \(\mathbf{w}\). Specifically, because this weight value is subjected to the impact of device variations, it is perturbed to \(w_{0}+\Delta w\), where \(\Delta w\) is the device variation-induced perturbation. We can then apply Taylor expansions to the loss function _w.r.t_. the perturbed weight:
\[\begin{split} f(w_{0}+\Delta w)=& f(w_{0})+f^{ \prime}(w_{0})\Delta w+\frac{f^{\prime\prime}(w_{0})}{2}(\Delta w)^{2}+o(( \Delta w)^{3})\\ \approx& f(w_{0})+f^{\prime}(w_{0})\Delta w+\frac{f^ {\prime\prime}(w_{0})}{2}(\Delta w)^{2}\end{split} \tag{2}\]
We can observe in Eq. 2 that the loss function can be approximated by a quadratic function of \(\Delta w\). Given that the weight perturbation \(\Delta w\) follows the distribution of device variations (\(\Delta w\sim Dist\)), we can calculate the k-th percentile of the loss as follows:
First, let \(q=k/100\) be the probability corresponding to the k-th percentile, and let the unknown k-th percentile of the loss be \(loss_{q}\). According to the properties of quadratic functions, along with the fact that \(f^{\prime\prime}(w)\geq 0\)[24] and that \(loss_{q}\) is greater than the minimum value of Eq. 2, there exist two real numbers \(\Delta w_{1}\) and \(\Delta w_{2}\) with \(\Delta w_{1}<\Delta w_{2}\) such that \(f(w_{0}+\Delta w)<loss_{q}\) whenever \(\Delta w_{1}<\Delta w<\Delta w_{2}\).
By the definition of KPP and noting that a lower loss is better, \(q\) is the probability that \(f(w_{0}+\Delta w)\geq loss_{q}\), so \(1-q\) is the probability that \(\Delta w_{1}\leq\Delta w\leq\Delta w_{2}\). Recalling that the weight perturbation \(\Delta w\) follows the device variation distribution (\(\Delta w\sim\mathcal{D}ist\)), we have:
\[1-q=cdf_{\mathcal{D}ist}(\Delta w_{2})-cdf_{\mathcal{D}ist}(\Delta w_{1}) \tag{3}\]
where \(cdf_{\mathcal{D}ist}\) is the cumulative distribution function (CDF) of \(\mathcal{D}ist\). From the definitions of \(\Delta w_{1}\), \(\Delta w_{2}\), and \(loss_{q}\), we also know that:
\[\begin{split}\Delta w_{1}&=\frac{-f^{\prime}(w_{0})-\beta}{f^{\prime\prime}(w_{0})}\\ \Delta w_{2}&=\frac{-f^{\prime}(w_{0})+\beta}{f^{\prime\prime}(w_{0})}\\ \beta&=\sqrt{f^{\prime}(w_{0})^{2}-2f^{\prime\prime}(w_{0})(f(w_{0})-loss_{q})}\end{split} \tag{4}\]
Combining Eq. 3 and Eq. 4, we can get an analytical relationship between \(q\) and \(loss_{q}\) and thus can calculate \(loss_{q}\) given the device value deviation distribution \(\mathcal{D}ist\) and the trained model weight \(w_{0}\).
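As a numerical illustration (a sketch with toy values for \(f(w_{0})\), \(f^{\prime}(w_{0})\), \(f^{\prime\prime}(w_{0})\), and the device variation scale \(\sigma_{d}\), all our own assumptions, and assuming the Gaussian device variation model adopted below), the implicit relationship between \(q\) and \(loss_{q}\) given by Eq. 3 and Eq. 4 can be solved with a standard root finder:

```
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Toy local model of the loss around one weight: assumed values, for illustration only.
f0, f1, f2 = 0.10, 0.05, 2.0      # f(w0), f'(w0), f''(w0), with f''(w0) > 0
sigma_d, q = 0.1, 0.01            # device variation scale and q = k/100

def prob_below(loss_q):
    """P(loss < loss_q) from Eq. 3/4: the perturbation must fall in [dw1, dw2]."""
    beta = np.sqrt(max(f1**2 - 2.0 * f2 * (f0 - loss_q), 0.0))
    dw1, dw2 = (-f1 - beta) / f2, (-f1 + beta) / f2
    return norm.cdf(dw2, scale=sigma_d) - norm.cdf(dw1, scale=sigma_d)

# Find loss_q such that P(loss >= loss_q) = q, i.e., prob_below(loss_q) = 1 - q.
loss_min = f0 - f1**2 / (2.0 * f2)               # minimum of the quadratic approximation
loss_q = brentq(lambda L: prob_below(L) - (1.0 - q), loss_min + 1e-9, loss_min + 10.0)
print(loss_q)
```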
In this work, we target a device model in which the device value deviation follows a Gaussian distribution \(\mathcal{N}(0,\sigma_{d})\), whose CDF is:
\[cdf_{\mathcal{D}ist}(w)=\frac{1}{\sqrt{2\pi}\,\sigma_{d}}\int_{-\infty}^{w}e^{-t^{2}/(2\sigma_{d}^{2})}\,dt \tag{5}\]
Combining Eq. 3 and Eq. 4 and the first-order approximation of Eq. 5, we obtain:
\[loss_{q}=-\frac{f^{\prime}(w_{0})^{2}}{2f^{\prime\prime}(w_{0})}+f(w_{0})+ \frac{f^{\prime\prime}(w_{0})\pi q^{2}\sigma_{d}^{2}}{4} \tag{6}\]
Considering \(f^{\prime}(w_{0})\) as a variable, it is clear that \(loss_{q}\) is a quadratic function _w.r.t._ \(f^{\prime}(w_{0})\). Extensive research [24, 25] has shown that when using cross-entropy loss with softmax as the loss function, the second derivative of the loss _w.r.t._ the weights is positive, _i.e._, \(f^{\prime\prime}(w_{0})>0\). Thus, Eq. 6 reaches its maximum value when \(f^{\prime}(w_{0})=0\) and decreases as \(f^{\prime}(w_{0})\) diverges from \(0\). Therefore, from the first term of Eq. 6, obtaining a low enough \(loss_{q}\), and hence a high enough KPP, requires a smaller \(f^{\prime\prime}(w_{0})\) and an \(f^{\prime}(w_{0})\) with a larger absolute value. Similarly, from the second and third terms of Eq. 6, a smaller \(f(w_{0})\) and a smaller \(f^{\prime\prime}(w_{0})\) are required. Thus, to improve the KPP of a DNN model, the DNN training process needs to simultaneously minimize \(f(w_{0})\) and \(f^{\prime\prime}(w_{0})\), and maximize \(|f^{\prime}(w_{0})|\).
### _The Effect of Noise Injection Training_
According to the conclusion in Section III-B, the DNN training process needs to minimize \(f(w_{0})\) and \(f^{\prime\prime}(w_{0})\), and maximize \(|f^{\prime}(w_{0})|\), all at the same time. We now analyze the noise injection training process to see how these requirements can be satisfied.
Using notation similar to that of Section III-B and recalling Alg. 1, one iteration of the noise injection training process can be written as:
\[w_{t+1}=w_{t}-\alpha f^{\prime}(w_{t}+\Delta w) \tag{7}\]
where \(w_{t}\) is the current weight value, \(w_{t+1}\) is the updated weight value after this iteration of training and \(\alpha\) is the learning rate. By applying Taylor expansion on \(f^{\prime}(w_{t}+\Delta w)\), we obtain:
\[w_{t+1}\approx w_{t}-\alpha\left(f^{\prime}(w_{t})+\Delta wf^{\prime\prime}(w_{ t})+\frac{(\Delta w)^{2}}{2}f^{\prime\prime\prime}(w_{t})\right) \tag{8}\]
Considering a noise injection training process in which, in each training iteration, the device variation-induced weight perturbation \(\Delta w\) is sampled a sufficient number of times instead of only once, the statistical behavior of such noise injection training is:
\[\begin{split} w_{t+1}&=w_{t}-\alpha E_{\Delta w}[f^{ \prime}(w_{t}+\Delta w)]\\ &\approx w_{t}-\alpha\left(f^{\prime}(w_{t})+E[\Delta w]f^{\prime \prime}(w_{t})+\frac{E[(\Delta w)^{2}]}{2}f^{\prime\prime\prime}(w_{t}) \right)\end{split} \tag{9}\]
where \(E[\Delta w]\) is the expected value (_i.e._, mean) of \(\Delta w\).
Fig. 2: Illustration of KPP (in terms of top-1 accuracy). The red curve represents the accuracy distribution of a DNN in the presence of device variations. The intersection point of each straight line and the x-axis represents the k-th percentile accuracy.
By observing Eq. 9 and recalling the requirements derived through Eq. 6, namely that the DNN training process needs to (1) minimize \(f(w_{0})\), (2) minimize \(f^{\prime\prime}(w_{0})\), and (3) maximize \(|f^{\prime}(w_{0})|\) at the same time, we can analyze the three terms after \(\alpha\) in Eq. 9 to design the noise distribution to be injected.
For the three terms after \(\alpha\), the first term \(f^{\prime}(w_{t})\) is the first-order gradient used in vanilla gradient descent, which minimizes the value of \(f(w_{t+1})\). This satisfies the first requirement from Section III-B. A side effect is that, when the training process is close to converging, this term pushes the first-order gradient toward zero.
The third term \(\frac{E[(\Delta w)^{2}]}{2}f^{\prime\prime\prime}(w_{t})\) affects the second derivative. Because \(E[(\Delta w)^{2}]\) is always positive, this term reduces the value of \(f^{\prime\prime}(w_{t+1})\), which satisfies the second requirement from Section III-B.
The second term \(E[\Delta w]f^{\prime\prime}(w_{t})\) affects the first derivative. Since \(E[\Delta w]\) can be positive, zero, or negative, this term respectively decreases, leaves unchanged, or increases the first-order gradient. Combined with the first term, which pushes the first-order gradient toward zero, injecting noise with a negative mean results in a maximized positive first-order gradient, and vice versa. Because Eq. 6 requires a first-order gradient with a larger absolute value, a noise distribution with a non-zero mean is required. The widely used Gaussian distribution, whose mean is zero, does not meet this requirement. Therefore, a new type of noise needs to be used for noise injection training.
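A quick numerical check of this point (our own illustration, not from the paper): censoring the right tail of a zero-mean Gaussian shifts its empirical mean below zero, whereas the plain Gaussian stays at zero.

```
import numpy as np

rng = np.random.default_rng(0)
sigma_t, th = 1.0, 2.0                        # noise scale and censoring threshold (in units of sigma_t)
g = rng.normal(0.0, sigma_t, size=1_000_000)  # plain Gaussian noise: zero mean

rc = np.minimum(g, th * sigma_t)              # right-censored: values above th*sigma_t are set to th*sigma_t

print(g.mean())    # approximately 0
print(rc.mean())   # slightly negative, i.e., E[delta_w] < 0 as required
```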
### _Candidate Noise Distributions_
According to Section III-C, to improve the model robustness, the distribution injected during the training process needs to satisfy \(E[(\Delta w)^{2}]>0\) and \(E[\Delta w]\neq 0\). We also need this distribution to yield a model with high enough accuracy when noise-free, according to Section III-B. We consider four candidate noise distributions, all of which are variations of the Gaussian distribution: (a) Right-Censored Gaussian (RC-Gaussian), (b) Left-Censored Gaussian (LC-Gaussian), (c) Right-Truncated Gaussian (RT-Gaussian), and (d) Left-Truncated Gaussian (LT-Gaussian). In a Right-Censored Gaussian distribution, all values follow a Gaussian distribution except that those greater than a certain threshold are set (censored) to the threshold value; the LC-Gaussian distribution is analogous except that values smaller than the threshold are censored. The RC-Gaussian distribution is defined in Eq. 10. In contrast, in the Right-Truncated Gaussian distribution any value greater than the threshold is cut off, which means there is zero probability of the perturbation exceeding the threshold; the same applies to LT-Gaussian. The distribution histograms of the four candidates are shown in Fig. 3.
\[\text{{RC-Gaussian}}(th,\sigma_{t})=\begin{cases}th\times\sigma_{t},&\text{if }g\geq th\times\sigma_{t}\\ g,&\text{else}\end{cases},\quad g\sim\mathcal{N}(0,\sigma_{t}) \tag{10}\]
Through extensive experiments, we select the Right-Censored Gaussian distribution for injection during noise injection training because it results in the best KPP. The results of this study are shown in the experiment section.
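A minimal sketch of sampling RC-Gaussian weight noise following Eq. 10 (function and argument names are our own, not part of any released code):

```
import torch

def rc_gaussian_like(w, sigma_t, th):
    """Sample right-censored Gaussian noise with the same shape as tensor w (Eq. 10).

    Values drawn from N(0, sigma_t) that exceed th * sigma_t are censored to th * sigma_t,
    which gives the injected noise a slightly negative expected value.
    """
    g = torch.randn_like(w) * sigma_t
    return torch.clamp(g, max=th * sigma_t)

# Example: noise to be added to a weight tensor during one training iteration.
w = torch.zeros(1000, 1000)
delta_w = rc_gaussian_like(w, sigma_t=0.1, th=2.0)
print(delta_w.mean().item())   # slightly below zero
```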
### _Automated Hyperparameter Selection through Adaptive Training_
Right-Censored Gaussian noise injection training requires extensive hyperparameter tuning. Unlike traditional Gaussian noise injection training, which employs the same noise hyperparameters as the device variation-induced weight value deviation in order to accurately replicate the inference environment, injecting RC-Gaussian noise introduces different types of noise during training and inference. Thus, the two hyperparameters, \(\sigma_{t}\) and \(th\), need to be calibrated for each DNN model and each \(\sigma_{d}\) value. The process of determining the optimal hyperparameters can be time-consuming and requires significant human effort. AutoML [9]-based methods are possible solutions, but they typically require multiple trials to determine the optimal hyperparameters. Therefore, we propose an adaptive training method that finds the optimal noise hyperparameters during the training process. This method requires no manual hyperparameter tuning and takes only a single training run to train the optimal model. To develop this method, we first conduct a grid search of hyperparameters. As shown in Fig. 4, for both hyperparameters (\(\sigma_{t}\) and \(th\)), as the value of the hyperparameter increases, the DNN performance initially increases and then decreases after reaching an optimal point. This property allows us to use a binary search-like method to find the optimal hyperparameter values.
Specifically, during the training process, three identical DNN models are initialized and trained simultaneously, each by injecting noise with different hyperparameters determined by the binary search engine. After each epoch, the KPP of each model trained under noise with different hyperparameters is evaluated, and the binary search engine updates the hyperparameters of each model according to their performance rankings. The weights of the models are also reset to those of the model with the highest KPP. To stabilize training, the models are first trained for \(warm\) warm-up epochs without updating the noise hyperparameters. Moreover, to accelerate training, when the binary search converges, meaning that all three models use the same noise hyperparameters, the three models are merged into one, so that only a single model needs to be trained.
The binary search-like policy to identify the optimal value of one hyperparameter is as follows: with the starting point \(start\) and the ending point \(end\), in each iteration the three candidate values are the three quartiles of \([start,end]\), _i.e._, \(left=start+1\times(end-start)/4\), \(mid=start+2\times(end-start)/4\), and \(right=start+3\times(end-start)/4\). If the model trained with hyperparameter \(mid\) has the highest KPP, the optimal value is not in the ranges \([start,left]\) and \([right,end]\), so we can perform \(start\gets left\) and \(end\gets right\). Similarly, if the model trained with hyperparameter \(left\) has the highest KPP, we only perform \(end\gets right\), and if the model trained with hyperparameter \(right\) has the highest KPP, we only perform \(start\gets left\). This process is performed iteratively until \(|end-start|\leq 1e-4\).
Fig. 4: Results for the grid search of injecting right-censored Gaussian noise with different hyperparameters on model LeNet for dataset MNIST. The x-axis and y-axis represent the different choices of hyperparameters \(\sigma_{t}\) and \(th\), respectively. The z-axis represents the \(1^{st}\) percentile accuracy of the trained model. It is clear that the optimal solution sits in the middle of the search space for each hyperparameter.
Fig. 3: The distribution histogram of different candidate noise with \(\sigma_{t}=1\) and \(th=2\). The x-axis represents the perturbation magnitude and the y-axis represents the distribution density.
```
1://INPUT: DNN topology \(\mathcal{M}\), start and end perturbation magnitude \(start\), \(end\), RC-Gaussian threshold \(th\), number of training epochs \(ep\), number of warm up epochs \(warm\), number of evaluation samples during training \(N_{train}\), target device value variation \(\sigma_{d}\), target percentile \(q\), dataset \(\mathbf{D}\) and learning rate \(\alpha\);
2:Initialize three DNN models \(\mathcal{M}(\mathbf{w_{1}})\), \(\mathcal{M}(\mathbf{w_{2}})\), \(\mathcal{M}(\mathbf{w_{3}})\) of topology \(\mathcal{M}\);
3:for(\(i=0\); \(i<ep\); \(i++\)) do
4:if\(end-start<1e-4\)then
5: // Train only one model when \(start==end\).
6: NoiseTrain(\(\mathcal{M}\), \(\mathbf{w_{1}}\), RC-Gauss(\(th\), \(start\)), \(1\), \(\mathbf{D}\), \(\alpha\));
7:else
8: // Train three models with three different hyperparameters.
9:\(left\) = \(start+1\times(end-start)/4\);
10:\(mid\) = \(start+2\times(end-start)/4\);
11:\(right\gets start+3\times(end-start)/4\);
12: NoiseTrain(\(\mathcal{M}\), \(\mathbf{w_{1}}\), RC-Gauss(\(th\), \(left\)), \(1\), \(\mathbf{D}\), \(\alpha\));
13: NoiseTrain(\(\mathcal{M}\), \(\mathbf{w_{2}}\), RC-Gauss(\(th\), \(mid\)), \(1\), \(\mathbf{D}\), \(\alpha\));
14: NoiseTrain(\(\mathcal{M}\), \(\mathbf{w_{3}}\), RC-Gauss(\(th\), \(right\)), \(1\), \(\mathbf{D}\), \(\alpha\));
15:if\(i\geq warm\)then
16: // Only evaluate performance and update hyperparameters after warmup.
17: perf\({}_{1}\) = QuantiEval(\(\mathcal{M}\), \(\mathbf{w_{1}}\), \(\sigma_{d}\), \(q\), \(\mathbf{D}\), \(N_{train}\));
18: perf\({}_{2}\) = QuantiEval(\(\mathcal{M}\), \(\mathbf{w_{2}}\), \(\sigma_{d}\), \(q\), \(\mathbf{D}\), \(N_{train}\));
19: perf\({}_{3}\) = QuantiEval(\(\mathcal{M}\), \(\mathbf{w_{3}}\), \(\sigma_{d}\), \(q\), \(\mathbf{D}\), \(N_{train}\));
20: // use binary search to update hyperparameters
21:if\(\max(\)perf\({}_{1}\), perf\({}_{2}\), perf\({}_{3})) == \(perf_{2}\)then
22:\(start,end\), \(\mathbf{w_{1}}\), \(\mathbf{w_{3}}=left,right\), \(\mathbf{w_{2}}\), \(\mathbf{w_{2}}\);
23:else if\(\max(\)perf\({}_{1}\), perf\({}_{2}\), perf\({}_{3})) == \(perf_{1}\)then
24:\(end\), \(\mathbf{w_{2}},\mathbf{w_{3}}=right,\mathbf{w_{1}},\mathbf{w_{1}}\)
25:else if\(\max(\)perf\({}_{1}\), perf\({}_{2}\), perf\({}_{3})) == \(perf_{3}\)then
26:\(start,\mathbf{w_{1}},\mathbf{w_{2}}=left,\mathbf{w_{3}},\mathbf{w_{3}}\)
27:endif
28:endif
29:endif
30:endfor
```
**Algorithm 3** TRICE (\(\mathcal{M}\), \(start\), \(end\), \(th\), \(ep\), \(warm\), \(N_{train}\), \(\sigma_{d}\), \(q\), \(\mathbf{D}\), \(\alpha\))
Note that there are hyperparameter tuning algorithms that are more efficient than our method, in the sense that they require training fewer models with different hyperparameters. However, our approach is better suited for noise injection training for the following reasons. (1) It makes more estimates of model performance under different hyperparameters, thereby reducing the impact of imperfect KPP estimations obtained from a small number of Monte Carlo runs. (2) It continuously trains a model using the hyperparameter \(mid=(start+end)/2\), which is closer to the final optimal hyperparameter. This makes the training process easier to converge.
In our practice, we use adaptive search to automatically find perturbation scale \(\sigma_{t}\) and manually determine \(th\).
The whole training framework with automated hyperparameter tuning is named Training with Right-Censored Gaussian NoisE (TRICE) and is shown in Algorithm 3.
## IV Experimental Evaluation
In this section, we comprehensively evaluate our proposed TRICE method in terms of KPP improvement for CiM DNN accelerators suffering from device variations. We first discuss how to link the device value variations to additive noise on weights based on the noise model. We then compare the effectiveness of TRICE against SOTA baselines using different datasets, models, and different types of NVM devices that can be used to build NVCiM DNN accelerators. Ablation studies that show the advantages of RC-Gaussian noise over different noise candidates are also conducted.
### _Modeling of Device Variation-induced Weight Perturbation_
Without loss of generality, we mainly focus on device variations originating from the programming process, in which the conductance value programmed to NVM devices can deviate from the desired value. Next, we show how to model the impact of device variations on DNN weights.
Assume a \(H\) bits DNN weight, the desired weight value after quantization (\(\mathcal{W}_{des}\)) can be represented as:
\[\mathcal{W}_{des}=\frac{\max|\mathcal{W}|}{2^{H}-1}\sum_{j=0}^{H-1}h_{j}\times 2^{j} \tag{11}\]
where \(h_{j}\in\{0,1\}\) is the value of the \(j^{th}\) bit of the desired weight value, \(\mathcal{W}\) is the floating point weight value and \(\max|\mathcal{W}|\) is the maximum absolute value of the weight. For an NVM device capable of representing \(B\) bits of data, since each weight value can be represented by \(H/B\) devices2, the corresponding mapping process can be expressed as:
Footnote 2: Without loss of generality, we assume that \(H\) is a multiple of \(B\).
\[g_{i}=\sum_{j=0}^{B-1}h_{i\times B+j}\times 2^{j} \tag{12}\]
where \(g_{i}\) is the desired conductance of the \(i^{th}\) device representing a weight. Note that negative weights are mapped in a similar manner. Considering the impact of device variations, the actually programmed conductance value \(gp_{i}\) is as follows:
\[gp_{i}=g_{i}+\Delta g \tag{13}\]
where \(\Delta g\) is the deviation from the desired conductance value \(g_{i}\).
Thus when weight is programmed, the actual value \(\mathcal{W}_{p}\) mapped on the devices would be:
\[\mathcal{W}_{p} =\frac{\max|\mathcal{W}|}{2^{H}-1}\sum_{i=0}^{H/B-1}2^{i\times B}gp_{i} \tag{14}\] \[=\mathcal{W}_{des}+\frac{\max|\mathcal{W}|}{2^{H}-1}\sum_{i=0}^{H/B-1} \Delta g\times 2^{i\times B}\]
To simulate the above process, we follow settings consistent with existing works. Specifically, we set \(B=2\) based on existing works [3, 24], while \(H\) is specified by each model. For the device variation model, we adopt \(\Delta g\sim\mathcal{N}(0,\sigma_{d})\) (if not specified), which indicates that \(\Delta g\) follows a Gaussian distribution with a mean of zero and a standard deviation of \(\sigma_{d}\). We constrain \(\sigma_{d}\leq 0.4\) as this is a reasonable range that can be realized by device-level optimizations such as write-verify, based on the measurement results. Our model and parameter settings are in line with those of the RRAM devices reported in [7].
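The mapping and perturbation model of Eqs. 11-14 can be sketched as follows (a simplified illustration with \(H=4\), \(B=2\), the sign handled separately, and variable names of our own choosing):

```
import numpy as np

rng = np.random.default_rng(0)
H, B = 4, 2                        # weight bit-width and bits per device (H/B devices per weight)
sigma_d = 0.1                      # standard deviation of the conductance deviation
w = rng.uniform(-1.0, 1.0, 1000)   # floating-point weights
w_max = np.abs(w).max()

# Eq. 11/12: quantize each weight magnitude to H bits and split it across H/B devices.
levels = np.rint(np.abs(w) * (2**H - 1) / w_max).astype(int)
g = np.stack([(levels >> (i * B)) & (2**B - 1) for i in range(H // B)], axis=-1)

# Eq. 13: each device is programmed with a Gaussian deviation from its desired conductance.
gp = g + rng.normal(0.0, sigma_d, size=g.shape)

# Eq. 14: the weight value actually represented on the devices.
scale = w_max / (2**H - 1)
device_weights = 2.0 ** (B * np.arange(H // B))
w_p = np.sign(w) * scale * (gp @ device_weights)

print(np.abs(w_p - np.sign(w) * scale * (g @ device_weights)).max())  # perturbation magnitude
```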
### _Experimental Setup_
**Platforms and Metrics**: All experiments are conducted in PyTorch using an off-the-shelf GPU. To precisely capture the performance (accuracy) of the DNN model under device variations, our reported data points are averaged over 5 identical runs. For the evaluation metric, if not specified, we report the \(1^{st}\) percentile accuracy, which is the KPP using accuracy as the performance metric with \(k=1\). To obtain the KPP of a DNN model with sufficiently high precision, we run 10,000 Monte Carlo simulations (\(N_{sample}\) = 10,000); our experiments show that 10,000 runs yield a \(1^{st}\) percentile accuracy whose 95% confidence interval is \(\pm 0.009\) based on the central limit theorem.
**Baselines for Comparison**: We compare TRICE with three baselines that are built upon training: (1) training w/o noise injection, (2) CorrectNet [19], and (3) injecting Gaussian noise in training [3, 12]. For a fair comparison, we do not compare TRICE with other orthogonal methods like NAS-based DNN topology design [3, 8] or Bayesian Neural Networks [10], given that TRICE can be used together with them.
**Hyperparameters Setting**: For all experiments, TRICE uses the same hyperparameter setup: \(start=0\), \(end=2\times\sigma_{d}\), \(th=2\), \(ep=100\), \(warm=5\) and \(N_{train}=300\), where \(\sigma_{d}\) is the standard deviation of the device variation. We limit the range of \(\sigma_{d}\) as suggested by Sect. IV-A and report the effectiveness of TRICE across different \(\sigma_{d}\) values within that range. For other training hyperparameters such as learning rate, batch size, and learning rate schedulers, we follow the best practice in training a noise-free model.
### _The Effectiveness of TRICE on MNIST Dataset_
We first compare TRICE with the aforementioned baselines using the model LeNet to recognize the 10-class handwritten digits dataset MNIST [26]. LeNet is a plain convolutional neural network consisting of two convolution layers and three fully connected layers. All weights and layer outputs (_i.e._, activations) are quantized to four bits (\(H=4\)). We also compare TRICE with injecting right-censored Gaussian noise with handpicked hyperparameters (RC-Manual) as an ablation study. Table I shows the \(1^{st}\) percentile accuracy of models trained with different training methods under different levels of device variations (\(\sigma_{d}\)) following the noise model discussed in Section IV-A. As shown in Table I, compared with training w/o noise, CorrectNet improves the \(1^{st}\) percentile accuracy by up to 19.94%, but this is not comparable to the improvement of up to 49.44% from injecting Gaussian noise and up to 58.01% from our proposed TRICE. We can also observe that, compared with injecting Gaussian noise, TRICE can improve the \(1^{st}\) percentile accuracy by up to 15.42%. It is clear that TRICE outperforms all baselines in generating models with higher \(1^{st}\) percentile accuracy for all simulated \(\sigma_{d}\) values. Moreover, TRICE demonstrates more significant improvement when facing large device variations while still delivering comparable accuracy when \(\sigma_{d}\) is too small to distinguish the difference between training methods. Because CorrectNet cannot generate a model with higher robustness than injecting Gaussian noise, we do not show its results in the later experiments. The ablation study also shows that TRICE outperforms injecting right-censored Gaussian noise with handpicked hyperparameters (RC-Manual), with an improvement in \(1^{st}\) percentile accuracy of up to 9.95%, so we do not show the result of RC-Manual in the remainder of this paper.
### _The Effectiveness of TRICE in Large Models_
After showing the effectiveness of TRICE in the small LeNet model for MNIST, we further demonstrate its effectiveness by comparing it with the baselines on larger DNN models and larger datasets. We choose two representative models, VGG-8 [27] and ResNet-18 [28]. Both models use a 6-bit quantization (\(H=6\)) for weights and activations, and both perform image classification on the CIFAR-10 dataset [29]. As shown in Fig. 5(a) and Fig. 5(b), TRICE clearly outperforms all baselines for most device value deviation values and performs similarly to the baselines in the rare cases where the device value deviation is either too small to make an impact or too large to allow valid classification. Compared with injecting Gaussian noise, TRICE improves the \(1^{st}\) percentile accuracy by up to 25.09% and 26.01% on VGG-8 and ResNet-18, respectively.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dev. var. & \multicolumn{5}{c}{Training Method} \\ (\(\sigma_{d}\)) & w/o noise & CorrectNet & Gauss. & RC-Manual & TRICE \\ \hline
0.00 & 99.01 & 97.99 & 98.86 & 98.88 & 98.94 \\
0.05 & 93.31 & 97.56 & 97.45 & 96.89 & **98.08** \\
0.10 & 70.72 & 90.66 & 95.59 & 95.47 & **95.99** \\
0.15 & 38.15 & 67.70 & 87.60 & 90.43 & **90.58** \\
0.20 & 19.81 & 39.54 & 66.04 & 75.47 & **77.82** \\
0.25 & 11.95 & 22.26 & 40.27 & 50.14 & **54.12** \\
0.30 & 08.58 & 14.26 & 23.09 & 28.56 & **38.51** \\
0.35 & 06.89 & 10.83 & 14.38 & 16.83 & **25.29** \\
0.40 & 06.05 & 09.23 & 10.38 & 11.61 & **17.94** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Effectiveness of TRICE method on model LeNet for MNIST across different \(\sigma_{d}\) values. The performance is shown in \(1^{st}\) percentile accuracy. The baselines are vanilla DNN training w/o noise injection, CorrectNet [19], and injecting Gaussian noise in training [3, 12]. Injecting RC-Gaussian noise with hand-picked hyperparameters (RC-Manual) is also shown as an ablation study.
Fig. 5: Comparison of the \(1^{st}\) percentile accuracy achieved by models trained using TRICE and baseline methods on (a) VGG-8 and (b) ResNet-18 for dataset CIFAR-10. The x-axis represents the magnitude of device value variation (\(\sigma_{d}\)) and the y-axis represents the \(1^{st}\) percentile accuracy.
### _The Effectiveness of TRICE in Different Devices_
To demonstrate the scalability of TRICE, we also show the effectiveness of TRICE on NVCiM platforms using different types of NVM devices. As discussed in Section IV-A, previous experiments use a four level (2-bit, \(B=2\)) device as in [3, 24]. More specifically, it is a four-level RRAM device whose device value deviation model is \(\Delta g\sim\mathcal{N}(0,\sigma_{d})\), which means \(\Delta g\) follows Gaussian distribution with a mean of zero and a standard deviation of \(\sigma_{d}\), independent of the programmed device conductance.
We further analyze the effectiveness of TRICE on two real-world FeFET devices whose device value deviation magnitude varies with the programmed conductance. Their device models are derived from measurement results in [30]. Specifically, a generalized device value variation model for a four-level device is:
\[\begin{array}{ll}gp_{i}&=g_{i}+\Delta g\\ \Delta g&\sim\mathcal{N}(0,\sigma_{h})\end{array},\quad\sigma_{h}=\begin{cases} \sigma_{d0},&if\ g_{i}=0\\ \sigma_{d1},&if\ g_{i}=1\\ \sigma_{d2},&if\ g_{i}=2\\ \sigma_{d3},&if\ g_{i}=3\end{cases} \tag{15}\]
which means \(\Delta g\) follows a Gaussian distribution with a mean of zero and a standard deviation of \(\sigma_{h}\), where the value of \(\sigma_{h}\) depends on the programmed conductance. We abstract the behaviors of the two FeFET devices as:
\[\text{FeFET}_{1}\rightarrow\{\sigma_{d0}=\sigma_{d3}=\sigma_{d}, \sigma_{d1}=\sigma_{d2}=4\sigma_{d}\} \tag{16}\] \[\text{FeFET}_{2}\rightarrow\{\sigma_{d0}=\sigma_{d3}=\sigma_{d}, \sigma_{d1}=\sigma_{d2}=2\sigma_{d}\} \tag{17}\]
This means the devices suffer from more device variations when they are programmed to values 1 and 2 and from less device variations when they are programmed to values 0 and 3. As a comparison, we show the conductance (\(gp\)) distribution of the previously used RRAM device and FeFET\({}_{2}\) in Fig. 7(a) and Fig. 7(b), respectively.
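A small sketch of this value-dependent variation model (Eq. 15 with the FeFET\({}_{2}\) abstraction of Eq. 17; our own illustration, not the measured device model of [30]):

```
import numpy as np

rng = np.random.default_rng(0)
sigma_d = 0.1

def fefet2_program(g, rng):
    """Program a 2-bit device with value-dependent Gaussian deviation (Eq. 15, FeFET2 case)."""
    sigma_h = np.where((g == 1) | (g == 2), 2.0 * sigma_d, sigma_d)   # middle levels are noisier
    return g + rng.normal(0.0, 1.0, size=np.shape(g)) * sigma_h

g = rng.integers(0, 4, size=10000)          # desired conductance levels in {0, 1, 2, 3}
gp = fefet2_program(g, rng)
for level in range(4):
    print(level, gp[g == level].std())      # empirical deviation per programmed level
```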
We report the effectiveness of TRICE on NVCiM platforms using FeFET\({}_{1}\) and FeFET\({}_{2}\) in Fig. 6(a) and Fig. 6(b), respectively. As expected, TRICE again outperforms all baselines for most \(\sigma_{d}\) values and performs similarly to the baselines where the device value deviation is too small to make an impact. Compared with injecting Gaussian noise, TRICE improves the \(1^{st}\) percentile accuracy by up to 15.61% and 12.34% on FeFET\({}_{1}\) and FeFET\({}_{2}\), respectively.
### _Ablation Study for Different Noise Candidates_
We also show the effectiveness of injecting RC-Gaussian noise in the training process by comparing it against injecting the other three noise candidates: LC-Gaussian, RT-Gaussian, and LT-Gaussian noise. The result of training with Gaussian noise is also included as a baseline. Without loss of generality, we perform this study on LeNet for the MNIST dataset using uniform RRAM devices with \(\sigma_{d}=0.25\). As shown in Fig. 8, training with RC-Gaussian noise shows a clear advantage of at least 8.76% over training with the other types of noise. Note that training with left- and right-truncated Gaussian noise performs even worse than injecting Gaussian noise because it exhibits lower accuracy w/o the presence of device variations.
## V Conclusions
In this work, we propose to use the k-th percentile performance (KPP) instead of the widely used average performance as a metric to evaluate the realistic worst-case performance of a DNN model. By analyzing the properties of DNN models and noise injection-based training, we show that conventional Gaussian noise injection training is far from optimal in improving KPP. Thus, we propose TRICE, which injects right-censored Gaussian noise during training. Extensive experiments show that TRICE clearly outperforms SOTA baselines in improving the k-th percentile performance of DNN models.
Fig. 8: Comparison of injecting different types of noise in training LeNet for MNIST. The y-axis represents the \(1^{st}\) percentile accuracy of models trained by injecting different types of noise when \(\sigma_{d}=0.25\).
Fig. 6: Comparison of the \(1^{st}\) percentile accuracy achieved by models trained using TRICE and baseline methods on LeNet for dataset MNIST targeting devices (a) FeFET\({}_{1}\) and (b) FeFET\({}_{2}\). The x-axis represents the magnitude of device value variation (\(\sigma_{d}\)) and the y-axis represents the \(1^{st}\) percentile accuracy.
Fig. 7: Illustration of uniform and non-uniform devices. (a) Uniform devices suffer from the same magnitude of noise when programmed to different conductance values. (b) Non-uniform devices suffer from different magnitudes of noise when programmed to different conductance values. The perturbation is more significant when the conductance value is 1 or 2, and less significant when it is 0 or 3.
2302.06997 | Data Release of the AST3-2 Automatic Survey from Dome A, Antarctica | AST3-2 is the second of the three Antarctic Survey Telescopes, aimed at
wide-field time-domain optical astronomy. It is located at Dome A, Antarctica,
which is by many measures the best optical astronomy site on the Earth's
surface. Here we present the data from the AST3-2 automatic survey in 2016 and
the photometry results. The median 5$\sigma$ limiting magnitude in $i$-band is
17.8 mag and the light curve precision is 4 mmag for bright stars. The data
release includes photometry for over 7~million stars, from which over 3,500
variable stars were detected, with 70 of them newly discovered. We classify
these new variables into different types by combining their light curve
features with stellar properties from surveys such as StarHorse. | Xu Yang, Yi Hu, Zhaohui Shang, Bin Ma, Michael C. B. Ashley, Xiangqun Cui, Fujia Du, Jianning Fu, Xuefei Gong, Bozhong Gu, Peng Jiang, Xiaoyan Li, Zhengyang Li, Charling Tao, Lifan Wang, Lingzhe Xu, Shi-hai Yang, Ce Yu, Xiangyan Yuan, Ji-lin Zhou, Zhenxi Zhu | 2023-02-14T12:08:27Z | http://arxiv.org/abs/2302.06997v1 | # Data Release of the AST3-2 Automatic Survey from Dome A, Antarctica
###### Abstract
AST3-2 is the second of the three Antarctic Survey Telescopes, aimed at wide-field time-domain optical astronomy. It is located at Dome A, Antarctica, which is by many measures the best optical astronomy site on the Earth's surface. Here we present the data from the AST3-2 automatic survey in 2016 and the photometry results. The median 5\(\sigma\) limiting magnitude in \(i\)-band is 17.8 mag and the light curve precision is 4 mmag for bright stars. The data release includes photometry for over 7 million stars, from which over 3,500 variable stars were detected, with 70 of them newly discovered. We classify these new variables into different types by combining their light curve features with stellar properties from surveys such as StarHorse.
keywords: surveys - catalogues - stars:variables:general
## 1 Introduction
Time-domain astronomy has led to many astronomical discoveries through exploring the variability of astronomical objects over time. Transient targets such as supernovae (SNe), gamma-ray bursts, and tidal disruption events (TDEs) give valuable insights in astronomy and fundamental physics. Many survey projects have been undertaken to search for variable sources by repeatedly scanning selected sky areas. Deep surveys over wide areas of sky require specialized telescopes such as the Large Binocular Telescope (LBT; Hill & Salinari, 2000) and the Large Synoptic Survey Telescope (LSST; Ivezic et al., 2008), and results from such surveys will doubtless make revolutionary discoveries in coming years. High cadence is also important for time-domain surveys when searching for transients such as exoplanets, rapidly-changing objects, and short-term events. The Wide Angle Search for Planets (WASP; Pollacco et al., 2006) consortium has discovered numerous exoplanets with its high cadence. The Zwicky Transient Facility (ZTF; Bellm et al., 2019) has discovered over 3,000 supernovae from its first year of operations with a cadence as rapid as 3 days.
The Antarctic plateau is an ideal site for ground-based time-domain astronomy with its long clear polar nights that can provide long-term continuous observing time as well as other excellent observing conditions (Storey, 2005, 2007; Ashley, 2013). The clean air can minimize the scattering of light, the cold air is good for infrared observations due to the low thermal background, and the stable atmosphere provides remarkably good seeing.
As the highest location on the Antarctic ice cap, Dome A was first reached by the 21st CHInese National Antarctic Research Expedition (CHINARE) in 2005. It is also the place where the Chinese Kunlun station was established. Many site testing studies have been conducted here during the past decade, and the results have confirmed that Dome A is an excellent site for astronomical observations. A complete summary of the astronomy-related work at Dome A can be found in Shang (2020). We present some important results briefly below.
The Chinese Small Telescope ARray (CSTAR) showed that the median \(i\)-band sky background of moonless clear nights is 20.5 mag arcsec\({}^{-2}\) (Zou et al., 2010). The KunLun Cloud and Aurora Monitor (KLCAM) showed that the nighttime clear sky rate is 83 per cent, which is better than most ground-based sites (Yang et al., 2021). Moreover, the Surface layer NOn-Doppler Acoustic Radar (SNODAR; Bonner et al., 2010) showed a very shallow atmospheric turbulent boundary layer at Dome A, with a median thickness of only 13.9 m. The multilayer Kunlun Automated Weather Station (KLAWS) showed that a temperature inversion often occurs near the ground, which leads to a stable atmosphere where cooler air is trapped under warmer air (Hu et al., 2014, 2019). The results from SNODAR and KLAWS suggest that extremely good seeing is relatively easy to obtain at Dome A since the telescope only has to be above the shallow turbulent boundary layer to achieve free-atmosphere conditions. This is impractical at traditional observatory sites where the boundary layer is typically many hundreds of metres above the ground. In 2019, the two KunLun Differential Image Motion Monitors (KL-DIMMs) directly confirmed these ideas by measuring the seeing at Dome A from an 8 m tall tower. Superb night-time seeing as good as 0.13'' was recorded. The median free-atmosphere seeing was 0.31'' and the KL-DIMMs reached the free atmosphere from the 8m tower 31% of the time (Ma et al., 2020). In summary, the studies described above have demonstrated that by many measures Dome A has the best optical observational conditions from the Earth's surface.
With such exceptional observing conditions, telescopes were planned and constructed to operate at Dome A for time-domain astronomy. The first-generation optical telescope, CSTAR, was installed in 2008 January (Yuan et al., 2008; Zhou et al., 2010). It observed a 20 deg\({}^{2}\) sky area centred at the South Celestial Pole with four co-aligned 14.5cm telescopes. CSTAR obtained data for three years and has contributed to many studies on stellar variability (Wang et al., 2011; Yang et al., 2015; Zong et al., 2015; Liang et al., 2016; Oelkers et al., 2016). The three Antarctic Survey Telescopes (AST3; Cui et al., 2008) were later planned as the second-generation optical telescopes at Dome A, with larger apertures and the ability to point and track over the sky, as opposed to CSTAR's conservative engineering approach of having a fixed altitude.
The first AST3 telescope (AST3-1) was installed at Dome A in 2012 by the 28th CHINARE. AST3-1 surveyed a sky area of roughly 2000 deg\({}^{2}\) and the data have been released (Ma et al., 2018). AST3-1 also monitored some specific sky regions such as the Large and Small Magellanic Clouds. These data were used for research on exoplanets and variable stars. For example, AST3-1 detected about 500 variable stars around the Galactic disk centre, with 339 of them being newly discovered (Wang et al., 2017).
The AST3 telescopes were originally conceived as multi-band survey telescopes operating together, but the goal has not been achieved due to various logistic difficulties, such as the required amount of electrical power. The second AST3 telescope (AST3-2) was installed in 2015 by the 31st CHINARE. This work is based on the data from AST3-2. The third AST3 (AST3-3) has been constructed and will be equipped with a K-dark infrared camera (Burton et al., 2016; Li et al., 2016).
Here we present the data and photometry from the AST3-2 sky survey as well as an analysis of the light curves. We first present the basic design of AST3-2 in section 2 and go on to discuss the survey parameters and operational strategy in section 3. In section 4 we discuss the data reduction process and results. In section 5 we present the light curves, the result of period searches, and the classification of objects. The overall statistics of the catalogue and data access are discussed in section 6. Finally, we summarize the results in section 7.
## 2 Instrument
The details of the AST3 system have been presented in previous works (Yuan et al., 2010; Yuan & Su, 2012; Yuan et al., 2014). Here we briefly describe the basic features of the AST3-2, the second telescope of AST3.
AST3-2 has the same modified Schmidt optical design as the AST3-1. It has a 680mm primary mirror, an entrance pupil diameter of 500mm, a 3.73 f-ratio, and an SDSS \(i\) filter. The AST3 telescopes were designed specially to work in the harsh environment of Dome A where the ambient temperature in the observation season ranges from \(-80^{\circ}\)C to \(-50^{\circ}\)C. The AST3 telescopes and the mounting system were built with low thermal expansion materials such as Invar to minimize the thermal effects. This design enables the AST3-2 to work in extremely low temperatures, but we still had occasional problems with gears being stuck or jammed by ice. To cope with optical element frosting problems that are common in Antarctica, a defrosting system was designed with an indium-tin-oxide (ITO) coating on the entrance aperture to the telescope and a warm blower inside the tube. However, in the first year of operation, the frosting problem on the first surface was not completely solved. The ITO coating was sometimes insufficient to defrost the ice and the blow heater had to work frequently, resulting in significant tube seeing and poor image quality. To solve this problem, an external defrosting blower system was installed in front of the telescope in 2016.
AST3-2 is equipped with a 10K \(\times\) 10K STA1600FT CCD with a pixel size of 9\(\mu\)m. There are 16 read-out channels for the CCD to reduce the read-out time, which is 2.5s in fast read-out mode and 40s in slow read-out mode. To prevent shutter failure in cold weather, the camera works without a mechanical shutter, instead relying on frame-transfer mode and dedicating half of the CCD area to a buffer that is not exposed to light. The astronomically usable area of the CCD is therefore 10K \(\times\) 5K pixels, with a scale of 1''/pixel over a FOV of \(2.93^{\circ}\times 1.47^{\circ}\). Since the CCD camera is installed inside the telescope tube, it also faced some heat dissipation problems, causing the CCD to often operate at temperatures as warm as \(-50^{\circ}\)C to \(-40^{\circ}\)C, leading to a significant dark current. Since we could not take dark frames on-site and the previously-taken laboratory dark images have different patterns, a new method was developed to derive a dark frame from the science images and will be discussed in section 4.1.2. There was also a problem with the AST3 CCD in that the photon transfer curve became non-linear at a level around 25000 ADU, leading to the brighter-fatter effect (Ma et al., 2014). Fig. 1 shows a raw image taken by AST3-2. Detailed laboratory tests of the CCD performance can be found in Ma et al. (2012) and Shang et al. (2012).
The AST3-2 is powered by the PLATeau Observatory for Dome A (PLATO-A; Ashley et al., 2010). PLATO-A is a self-contained automated platform providing an average power of 1kW for at least 1 year. It also provides Internet access through the Iridium satellite constellation. The hardware and software of the control, operation, and data system (CODS) of AST3-2 were designed to be responsible for the automated sky survey (Shang et al., 2012; Hu et al., 2016; Shang et al., 2016; Ma et al., 2020). The CODS consists of the main control system, the data storage array, and the pipeline system. To ensure the success of the sky survey, we developed the CODS to be stable and reliable under the conditions of low power availability (1 kW), low data bandwidth (a maximum of about 2 GB over the course of the year), and the unattended situation in the harsh winter of Dome A. The supporting software provides a fully automatic survey control and a real-time data processing pipeline on-site.
## 3 Observations and Data
The observing season at Dome A starts in mid-March when the Sun reaches 13 degrees below the horizon, i.e., at the end of twilight (Zou et al., 2010). The automated and unattended AST3 sky survey strategy was designed to optimize the available observing time and was realized with a survey scheduler in the CODS software (Liu et al., 2018). The scheduler provides three different survey modes depending on the scientific requirements. The SN survey mode mainly focuses on a survey for SNe and other transients, the exoplanet search mode aims at discovering and monitoring short-period exoplanets, and an additional special mode mainly targets the follow-up of transients.
Following twilight, the AST3-2 was initially dedicated to the SN survey mode, lasting from 2016 March 24 to May 16, at which point the long continuous polar night began and the survey switched to exoplanet mode. The SN survey was designed for the early discovery of SNe as well as other transients, and for time-domain astronomy of variable stars. It surveyed sky areas of 2200 deg\({}^{2}\), covering 565 fields with about 30 visits each in a cadence of a half to a few days based on the fraction of dark time within a day. Fig. 2 shows the sky coverage of this survey. The real-time pipeline from CODS performed onsite data reduction and sent the SN or other transient candidates back to China for further confirmation and follow-up observations. For example, the real-time pipeline discovered the SN 2016ccp (Hu et al., 2016) and the Type IIP SN 2017bfq (Wang et al., 2017). During the test observations in Mohe, China, the AST3-2 recorded the SN 2014J in M82 (Ma et al., 2014) and discovered the type Ia SN 2014M (Ma et al., 2014). The real-time pipeline is also capable of detecting other variables such as dwarf novae (Ma et al., 2016), although most of the variables were not reported by the real-time pipeline. So in this work, we mainly use the SN survey mode data, retrieved when the hard disks were physically returned from Dome A, to obtain the photometric catalogue and light curves of other variables.
The AST3-2 exoplanet project is named the CHinese Exoplanet Searching Program from Antarctica (CHESPA). To search for short-period exoplanets rapidly and continuously, the exoplanet search mode started during the period of continuous dark polar nights: from May 16 to June 22. The exoplanet search covered a smaller sky area than the SN survey, with 10 to 20 fields in each target region. The target region during 2016 contained 10 adjacent fields from the southern continuous viewing zone of TESS (Ricker et al., 2009). This part of the data has been analysed in previous works (Zhang et al., 2019, 2019; Liang et al., 2020).
Finally, a special mode was designed for the rapid follow-up of observations of interesting transients from the AST3-2 SN or exoplanet surveys, or from surveys by other telescopes. This mode has the highest priority. When an interesting target triggers the alert, it will pause other observations and resume them after the special observation is finished. In 2017, AST3-2 successfully detected the first optical counterpart of the gravitational wave source GW170817 (Hu et al., 2017).
## 4 Data Reduction
The 2016 data of AST3-2 was retrieved by the 33rd CHINARE. We focus on the SN survey data for this work. First, in the CCD image pre-processing we carried out corrections for cross-talk, image trimming, overscan subtraction, dark current, flat-field, and an unusual diagonal stripe noise described below. Then we performed photometric and astrometric calibration to obtain the source catalogue. Finally, we cross-matched the catalogues to obtain the light curves. Details of the data reduction process are discussed in the subsections below.
### Preprocessing
#### 4.1.1 Image trimming and overscan subtraction
The AST3 raw image has \(12000\times 5300\) pixels including overscan regions and is divided into 16 channels with a size of \(1500\times 2650\) each. As described in section 2, the AST3 CCD works in frame-transfer mode, which means it does not have a shutter. Since the
Figure 1: An example of a raw image that is taken from the survey fields by AST3-2. There are 16 readout channels with different bias levels and overscan regions. The lower 8 channels are read out towards the bottom of the CCD, and the upper 8 channels are read out towards the top. Each of the readouts has an area of 1500 pixels \(\times\) 2660 pixels including overscan. The overscan regions have 180 columns on the right of each readout and 20 horizontal rows in the middle of the image.
Figure 2: The sky coverage of survey observations from AST3-2 in 2016. Each rectangle region is a target sky field based on the survey scheduler.
zero-second exposure is not a true zero exposure (the frame transfer takes a finite time), photons are still gathered in the 0 s bias frame when there is no shutter. This design makes it hard to take a bias frame on-site. Instead, we used the overscan regions to remove the effect of the bias voltage. As Fig. 1 shows, the overscan regions are the right 180 columns of each channel and 20 rows in the middle of the full raw image.
Because the top and bottom rows of the CCD are insensitive to light, we removed another 80 rows each from the top and bottom of the CCD full images. After overscan correction and image trimming, the final raw images have a size of \(10560\times 5120\) pixels.
#### 4.1.2 Dark current subtraction
As described in section 2, the CCD temperature was not very stable and could sometimes be above \(-50^{\circ}\)C, making the dark current non-negligible. Moreover, the laboratory dark images had different patterns and were not usable for dark correction in practice. Additionally, we could not take dark frames on site because the CCD does not have a shutter and the AST3 was unattended for at least one year. Therefore, a new method was developed to derive the dark frame from the scientific images themselves; it had been successfully applied to the AST3-1 images (Ma et al., 2014, 2018). Here we briefly describe this method and how we utilized it in the AST3-2 preprocessing.
The brightness \(I\) of a pixel \((x,y)\) can be described as follows:
\[I(x,y)=S_{T}+D(T)+\Delta d(T,x,y), \tag{1}\]
where \(S_{T}\) is the sky background, \(D(T)\) is the median dark current level at temperature \(T\), and \(\Delta d(T,x,y)\) is the deviation from the median dark current in pixel \((x,y)\) at temperature \(T\). The stars can be ignored by a median algorithm if we combine large numbers of images from different sky fields. For a single image, \(D(T)\) can be considered constant. Also, the sky brightness can be considered a constant because it is spatially flat enough after twilight (Yang et al., 2017). The first two terms on the right-hand side of equation (1) can therefore be considered constant for a single image. To derive the distribution of the deviation from the median dark current level \(\Delta d(T,x,y)\), we need two scientific images taken at the same temperature but with different sky brightnesses. By scaling the two images to an equivalent median level and subtracting one from the other, we can derive a \(\Delta d(T,x,y)\) image at a specific temperature \(T\). We repeated this process for different pairs of images at the same \(T\) and combined the dark images to construct a master dark image for that temperature. Fig. 3 shows the master dark image derived from the 2016 observations.
For different temperatures, the dark current level of the AST3 CCD doubles for every \(7.3^{\circ}\)C increase in temperature between \(-80^{\circ}\)C and \(-40^{\circ}\)C (Ma et al., 2012). We used this relation to scale the master dark image to different temperatures and correct the dark current for all images.
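To make the procedure concrete, the following is a minimal numpy sketch of the pair-differencing and temperature-scaling steps described above. It is an illustration rather than the actual AST3 pipeline code; the function and variable names are ours, and it assumes the two frames in a pair have noticeably different sky levels so that the renormalisation is well defined.

```python
import numpy as np

DOUBLING_STEP = 7.3  # deg C per doubling of the dark current (Ma et al., 2012)

def delta_dark_from_pair(img_a, img_b):
    # Scale img_b to the median level of img_a so that the constant terms
    # S_T + D(T) of equation (1) cancel in the difference; what remains is
    # (1 - scale) * delta_d, which we renormalise to delta_d.
    scale = np.median(img_a) / np.median(img_b)
    return (img_a - scale * img_b) / (1.0 - scale)

def master_dark(image_pairs):
    # Median-combine pair differences taken on different sky fields so that
    # stars average out, leaving only the fixed dark-current pattern.
    return np.median([delta_dark_from_pair(a, b) for a, b in image_pairs], axis=0)

def dark_at_temperature(master, t_master, t_image):
    # Scale the master dark to the temperature of a given science frame,
    # using the doubling of the dark current every 7.3 deg C.
    return master * 2.0 ** ((t_image - t_master) / DOUBLING_STEP)
```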
#### 4.1.3 Flat field correction
During the beginning of the observing season, we took numerous twilight sky images and produced a master flat-field image. The large FOV of AST3 led to a non-uniform large-scale gradient of the twilight images, which varied with the Sun elevation and angular distance from the field. The method of brightness gradient correction was studied in Wei et al. (2014). Two-dimensional fitting was applied to each flat image to correct the brightness gradient. Finally, we median combined the corrected flat images to construct a master flat-field image. After the correction, the mean root-mean-square (RMS) of the master flat was far below 1 per cent.
#### 4.1.4 Cross-talk and stripey noise corrections
Due to the simultaneous CCD readouts, when one amplifier reads a saturated pixel, other amplifiers will be affected. There are significant CCD cross-talk effects in the raw images. As Fig. 4 shows, when one saturated pixel is read in one readout channel, the other 15 channels will have a negative ghost image at the exact position of the saturated pixel presenting as a dark spot. To remove the effect, we initially planned to locate all the saturated pixels, find the position of the related ghost pixels, and add the appropriate negative values back. However, the unsaturated pixels around the saturated ones also have cross-talk effects, making the ghost images hard to locate. So we developed a method to correct the cross-talk effect during the correction for the stripy noise, described below.
As Fig. 5 shows, the raw images of AST3 in 2016 have shown an unusual kind of stripey noise. After careful investigation, we found that the diagonal stripes were due to electromagnetic interference at 16 kHz caused by a broken ground shield in the cables for the telescope's DC motor drives. Because this noise lies in exactly the same positions in each of the 16 CCD channels, and is extremely
Figure 4: An example of the cross-talk effect in the AST3 raw image. The left panel shows a saturated star in one channel that would cause the cross-talk effect in other channels. The middle panel shows the mirror pixels where the saturated pixels are in another CCD channel. The stripey noise can also be seen in this area. The right panel shows the same image as in the middle panel but after the cross-talk effect and the stripey noise are corrected.
Figure 3: The dark frame generated from observation. The difference between bright and dark regions is obvious.
reproducible, for each channel we constructed a filtered image from the other 15 channels by median combining the star-removed images of single channels. By subtracting from each channel the filtered image, we can remove the stripy noise to the point where it is not detectable, as Fig. 5 shows. The pattern of the noise is similar to the cross-talk effect, which also lies at the same position of different readout channels. So, the above method also helped to correct the cross-talk problem.
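Since the stripes and the cross-talk ghosts repeat at the same pixel positions in all 16 readout channels, the correction can be sketched as below. This is only an illustrative numpy version of the idea; the real pipeline additionally removes stars from the single-channel images before combining them, and all names here are ours.

```python
import numpy as np

def remove_repeating_pattern(channels):
    """channels: float array of shape (16, ny, nx), one slice per readout
    channel (assumed star-removed and on a common background level).
    For each channel, build a template from the median of the other 15
    channels and subtract it; because the stripes and cross-talk ghosts sit
    at the same pixel positions in every channel, both are removed."""
    channels = np.asarray(channels, dtype=float)
    cleaned = np.empty_like(channels)
    for i in range(channels.shape[0]):
        others = np.delete(channels, i, axis=0)
        template = np.median(others, axis=0)
        # Subtract only the pattern, not the channel's own background level.
        cleaned[i] = channels[i] - (template - np.median(template))
    return cleaned
```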
### Photometry and astrometry
#### 4.2.1 Photometry
We performed aperture photometry using the Source Extractor (Bertin & Arnouts, 1996). Considering the changing full width at half maximum (FWHM) of our images, we used multiple apertures to adapt to the varying image quality. The aperture radii were set to 3, 5, and 7 pixels. Because the median FWHM of our data is 5 pixels, we set the default aperture radius to 5 pixels, or 5\({}^{\prime\prime}\) at our pixel scale of 1\({}^{\prime\prime}\)/pixel. An additional Kron-like elliptical aperture magnitude MAG_AUTO was adopted for galaxies. Fig. 6 shows the photometric accuracy of two consecutive images by comparing the magnitude differences between them.
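The measurements themselves were made with the original Source Extractor; purely as an illustration of the multi-aperture setup, a roughly equivalent measurement can be sketched with the sep Python package (which reimplements Source Extractor's core). The detection threshold and names below are our choices, not the pipeline's.

```python
import numpy as np
import sep

def multi_aperture_photometry(image):
    data = np.ascontiguousarray(image, dtype=np.float64)
    bkg = sep.Background(data)
    data_sub = data - bkg.back()
    # Detect sources at an illustrative 1.5-sigma threshold.
    objects = sep.extract(data_sub, 1.5, err=bkg.globalrms)
    fluxes = {}
    for radius in (3.0, 5.0, 7.0):  # pixels; 1 pixel = 1 arcsec for AST3-2
        flux, fluxerr, _ = sep.sum_circle(data_sub, objects["x"], objects["y"],
                                          radius, err=bkg.globalrms)
        fluxes[radius] = (flux, fluxerr)
    return objects, fluxes
```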
#### 4.2.2 Astrometry
For astrometry, we used SCAMP to solve for the World Coordinate System (Bertin, 2006). We adopted the Position and Proper Motions eXtended catalogue (PPMX) as the reference, which contains 440 sources per deg\({}^{2}\) with a one-dimensional precision of 40 mas (Roser et al., 2008). As a result, the external precision of our astrometric calibration is 0.1\({}^{\prime\prime}\) and the internal precision is 0.06\({}^{\prime\prime}\), in both Right Ascension (RA) and Declination (Dec.).
#### 4.2.3 Flux calibration
We adopted the SkyMapper catalogue as the \(i\)-band magnitude reference for the flux calibration (Wolf et al., 2018). The SkyMapper Southern Survey is a southern hemisphere survey carried out with the SkyMapper Telescope at Siding Spring Observatory in Australia. It covers an area of 17,200 deg\({}^{2}\) and reaches a limiting depth of roughly 18 mag in the \(uvgriz\) passbands.
We first chose the best frame of each survey field for absolute calibration. The "best" refers to the image with the best quality in a field, considering the number of detected sources, background brightness, FWHM, and elongation. Then we calculated the \(i\)-band zero point for calibration. We only chose stars between 11 and 14 mag in the \(i\) band for calibration, to balance high accuracy with a sufficient number of stars.
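A minimal sketch of the zero-point step is given below, assuming the AST3-2 sources have already been cross-matched to SkyMapper; the robust-scatter estimate is our addition and not necessarily what the pipeline computes.

```python
import numpy as np

def zero_point(inst_mag, ref_mag, bright=11.0, faint=14.0):
    """Zero point of one frame (or one readout channel) from matched stars:
    inst_mag are instrumental aperture magnitudes, ref_mag the SkyMapper
    i-band magnitudes; only stars between 11 and 14 mag are used."""
    sel = (ref_mag > bright) & (ref_mag < faint) & np.isfinite(inst_mag)
    resid = ref_mag[sel] - inst_mag[sel]
    zp = np.median(resid)
    scatter = 1.4826 * np.median(np.abs(resid - zp))  # robust (MAD) scatter
    return zp, scatter

# Calibrated magnitudes are then inst_mag + zp.
```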
After the absolute calibration, we used these calibrated images as references to relatively calibrate the other images of each survey field. However, we found that the zero point changes with position
Figure 5: Upper. An example of the stripy noise in the AST3 raw image due to a break in a cable shield. Lower: The same image area but with the noise removed through the method discussed in section 4.1.4.
Figure 6: The upper panel presents the magnitude differences between two consecutive exposures b160505.000122 and b160505.000123 as a function of \(i\)-band magnitude measured with a circular aperture of 3\({}^{\prime\prime}\) in radius. The lower panel presents the magnitude RMS calculated in 0.2 magnitude bins. The different colours represent different circular apertures. The solid lines show the expected photon-noise error for the 5\({}^{\prime\prime}\)-radius aperture. The trend of the magnitude \(\sigma\) with aperture size is opposite at the bright and faint ends. At the brighter end, \(\sigma\) is lower for larger aperture radii: for bright stars, whose own photon noise dominates, larger apertures are more appropriate. Stars brighter than 11 mag are saturated and have higher \(\sigma\). At the fainter end, \(\sigma\) is lower for smaller aperture radii, because the sky background dominates the noise for faint stars and a smaller aperture is more suitable. The number of stars is insufficient at the very faint end, so the measured \(\sigma\) there is not reliable and appears smaller than the ideal photon noise.
and the cause still requires further investigation. To avoid a large field non-uniformity of the zero point, we decided to perform the flux calibration separately in each readout channel.
For the AST3-2 survey, we only have \(i\)-band data. To investigate the colour term, we compared the AST3-2 \(i\)-band data with SkyMapper \(i\)- and \(g\)-band data as Fig. 7 shows. The colour coefficient is 0.02, much smaller than that for AST3-1 reported by Ma et al. (2018). However, we used a different reference catalogue from AST3-1, which adopted the AAVSO Photometric All-Sky Survey catalogue (APASS; Henden et al. (2016)). Our \(i\)-band magnitude matches relatively well with the SkyMapper catalogue, but to compare with other catalogues observed in the same band we need to be cautious.
### Data quality
Fig. 8 displays the distributions of the data quality, showing median values of the elongation of \(\sim\)1.17, FWHM of 5\({}^{\prime\prime}\), background of 670 ADU, and limiting magnitude of 17.8 mag. Some issues with the tracking stability of AST3-2 led to the elongated star profiles. We also see this problem in the range of FWHM, which varies from 3 to 7 arcseconds. Another cause of the wide FWHM distribution was the changing tube seeing. In the extremely cold and high relative humidity conditions at Dome A, frost can form on the first surface of the optical system, which reduces the transmission and changes the point-spread function through scattering. As described in section 2, a heater and a blower were used to prevent the frosting problem, and the tube seeing would be unstable when they were working. As a result, the limiting magnitude is not as good as that of the first AST3 telescope, AST3-1 (Ma et al., 2018).
## 5 Stellar variability and statistics
### Time series
Images with poor quality were first excluded to ensure the quality of the light curves. Such images could result from heavy frost, or from doubled star images caused by tracking problems. We also excluded images with a background brightness larger than 10,000 ADU, a median FWHM larger than 8\({}^{\prime\prime}\), fewer than 2000 detected stars, or a median elongation larger than 2. In this way, we excluded about 30 per cent of the images. We then cross-matched the targets in each field and obtained light curves. Finally, an additional outlier elimination was performed to remove false targets with obviously anomalous magnitudes and FWHMs. Fig. 9 shows a typical light-curve dispersion with an aperture radius of 5\({}^{\prime\prime}\).
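The image selection can be summarised by the cuts below; `frame_stats` is a hypothetical dictionary of per-frame median statistics, used here only to make the thresholds explicit.

```python
def is_good_frame(frame_stats):
    """Quality cuts applied before building light curves."""
    return (frame_stats["background"] <= 10000      # ADU
            and frame_stats["fwhm"] <= 8.0          # arcsec
            and frame_stats["n_stars"] >= 2000
            and frame_stats["elongation"] <= 2.0)
```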
### Period search
On average, we observed each survey field 30 times during the year. Some targets might not be detected in some images due to poor image quality, etc. Thus, together with the image selections in section 5.1, the total number of epochs for a given target could be less than 30. To analyse the stellar variability with enough detections and better image quality, we restricted ourselves to sky fields with more than 30 observations. About one-third of the observations were exposed three times consecutively, originally intended for image combination. Due to the tracking problem discussed in section 4.3 and section 5.1, some of these multiple exposures would be excluded by the image selection, so we chose not to combine them. For the remaining multiple exposures, we did not count them as individual observations, but we used them as independent data points in the light curve analysis. Then, we rejected the targets that were detected in fewer than 50 per cent of the images. Finally, in the period analysis, we chose the light curves with a significant variability of more than 2.5\(\sigma\).
For our survey data, the time sampling of the light curves is not uniform, so we used the Lomb-Scargle (LS) method for the period search (Lomb, 1976; Scargle, 1982). Light curves with a signal-to-noise ratio (SNR) larger than 5 are considered eligible candidates. We then cross-matched the candidate light curves with the International Variable Star Index (VSX; Watson et al., 2006) and found 3,551 known variables. For candidates that were not in the VSX catalogue, we visually inspected whether their periodicities were significant. For candidates that were significantly variable and periodic, we then checked whether the signal was false. For example, Fig. 10 shows a comparison of a true and a false EA-type variable candidate. The former is an EA-type variable candidate included in the VSX catalogue. The latter shows a similar light curve pattern but turned out to be a false signal affected by an outlier. We manually excluded such false signals and took the true ones as variable candidates. In total, we found 70 new variables.
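For reference, a period search of this kind can be reproduced with astropy's LombScargle implementation, as sketched below; the frequency-grid limits are illustrative choices of ours rather than the survey's actual settings.

```python
import numpy as np
from astropy.timeseries import LombScargle

def best_period(t, mag, mag_err):
    """Lomb-Scargle period search on an unevenly sampled light curve.
    Returns the period of the strongest peak and its false-alarm probability."""
    ls = LombScargle(t, mag, mag_err)
    freq, power = ls.autopower(minimum_frequency=1.0 / 50.0,   # periods < 50 d
                               maximum_frequency=1.0 / 0.05)   # periods > 1.2 h
    peak = np.argmax(power)
    return 1.0 / freq[peak], ls.false_alarm_probability(power[peak])
```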
### New variables
For the newly discovered variable candidates, we visually classified them into different classes by their periods, amplitudes, and light curve patterns. We also obtained their effective temperature, surface gravity, and metallicity from StarHorse (Anders et al., 2019) to help the classification. Moreover, we obtained their B-V colours from the UCAC4 (Zacharias et al., 2013), APASS9 (Henden et al., 2016), NOMAD (Zacharias et al., 2004), and SPM4.0 (Girard et al., 2011) catalogues. However, due to insufficient observations, some candidates were still hard to classify, such as the example in section 5.2; in particular, the lack of observations near minimum luminosity makes classification difficult. Many under-sampled candidates were excluded from the candidate list unless they were known variables.
For this reason, we were only able to classify the candidates into 5 different classes. There are 17 candidates classified as long-period variables (LPV), either because they were observed for less than one period or because we could hardly distinguish a single periodic signal in their light curves. We found 5 candidates with Cepheid-like signals and classified them as pulsating stars (PUL). We identified 4 eclipsing binary (EC) candidates by their periods and light curve patterns. Of the remaining candidates, 24 have small amplitudes (\(<0.1\) mag) and long periods (a few days to a dozen days), and they are likely to be rotational variables (ROT). The final 20 candidates have periods shorter than 2 days, some even shorter than 0.2 days. Most of these candidates have strange phase diagram patterns and we are not sure whether they are real or a result of a lack of data points. Under this circumstance, we classified them as possible rotational variables (pROT). Fig. 11 shows typical phased or time-series light curves of each class.
As mentioned in section 5.2, when we try to classify the light curves, we only consider the ones with 30 epochs or more to ensure there are enough observations for a reliable period. We can confirm some of them that have obvious and distinctive light curve patterns. But for many variables that met the 30 epoch threshold, the absence of critical data points in the light curves might lead to a false period, and an incorrect pattern in their phase diagrams. In such cases we erred on the side of not claiming them as newly discovered variables.
We cross-matched our variable candidates with the VSX catalogue version 2022-10-31. Interestingly, we initially used an earlier version of the VSX catalogue and our count of new candidates was 126; 56 of these were listed in the latest version, which gave us the opportunity of
comparing our classifications with VSX. The classifications agreed well, with disagreements mainly for LPVs and ECs. Some stars identified as rotational variables in the VSX were classified as LPVs by us, because our time coverage is relatively short and we considered all light curves covering less than one period as LPVs. As for the ECs in VSX, we classified some of them as ROTs or pROTs since we did not have enough critical data points to confirm them.
## 6 Data Availability
The AST3-2 data is available through the Chinese Astronomical Data Center (CADC)1,2. The data contains an \(i\)-band catalogue, a light curve catalogue, and preprocessed images.
Footnote 1: [https://cstr.cn/11379.11.160669](https://cstr.cn/11379.11.160669)
Footnote 2: [https://doi.org/10.12149/100669](https://doi.org/10.12149/100669)
The \(i\)-band catalogue contains over 7 million sources with a median limiting magnitude of 17.8 mag. For objects with multiple observations, we adopted their median positions and median magnitudes.
Figure 8: The statistics of the star elongation, FWHM, sky background, and limiting magnitude of the data, which had median values of 1.17, 5\({}^{\rm\sigma}\), 670 ADU, and 17.8 mag, respectively.
Figure 7: Left panel: The magnitude difference between AST3-2 and SkyMapper in the AST3-2 image b160505.000122. Right panel: The difference between the \(i\)-band catalogue of SkyMapper and AST3-2 versus the SkyMapper \(r-i\) magnitude.
Table 1 shows the database schema of the catalogue.
Table 2 details the information in the light curve catalogue. The light curves are presented as time series and the catalogue contains information from every observation after quality filtering. The periodic variables discussed in this work are also presented and listed in Appendix A.
There are also 22576 images in the format of Flexible Image Transport System (FITS) presented in the data set. These are the preprocessed FITS images discussed in section 4 with observing information such as date, exposure time, and WCS coordinates.
## 7 Summary
The second AST3 telescope, AST3-2, was deployed at Dome A, Antarctica in 2015. In 2016, it worked fully automatically on a sky survey for SNe and semi-automatically on an exoplanet search. In this work, we report on the 2016 SN survey data observed between Mar. 23 and May 16. We surveyed 2200 deg\({}^{2}\) of sky in 565 fields, with about 30 visits each at a cadence of a half to a few days. After the raw data was retrieved, we preprocessed the data, performed aperture photometry, calibrated the magnitudes, obtained the light curves of the 565 sky fields, and briefly studied the variability of the light curves. In this paper, we present the data release of the photometric data from the AST3-2 SN survey in 2016. It consists of 22000 scientific images,
\begin{table}
\begin{tabular}{l l} \hline Column Name & Description \\ \hline ID & Source index \\ RA & Right Ascension in J2000 (deg) \\ Dec. & Declination in J2000 (deg) \\ MAG & Median aperture magnitudes (mag) \\ MAGERR & Standard deviation of magnitudes (mag) \\ COUNT & Number of observations \\ \hline \end{tabular}
\end{table}
Table 1: AST3-2 survey catalogue Table Schema
\begin{table}
\begin{tabular}{l l} \hline Column Name & Description \\ \hline DATE & Beginning time of observation in ISO time \\ MJD & Beginning time of observation in Modified Julian Date \\ X & Windowed X position in CCD (pixel) \\ Y & Windowed Y position in CCD (pixel) \\ RA & Right Ascension in J2000 (deg) \\ DEC & Declination in J2000 (deg) \\ MAG & Aperture magnitudes in 5\({}^{\prime\prime}\) radius (mag) \\ MAGERR & Aperture magnitude errors in 5\({}^{\prime\prime}\) radius (mag) \\ FLUX & Flux (ADU) \\ FLUXERR & Flux error (ADU) \\ MAG\_AUTO & Magnitude in Kron aperture (mag) \\ MAGERR\_AUTO & Magnitude error in Kron aperture (mag) \\ BACKGROUND & Background brightness (ADU) \\ FWHM & Full width at half-maximum of Gaussian profile (pixel) \\ ELONGATION & Ratio of semi-major to semi-minor axis \\ A & Semimajor axis length (pixel) \\ B & Semiminor axis length (pixel) \\ THETA & Position angle of semimajor axis (degrees east from north) \\ MAG\_3 & Aperture magnitudes in 3\({}^{\prime\prime}\) radius (mag) \\ MAGERR\_3 & Aperture magnitude errors in 3\({}^{\prime\prime}\) radius (mag) \\ MAG\_7 & Aperture magnitudes in 7\({}^{\prime\prime}\) radius (mag) \\ MAGERR\_7 & Aperture magnitude errors in 7\({}^{\prime\prime}\) radius (mag) \\ \hline \end{tabular}
\end{table}
Table 2: AST3-2 light curve catalogue Table Schema
Figure 10: Upper: An example light curve of an EA-type variable star folded in 2 phases. Lower: An example false signal showing an EA-type variable pattern.
Figure 9: The light curve rms as a function of magnitude. Each data point represents a light curve from the region b160505.000122.
7 million sources brighter than \(i\sim\)18 with photometry, astrometry, and light curves.
The 5\(\sigma\) limiting magnitude of this dataset is 17.8 mag, with 4 mmag precision in the light curves of bright stars. The median FWHM, elongation, and background brightness are 5.0\({}^{\prime\prime}\), 1.17, and 670 ADU, respectively. We found 70 new variable candidates among the \(\sim\) 3,500 variable stars. We checked stellar properties from catalogues such as StarHorse to help classify these new variables into 5 types.
## Acknowledgements
We thank the CHINARE for their great efforts in installing AST3-2, maintaining AST3-2 and PLATO-A, and retrieving data. This work has been supported by the National Natural Science Foundation of China under Grant Nos. 11873010, 11733007, 11673037, 11403057, and 11403048, the Chinese Polar Environment Comprehensive Investigation and Assessment Programmes under grant No. CHINARE2016-02-03, and the National Basic Research Program of China (973 Program) under Grant No. 2013CB834900. PLATO-A is supported by the Australian Antarctic Division. Data publishing is supported by the China National Astronomical Data Center (NADC), CAS Astronomical Data Center and Chinese Virtual Observatory (China-VO).
|
2301.01570 | Ultra-narrowband interference circuits enable low-noise and high-rate
photon counting for InGaAs/InP avalanche photodiodes | Afterpulsing noise in InGaAs/InP single photon avalanche photodiodes (APDs)
is caused by carrier trapping and can be suppressed successfully through
limiting the avalanche charge via sub-nanosecond gating. Detection of faint
avalanches requires an electronic circuit that is able to effectively remove
the gate-induced capacitive response while keeping photon signals intact. Here
we demonstrate a novel ultra-narrowband interference circuit (UNIC) that can
reject the capacitive response by up to 80 dB per stage with little distortion
to avalanche signals. Cascading two UNIC's in a readout circuit, we were able
to enable high count rate of up to 700 MC/s and low afterpulsing of 0.5 % at a
detection efficiency of 25.3 % for 1.25 GHz sinusoidally gated InGaAs/InP APDs.
At -30 degree C, we measured 1 % afterpulsing at a detection efficiency of 21.2
%. | Yuanbin Fan, Tingting Shi, Weijie Ji, Lai Zhou, Yang Ji, Zhiliang Yuan | 2023-01-04T12:44:18Z | http://arxiv.org/abs/2301.01570v2 | Ultra-narrowband interference circuits enable low-noise and high-rate photon counting for InGaAs/InP avalanche photodiodes
###### Abstract
Afterpulsing noise in InGaAs/InP single photon avalanche photodiodes (APDs) is caused by carrier trapping and can be suppressed successfully through limiting the avalanche charge via sub-nanosecond gating. Detection of faint avalanches requires an electronic circuit that is able to effectively remove the gate-induced capacitive response while keeping photon signals intact. Here we demonstrate a novel ultra-narrowband interference circuit (UNIC) that can reject the capacitive response by up to 80 dB per stage with little distortion to avalanche signals. Cascading two UNIC's in a readout circuit, we were able to enable a high count rate of up to 700 MC/s and a low afterpulsing of 0.5 % at a detection efficiency of 25.3 % for 1.25 GHz sinusoidally gated InGaAs/InP APDs. At a temperature of -30 \({}^{\circ}\)C, we measured an afterpulsing probability of 1 % at a detection efficiency of 21.2 %.
## Introduction
Semiconductor avalanche photodiodes (APDs) are versatile for weak light detection, with applications ranging from remote ranging [1, 2], quantum communication [3] and fluorescence lifetime imaging [4] to optical time-domain reflectometry [5, 6]. For practical fiber quantum key distribution (QKD), InGaAs/InP APDs are the detector of choice because they are compact and low cost, and allow cryogenic-free or even room-temperature operation [3]. However, they suffer from spurious afterpulsing arising from carrier trapping by defects in the multiplication layer, especially at high detection efficiencies [7, 8]. To minimise afterpulsing, an APD can be biased on for a sub-nanosecond duration only when a photon arrival is expected. In doing so, the charge per avalanche can be reduced to the order of 10 fC [9, 10, 11], corresponding to a transient current of less than 0.1 mA. Such weak avalanches have to be discriminated through use of a readout circuit that removes the strong capacitive response to the applied gates. Presently, gated InGaAs detectors are capable of counting photons at up to 60% efficiency [12], at a 1 GHz rate [13], and with photon number resolution [14]. Thanks to this success, the gating approach has been applied to traditionally free-running Si devices for performance enhancement [15, 16].
Existing readout circuits include band stop [11, 8, 17] or low-pass [18, 19, 12] filtering under sine-wave gating [11], self-differencing [7, 20], and transient reference cancellation [10, 21]. While simple to implement, frequency filtering [11, 17, 18, 19, 8] distorts the avalanche signals due to its rejection of a sizeable portion of frequency components, thus increasing time jitter and temporal errors in photon registrations [18]. Self-differencing [20] and reference cancellation methods [10] are able to maintain avalanche signal fidelity but may suffer from operational complexities. The former requires a wideband performance for the entire circuitry and thus, inconveniently, an adjustable delayline [9] for frequency alignment, while the latter [10] can be unstable because the transient reference is derived separately from the capacitive response.
Here we propose and experimentally demonstrate a simple, low-distortion ultra-narrowband interference circuit (UNIC) that can suppress the capacitive response for a 1.25 GHz gated InGaAs/InP APD single photon detector. The circuit is an asymmetric radio-frequency (RF) interferometer. One of its arms contains a narrow band pass filter (BPF) based on a surface acoustic wave resonator (SAW) to retrieve the fundamental wave of the gating signal. The filtered wave then interferes destructively with the same frequency component transmitted via the other arm through a coupling module, thereby eliminating the capacitive response. This interference occurs over a narrow band, so the circuit provides a broad and continuous pass band in the frequency domain that maintains the avalanche signal with little distortion. This allows us to achieve ultra-low afterpulsing probabilities and an excellent jitter performance at high detection efficiencies from two InGaAs APDs that exhibit capacitive responses of very different amplitudes.
## Detector characterisation setup
Figure 1(a) shows our single photon characterisation setup for InGaAs APDs. A 1550 nm passively mode-locked laser serves as the light source and provides stable short pulses of 5-10 ps duration at a repetition rate of 10 MHz. The laser output power is monitored by an optical power meter with \(\pm 5\) % uncertainty (EXFO FTB-1750) and its pulse intensity is set by a variable optical attenuator (VOA, EXFO FTB-3500) to 0.1 photon/pulse at the fiber input of the APD under test. The laser also provides a 10 MHz reference to a signal generator (SG) for synthesising a 1.25 GHz sinusoidal wave with up to 27 V voltage swing. In combination with a suitable DC bias, this AC signal periodically gates the APD above its breakdown voltage (\(60-70\) V) to achieve single photon sensitivity with an effective gate width of 150 ps. The APD output is processed by the readout module consisting of two identical 1.25 GHz UNIC's, one 2.5 GHz band stop filter
Figure 1: **(a)** Single-photon characterisation setup for 1.25 GHz sinusoidally gated InGaAs/InP APDs using UNICs for avalanche impulse readout; **(b)** A histogram of the photon detection events measured by the characterisation setup **(a)** on an InGaAs APD detector that was regulated at a temperature of 30 \({}^{\circ}\)C. The photon detection peak exhibits a 30 dB width of 650 ps. AMP: amplifier; APD: avalanche photodiode; BSF: band stop filter; DISC: discriminator; SG: signal generator; TDC: time-to-digital converter; UNIC: ultra-narrowband interference circuit; VOA: variable optical attenuator.
(BSF) with a 10 dB stop band of 100 MHz, and three RF amplifiers (AMPs) of 6 GHz bandwidth. Amplification of the raw APD signals is useful as it prevents weak avalanche signals from falling below the thermal noise after attenuation by the first UNIC. The readout signal is discriminated for avalanches before being fed to a time-to-digital converter (TDC) with a dead time of 2 ns for time-resolved photon counting. Figure 1**(b)** is a typical histogram obtained with this setup.
The APD under test is temperature-regulated using its integrated thermal-electric cooler, which is driven by a temperature controller (Thorlabs TED200C). A source-measure unit (Keithley 2635B) provides the DC bias and simultaneously monitors the current flowing through the APD. In characterising the maximum count rate, we replace the 10 MHz laser with a continuous-wave distributed feedback (DFB) laser, the output of which is carved into a 1.25 GHz, 50 ps pulse train using an intensity modulator. We use a high speed digital oscilloscope to record the detector output and extract the count rate through digital discrimination in software. The oscilloscope method is carefully calibrated in low count rate regimes to be consistent with the hardware-discriminated result from a photon counter (Stanford Research SR400).
The setup is able to measure the dark count probability, afterpulsing probability, detection efficiency, maximum count rate, avalanche charge and time jitter. With no performance screening, two fiber-pigtailed APDs from different manufacturers were used in this study, named APD#1 and APD#2 respectively.
## Ultra-narrowband interference circuit (UNIC)
Figure 2: **(a)** Schematic for the ultranarrow interference circuit (UNIC); **(b)** Transmission spectrum of a heroic UNIC PCB; Inset: Magnified view for region of 1.24 – 1.26 GHz. **(c)** Raw capacitive responses from APD#1 (top) and APD#2 (bottom) under identical 27.0 V V\({}_{p-p}\) gating; **(d)** Recovered avalanche impulses. ATT: attenuator; SAW BPF: surface acoustic wave band pass filter.
Under sub-nanosecond gating, a photon-induced avalanche is an impulse and contains a wide frequency spectrum. In contrast, the capacitive response is periodic and has most of its energy concentrated at the gating frequency and its higher harmonics. This spectral difference allows a frequency-dependent signal process to remove the capacitive response and keep the wide-band impulses intact. Figure 2**(a)** shows a circuit diagram of the UNIC. It is an RF interferometer containing two couplers of a 9:1 power splitting ratio, a \(\pi\)-resistive attenuator (ATT) and a surface acoustic wave (SAW) band pass filter. Two of the ports are terminated by 50 \(\Omega\) resistors. The SAW BPF features a central frequency of 1.25 GHz, a 20-dB passband of 35 MHz, a transmission loss of 3 dB, and a group delay of 34 ns. It extracts the fundamental wave of the gating signal, which then interferes with the APD signal transmitted through the other arm. The attenuation and differential delay are set to enable destructive interference for the 1.25 GHz frequency component at the UNIC output port. The UNIC differential delay (\(\Delta t\)) meets the condition below
\[\Delta t=T_{g}^{SAW}+\delta t=(N+1/2)/f_{g}, \tag{1}\]
where \(T_{g}^{SAW}\) is the group delay of the SAW BPF, \(\delta t\) the delay caused by the track length difference between the two interferometer arms, \(f_{g}=1.25\) GHz the APD gating frequency, and \(N\) is an integer number. For a compact circuit, we choose \(\delta t\) to be less than the half-wave of the gating signal. With the SAW device used, \(N=42\) and \(\delta t=155\) ps. The resulting UNIC unit has a small footprint of \(38\times 15\) mm\({}^{2}\) on printed circuit boards (PCBs).
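The choice of \(N\) and \(\delta t\) follows directly from equation (1); the snippet below simply solves it for the smallest non-negative track delay. The group delay value used in the comment is back-calculated from the quoted \(N=42\) and \(\delta t=155\) ps, since the 34 ns figure above is rounded.

```python
import math

def unic_delay(t_g_saw, f_g=1.25e9):
    """Smallest non-negative track delay dt and integer N satisfying
    equation (1): t_g_saw + dt = (N + 1/2) / f_g."""
    n = math.ceil(t_g_saw * f_g - 0.5)
    dt = (n + 0.5) / f_g - t_g_saw
    return n, dt

# For example, unic_delay(33.845e-9) returns (42, ~155 ps), matching the
# values quoted above for a SAW group delay of roughly 34 ns.
```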
The large \(T_{g}^{SAW}\) brings two additional benefits. Firstly, it substantially increases the PCB manufacturing tolerance, as a 0.5 mm deviation in the RF track length will alter the circuit central frequency by a relative amount of less than \(10^{-4}\). This eliminates the requirement of an adjustable delayline [9] for precise frequency alignment. Secondly, it helps to produce an ultra-narrow band rejection at the designed frequency. Figure 2(b) shows the measured transmission spectrum (S21 parameter) of our heroic UNIC PCB, and its inset expands the frequency section of 1.24 - 1.26 GHz to show the narrowness of the transmission loss dip in the close proximity of the resonance frequency of 1.25 GHz. The dip of the heroic (typical) PCB features a loss of -95 dB (-80 dB), representing a suppression of 80 dB (65 dB) as compared with the background loss for other frequencies under 2 GHz. The dip has a 30 dB linewidth of just 30 kHz, thus ensuring crucial suppression of the APD gating signal without overly distorting the avalanche signals. The background loss of about 14 dB is caused mainly by the 9:1 couplers and can be reduced in future with more balanced splitters.
Cascading two UNIC's enables a stable 100 dB suppression of the primary gating frequency and thus provides a healthy performance redundancy. Their attenuation of the avalanche signal is compensated by using RF amplifiers (Fig. 1**(a)**). The second-order harmonic (2.5 GHz) is suppressed by a band stop filter of conventional LC design. Figure 2**(c)** shows raw outputs from two different APD's under identical sinusoidal gating. Their respective capacitive responses are measured to have amplitudes of 0.42 V and 1.75 V. Despite this four-fold difference, the UNIC's can successfully reject the sinusoidal responses and retrieve avalanches with an excellent signal-to-background ratio, as shown in Fig. 2**(d)**. For APD#2, we just adjusted the gain of the first AMP to avoid amplification saturation and signal distortion.
## Results and discussion
Time-resolved photon counting allows precise extraction of the net photon detection efficiency (\(\eta_{net}\)) and the afterpulsing probability (\(P_{A}\)), which is defined as the total number of afterpulses per photon-induced event. Figure 1**(b)** shows a histogram of avalanche events measured for APD#1 under 10 MHz pulsed excitation of 0.1 photon/pulse. The illuminated peak has a full width at 1/1000 of the maximum (30 dB width) of just 650 ps, which is shorter than the gating period of 800 ps and thus allows low-error clock number assignment that is essential for high speed QKD. The counts at non-illuminated gates arise from detector dark and afterpulse noise,
and their counting rate is 3 orders of magnitude lower than that of the illuminated gate. We extract quantities of \(P_{I}\) and \(P_{NI}\), _i.e._, the respective counting probabilities for each illuminated and non-illuminated gate. With a separate measurement of the detector dark count probability (\(P_{D}\)), we calculate the afterpulsing probability using the standard method [17, 20],
\[P_{A}=\frac{(P_{NI}-P_{D})\cdot R}{P_{I}-P_{NI}}, \tag{2}\]
where \(R=125\) here is the ratio of the gating frequency (1.25 GHz) to the laser illumination (10 MHz). Excluding the dark and afterpulse count probabilities, the net single photon detection efficiency is given by [7]
\[\eta_{net}=\frac{1}{\mu}\text{ln}\frac{1-P_{NI}}{1-P_{I}}, \tag{3}\]
where \(\mu\) is the average incident photon number per illumination pulse.
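Equations (2) and (3) translate directly into a small helper, shown here with the values \(R=125\) and \(\mu=0.1\) used in this measurement; the function and variable names are ours.

```python
import math

def detector_metrics(p_i, p_ni, p_d, mu=0.1, r=125):
    """Afterpulsing probability (eq. 2) and net detection efficiency (eq. 3)
    from the count probabilities of illuminated (p_i) and non-illuminated
    (p_ni) gates and the dark count probability p_d."""
    p_a = (p_ni - p_d) * r / (p_i - p_ni)
    eta_net = math.log((1.0 - p_ni) / (1.0 - p_i)) / mu
    return p_a, eta_net
```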
Figure 3 shows the characterisation results for APD#1 and APD#2. We fixed the amplitude of the 1.25 GHz sinusoidal signal at 27.0 V, and measured the relevant parameters as a function of the applied direct current (DC) bias, but for clarity the results are plotted as a function of the net detection efficiency (\(\eta_{net}\)). Each device was measured at several different temperatures, while APD#2 could only be cooled to -20 \({}^{\circ}\)C due to the incompatibility of its thermal-electric cooler with the temperature control driver. Qualitatively, two devices behave similarly. Both dark count and afterpulsing probabilities increase with photon detection efficiency, and exhibit opposite dependencies on temperature. For both APDs at \(\eta_{\text{net}}=30\) %, the afterpulsing probabilities are less than 2.3 % at their lowest measurement temperatures with corresponding dark count probabilities of \(1.25\times 10^{-6}\) and \(1.6\times 10^{-6}\) for APD#1 (-30 \({}^{\circ}\)C) and APD#2 (-20 \({}^{\circ}\)C), respectively. Moreover, our UNIC-APDs can offer record low afterpulsing probabilities, as summarised for APD#1 in Figure 4. At -30 \({}^{\circ}\)C, APD#1 is able to achieve 5 % and 21.2 % detection efficiencies at 0.5 % and 1.0 % afterpulsing probabilities. At these afterpulsing probabilities, the maximum
Figure 3: Dark count (top) and afterpulse (bottom) probabilities as a function of photon detection efficiency of **(a)** APD#1 and **(b)** APD#2 measured for several different temperatures.
detection efficiency increases with temperature and reaches 25.3 % and 34.2 % at 30 \({}^{\circ}\)C. At a raised afterpulsing probability of 5.9 %, APD#2 reaches a detection efficiency of 50 % at a dark count probability of \(1.1\times 10^{-4}\) and a temperature of 30 \({}^{\circ}\)C.
The maximum count rate is a crucial parameter for a number of applications, for example, high bit rate QKD [3] and rapid phase tracking in twin-field QKD [22, 23]. To determine the maximum count rates, we used a DFB laser pulsed at 1.25 GHz as the illumination source and measured the count rate as a function of photon flux. Figure 5 shows an exemplar result obtained from APD#1 at a temperature of 30 \({}^{\circ}\)C with its detection efficiency set to 25.3 % in the low flux regime. The detector maintains a linear dependence on the incident flux up to count rates exceeding 100 MHz, while a maximum count rate of 700 MHz is obtained in the few-photons-per-pulse regime. We attribute the high count rate to the UNIC's ability to remove the capacitive response and thus allow discrimination of faint avalanches. From the accompanying current measurement,
Figure 4: Temperature dependencies of the photon detection efficiency for APD#1 at the given afterpulsing probabilities of 0.5 % (blue) and 1 % (red).
Figure 5: Maximum count rate (blue) and photocurrent (red) _vs_ incident flux for APD#1.
we extract an average avalanche charge of 38 fC, comparable to the best value of 35 fC [9] obtained with the identical photocurrent measurement method. The ability to detect such weak avalanches ensures low afterpulsing probabilities in our UNIC-APDs. APD#2 was measured to have an avalanche charge similar to that of APD#1. When its efficiency was set to 50 %, APD#2's avalanche charge rose to 65 fC due to the stronger bias applied. Nevertheless, it was still able to achieve a maximum count rate of 600 MHz.
Table 1 compares our results with those of gigahertz-gated detectors equipped with different readout circuits. For impartiality, we list only data measured at a fixed temperature of -30 \({}^{\circ}\)C whenever possible. Here, our UNIC-APD achieved an impressive 1% afterpulsing probability at \(\eta_{\text{net}}=21.2\) %, considerably outperforming most other methods, including filtering [8, 12], self-differencing [7] and reference subtraction [21]. In terms of detection efficiency, our result improves marginally over the previous best [19], which, however, was achieved with the help of a custom variable-width discriminator to mitigate signal distortion by excessive filtering. We attribute the outstanding performance of our detectors to the low-distortion signal processing by UNIC's.
It is useful to compare our UNIC-APDs with detectors deployed in QKD systems. In the QKD system optimised for secure key rates (SKRs) [3], the room-temperature self-differencing detectors featured \(f_{\text{g}}=1\) GHz, \(\eta_{\text{net}}=31\) %, \(P_{\text{A}}=4.4\) % and \(P_{D}=2.25\times 10^{-4}\), and a SKR of 13.72 Mb/s over a 2 dB channel was obtained. Our UNIC-APDs could outperform these detectors in all of these parameters. At 30 \({}^{\circ}\)C and with \(P_{A}=4.4\) %, APD#2 offers a higher efficiency of 49 % and a twice lower dark count probability of \(9.4\times 10^{-5}\), see Fig. 3**b**. Combined with its high count rate capability, UNIC detectors are expected to allow a SKR exceeding 25 Mb/s over the same channel loss. This provides an interesting technological path towards 100 Mb/s QKD via wavelength multiplexing.
## Conclusion
To summarise, we have developed a novel approach of using UNICs to read out avalanche signals from 1.25 GHz sinusoidally gated InGaAs APDs. The UNIC-APDs were characterised to exhibit excellent performance across the temperature range from \(-30^{\circ}\)C to \(+30^{\circ}\)C, and can offer >20 % detection efficiency at an ultra-low afterpulsing probability of 1 %. This performance, together with the circuit's compactness and manufacturing tolerance, gives UNIC-APDs considerable potential in QKD applications.
## Funding
National Natural Science Foundation of China (62250710162).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \(P_{\text{A}}\)(\%) & \(\eta_{\text{net}}\) (\%) & \(P_{\text{D}}\) (gate\({}^{-1}\)) & T (\({}^{\circ}\)C) & \(f_{\text{g}}\) (GHz) & Readout Method \\ \hline \hline This work & 1.0 & 21.2 & 5.4\(\times 10^{-7}\) & -30 & 1.25 & UNIC \\ \hline He _et al_[19] & 1.0 & 20.7 & 7.6\(\times 10^{-7}\) & -30 & 1.00 & low-pass filter + \\ & & & & & & variable width discriminator \\ \hline Tada _et al_[8] & 1.8 & 27.7 & 8\(\times 10^{-7}\) & -35 & 1.27 & band stop filter \\ \hline Fang _et al_[12] & 2.5 & 20 & 1.1\(\times 10^{-6}\) & -30 & 1.25 & low-pass filter \\ \hline Comandar _et al_[7] & 2.9 & 20 & 1.0\(\times 10^{-6}\) & -30 & 1.00 & self-differencing \\ \hline Liang _et al_[21] & 4.5 & 20 & 3.2\(\times 10^{-6}\) & -30 & 1.25 & reference subtraction \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison of sub-nanosecond gated InGaAs detectors using different types of readout circuits.
Disclosures. The authors declare that there are no conflicts of interest related to this article.
Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2305.13483 | Extracting Protocol Format as State Machine via Controlled Static Loop
Analysis | Reverse engineering of protocol message formats is critical for many security
applications. Mainstream techniques use dynamic analysis and inherit its
low-coverage problem -- the inferred message formats only reflect the features
of their inputs. To achieve high coverage, we choose to use static analysis to
infer message formats from the implementation of protocol parsers. In this
work, we focus on a class of extremely challenging protocols whose formats are
described via constraint-enhanced regular expressions and parsed using
finite-state machines. Such state machines are often implemented as complicated
parsing loops, which are inherently difficult to analyze via conventional
static analysis. Our new technique extracts a state machine by regarding each
loop iteration as a state and the dependency between loop iterations as state
transitions. To achieve high, i.e., path-sensitive, precision but avoid path
explosion, the analysis is controlled to merge as many paths as possible based
on carefully-designed rules. The evaluation results show that we can infer a
state machine and, thus, the message formats, in five minutes with over 90%
precision and recall, far better than state of the art. We also applied the
state machines to enhance protocol fuzzers, which are improved by 20% to 230%
in terms of coverage and detect ten more zero-days compared to baselines. | Qingkai Shi, Xiangzhe Xu, Xiangyu Zhang | 2023-05-22T20:58:06Z | http://arxiv.org/abs/2305.13483v4 | # Extracting Protocol Format as State Machine via Controlled Static Loop Analysis
###### Abstract
Reverse engineering of protocol message formats is critical for many security applications. Mainstream techniques use dynamic analysis and inherit its low-coverage problem -- the inferred message formats only reflect the features of their inputs. To achieve high coverage, we choose to use static analysis to infer message formats from the implementation of protocol parsers. In this work, we focus on a class of extremely challenging protocols whose formats are described via constraint-enhanced regular expressions and parsed using finite state machines. Such state machines are often implemented as complicated parsing loops, which are inherently difficult to analyze via conventional static analysis. Our new technique extracts a state machine by regarding each loop iteration as a state and the dependency between loop iterations as state transitions. To achieve high, i.e., path-sensitive, precision but avoid path explosion, the analysis is controlled to merge as many paths as possible based on carefully-designed rules. The evaluation results show that we can infer a state machine and, thus, the message formats, in five minutes with over 90% precision and recall, far better than the state of the art. We also applied the state machines to enhance protocol fuzzers, which are improved by 20% to 230% in terms of coverage and detect ten more zero-days compared to baselines.
## 1 Introduction
In the era of the internet of things, any vulnerability in network protocols may lead to devastating consequences for countless devices that are inter-connected and spread worldwide. For instance, in 2020, a protocol vulnerability led to the largest ever DDoS attack, which targeted Amazon Web Services and affected millions of active users [1]. To ensure protocol security by automated analyses including fuzzing [39, 50], model checking [79, 30], verification [31], and many others, a key prerequisite is to acquire a formal specification of the message formats. However, this is a hard challenge.
There have been many works on automatically inferring the formats of network messages [49, 92, 80, 99]. However, almost all existing works follow a dynamic analysis approach -- either network trace analysis [96, 74, 97, 64, 42, 105, 63] or dynamic program analysis [34, 35, 58, 71, 72, 73, 54]. The former captures online network traces and uses statistical methods including machine learning to cluster the traces into different categories and then performs message alignment and field identification. The latter runs the captured network traces against the protocol implementation and leverages the runtime control or data flows to infer message formats. Despite being useful in many applications, as dynamic analyses, they cannot infer message formats not captured by the input network traces. For instance, a recent work reported a highly precise technique but with coverage lower than 0.1 [105]. This means that it may miss message formats that are important for downstream security analysis.
To infer message formats with high coverage, we use static analysis, which does not rely on any input network traces but can thoroughly analyze a protocol parser. We target open protocols that have publicly available source code. While these protocols often have available specifications, they are usually documented in a natural language that is not machine-readable and contains inconsistencies, ambiguities, and even vulnerabilities [76]. Hence, inferring formal specifications for open protocols deserves dedicated studies. Particularly, we target a category of extremely challenging protocols, namely _regular protocols_, which have two main features. First, the format of a regular protocol can be specified by a constraint-enhanced regular expression (ce-regex), such as \((a|b)^{+}c\) where \(a\), \(b\), and \(c\) are respectively one-, two-, and four-byte variables satisfying the constraints \(a\) mod \(10=4\), \(b>3\), and \((c>16)+c>100\). Compared to a common regular expression (com-regex), the constraints in a ce-regex allow us to specify rich semantics in a network protocol. Note that a com-regex can be regarded as a simple instance of ce-regex. For instance, a com-regex \((a|b)^{+}c\) can be viewed as a ce-regex with the constraints \(a\) = 'a', \(b\) = 'b', and \(c\) = 'c'. Second, the messages of a regular protocol are parsed via a finite state machine. This is common in performance-sensitive and embedded systems for the benefit of low latency [56]. That is, with a state machine,
we can parse a protocol without waiting for the entire message -- whenever receiving a byte, we parse it and record the current state; the recorded state allows us to continue parsing once we receive the next byte.
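To make the setting concrete, the snippet below sketches the kind of byte-at-a-time parsing loop we target, for the simple com-regex \((a|b)^{+}c\) with the literal constraints \(a\)='a', \(b\)='b', \(c\)='c'. It is a toy example of ours, not code taken from any real protocol implementation: each loop iteration consumes one byte, and the `state` variable carries the parsing progress across iterations.

```python
def parse(stream):
    """State-machine parser for (a|b)+c. States: 0 = start,
    1 = saw at least one 'a'/'b', 2 = accepted."""
    state = 0
    for byte in stream:
        if state == 0 and byte in (ord('a'), ord('b')):
            state = 1
        elif state == 1 and byte in (ord('a'), ord('b')):
            state = 1
        elif state == 1 and byte == ord('c'):
            state = 2
        else:
            raise ValueError("malformed message")
    return state == 2

assert parse(b"abac")        # accepted
assert not parse(b"ab")      # incomplete: never reached the final state
```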
It is inherently challenging for static program analysis to infer the formats of a regular protocol from its parser. This is because a state machine for parsing is often implemented as a multi-path loop1 that involves complex path interleaving that mimics the state transitions, but conventional static analysis -- loop unwinding, loop invariant inference, and loop summarization -- cannot handle such loops well. First, loop unwinding unrolls a loop with a limited number of iterations and, hence, will miss program behaviors beyond the unrolling bound. Second, loop invariant techniques compute properties that always hold in each loop iteration. They rely on abstract interpretation for fixed point computation and, to ensure termination, use widening operators that often lead to significant loss of precision [52, 53, 57, 60, 65, 78, 81, 86, 20]. Third, loop summarization techniques precisely infer the input and output relations of a loop by induction. They are good at handling single-path loops [51, 84] or some simple multi-path loops [94, 102]. When used to analyze a multi-path loop that implements a state machine, they either fail to work or have to enumerate all paths in the loop body [100, 101], thus suffering from path explosion. The path explosion problem not only significantly slows down the static analysis but also leads to the explosion of states and state transitions, making the output state machine impractical to use.
Footnote 1: A single-path loop contains only a single path in its loop body. A multi-path loop contains multiple paths in its loop body.
To infer state machines from a parsing loop, our static analysis regards each loop iteration as a state and the dependency between loop iterations as state transitions. It mitigates the path explosion problem with the key insight that a state machine can be significantly compressed by merging states and state transitions. For instance, both state machines in Figure 1 represent the com-regex \((a|b)^{+}c\), but the one in Figure 1(b) is notably compressed. This observation guides us to design a static analysis that merges as many program paths as possible when analyzing an iteration of the parsing loop, producing a super state for the merged program paths, e.g., the state \(F\), instead of many small states for individual program paths, e.g., the states \(B\) and \(C\). As a result, our analysis notably alleviates the path explosion problem and infers highly compressed state machines, e.g., Figure 1(b), even from the implementation of complex state machines, e.g., Figure 1(a). As for state transitions, we record the pre- and post-condition of each loop iteration. These conditions allow us to compute the dependency between two consecutive loop iterations and are regarded as state-transition constraints. As a whole, an inferred state machine represents the message formats and can drive many security analyses.
There are three key differences between our approach and the state of the art. First, we do not assume the availability of network traces, which are required by existing works but could be hard to obtain [80]. Hence, our approach could be a promising alternative when high-quality network traces are not available. Second, different from many existing works that understand message formats by segmenting a message into multiple fields, we understand message formats via the parsing state machine. Such state machines allow us to specify message formats with both high precision and high coverage; as will be illustrated in §3, the existing field-segmentation approaches are not effective when dealing with state-machine-based parsers, thus exhibiting low precision and recall. Third, our work is also different from many previous works [43, 38, 39, 46, 66, 68, 75, 87, 106, 105] that infer system state machines such as the one describing TCP's handshake mechanism. In this work, state machines are used to specify message formats. In summary, we make the following contributions.
* We developed a novel static analysis that mitigates the path-explosion problem in conventional approaches and can infer highly compressed state machines from code.
* We applied the static analysis to reverse engineering message formats. The analysis is highly precise and fast with high coverage. To the best of our knowledge, this is the first static analysis that formulates the problem of message format inference as extracting state machines.
* We implemented our approach, namely StateLifter, and evaluated it on ten protocols from different domains. StateLifter is highly efficient as it can infer a parsing state machine or, equivalently, the message formats in five minutes. StateLifter is also highly precise with a high recall as its inferred state machine can uncover \(\geq 90\%\) protocol formats with \(\leq 10\%\) false ones. By contrast, the baselines often miss \(\geq 50\%\) of possible formats and may produce \(\geq 40\%\) false ones. We use the inferred finite state machines to improve two state-of-the-art protocol fuzzers. The results demonstrate that, with the inferred state machines, the fuzzers can be improved by \(20\%\) to \(230\%\) in terms of coverage. We have discovered \(12\) zero-day vulnerabilities but the baseline fuzzers only find two of them. We also provide case studies of applying our approach to domains beyond network protocols.
Figure 1: Example to illustrate the insight of our approach.
## 2 Problem Scope
We target regular protocols, of which (1) the message formats can be described as constraint-enhanced regular expressions and (2) the messages are parsed via finite state machines (FSM). Formally, considering the equivalence of regular expression and FSM, we define a regular protocol in Definition 2.1 as an FSM enhanced by first-order logic constraints. The problem we address is to infer the FSM from the parser of a regular protocol. An FSM can be either deterministic or not. Since any non-deterministic FSM can be converted to a deterministic one, for simplicity, FSM means non-deterministic FSM by default in this paper. Note that a non-deterministic FSM may contain multiple start states and a state may transition to multiple successor states with the same inputs.
**Definition 2.1**.: _An FSM is a quintuple \((\Sigma,\mathbb{S},\mathbb{S}_{0},\mathbb{F},\delta)\) where_
* \(\Sigma\) _is a set of first-order logic constraints over a byte sequence_ \(\sigma^{n}\) _of length_ \(n\)_. We use_ \(\sigma^{n}_{i}\) _and_ \(\sigma^{n}_{i,j}\) _to represent the_ \((i+1)\)_th byte and a subsequence of_ \(\sigma^{n}\)_, respectively. A typical constraint could be_ \(\sigma^{2}_{1}\sigma^{2}_{0}>10\)_, which means that the value of a two-byte integer with_ \(\sigma^{2}_{1}\) _the most significant byte and_ \(\sigma^{2}_{0}\) _the least is larger than ten. We simply write_ \(\sigma\) _as a shorthand of_ \(\sigma^{1}_{0}\) _and_ \(\sigma^{1}\)_._
* \(\mathbb{S}\) _is a non-empty set of states;_ \(\mathbb{S}_{0}\subseteq\mathbb{S}\) _is a non-empty set of start states;_ \(\mathbb{F}\subseteq\mathbb{S}\) _is a non-empty set of final states._
* \(\delta:\mathbb{S}\times\Sigma\mapsto 2^{\mathbb{S}}\) _is the transition function, meaning that when obtaining a byte sequence satisfying a constraint at a state, we may proceed to some possible states._
By definition, a sequence of transitions from a start state to a final state defines a possible message format. For instance, \(\delta(A\in\mathbb{S}_{0},\sigma^{2}_{1}\sigma^{2}_{0}>10)=\{B\}\) and \(\delta(B,\sigma=5)=\{C\in\mathbb{F}\}\) are two transitions -- one from a start state \(A\) to the state \(B\) with the constraint \(\sigma^{2}_{1}\sigma^{2}_{0}>10\) and the other from the state \(B\) to a final state \(C\) with the constraint \(\sigma=5\). It implies a message format where the first two bytes satisfy \(\sigma^{2}_{1}\sigma^{2}_{0}>10\) and the third byte must be 5. Such an FSM allows us to generate valid messages following the state-transition constraints.
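To make Definition 2.1 concrete, the sketch below (our own illustration, not StateLifter code; all type and function names are made up) encodes these two transitions as data and checks whether a byte sequence follows a start-to-final path:

```
/* A tiny, hand-written model of Definition 2.1: states, constraint-enhanced
 * transitions, and a check that a byte sequence follows one start-to-final
 * path.  All names are illustrative, not part of StateLifter. */
#include <stdio.h>
#include <stdint.h>

typedef int (*constraint_fn)(const uint8_t *bytes);  /* predicate over consumed bytes */

typedef struct {
    int from, to;          /* state ids                               */
    int nbytes;            /* how many bytes this transition consumes */
    constraint_fn check;   /* first-order constraint on those bytes   */
} transition;

/* sigma^2_1 sigma^2_0 > 10: two-byte integer, second byte most significant */
static int first_two_bytes_gt_10(const uint8_t *b) {
    return ((b[1] << 8) | b[0]) > 10;
}
/* sigma = 5: the next byte must be 5 */
static int byte_is_5(const uint8_t *b) { return b[0] == 5; }

enum { A = 0, B = 1, C = 2 };            /* C is the final state */
static const transition fsm[] = {
    { A, B, 2, first_two_bytes_gt_10 },  /* delta(A, sigma^2_1 sigma^2_0 > 10) = {B} */
    { B, C, 1, byte_is_5 },              /* delta(B, sigma = 5) = {C}                */
};

/* Returns 1 if msg of length len is accepted by following transitions from A to C. */
static int accepts(const uint8_t *msg, int len) {
    int state = A, pos = 0;
    while (state != C) {
        int moved = 0;
        for (unsigned i = 0; i < sizeof fsm / sizeof fsm[0]; i++) {
            if (fsm[i].from == state && pos + fsm[i].nbytes <= len &&
                fsm[i].check(msg + pos)) {
                pos += fsm[i].nbytes;
                state = fsm[i].to;
                moved = 1;
                break;
            }
        }
        if (!moved) return 0;   /* no enabled transition: reject            */
    }
    return pos == len;          /* final state reached, all bytes consumed  */
}

int main(void) {
    uint8_t ok[]  = { 0x0B, 0x00, 5 };   /* 0x000B = 11 > 10, then 5 -> accepted */
    uint8_t bad[] = { 0x0A, 0x00, 5 };   /* 10 is not > 10           -> rejected */
    printf("%d %d\n", accepts(ok, 3), accepts(bad, 3));
    return 0;
}
```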
**Why Regular Protocols?** In practice, the formats of a wide range of network protocols, such as HTTP and UDP, can be specified via ce-regex. This is acknowledged by many existing works, such as LeapFrog [48], which verifies protocol equivalence via FSMs, and P4 [33], a domain-specific language developed by the Open Networking Foundation that allows us to specify protocols via FSMs. As an example, we can specify an HTTP request using the following ce-regex:
Method Space URI Space Version CRLF ((General-Header | Request-Header | Entity-Header) CRLF)* CRLF Body?, where each field, e.g., Method, satisfies certain constraints such as Method = 'Get' \(\vee\) Method = 'Post' \(\vee\cdots\).
While a protocol that can be specified by ce-regex does not have to be parsed via an FSM, an FSM parser can greatly improve performance. Graham and Johnson [56] reported that an FSM parser can achieve over an order of magnitude performance improvement, and a hand-written FSM parser could scale better than widely-used implementations such as the Nginx and Apache web servers. The key factor contributing to this improvement is that an FSM parser can parse each byte of a network message as soon as the byte is received, without having to wait for the entire message. As an illustration, consider the FSM parser in Figure 2(a) that parses \((a|b)^{+}c\). Each iteration of the parser processes one byte received by the function _read_next_msg_byte()_. The parser's state, tracked by the variable _state_, allows it to continue parsing once the next byte is received. Hence, we can perform important business logic, such as preparing responses and updating system status, before a full message is received.
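For concreteness, the following is a minimal byte-at-a-time parser for \((a|b)^{+}c\) in the style described above. It is only a sketch and not the code of Figure 2(a); a real parser would obtain bytes from the network via something like _read_next_msg_byte()_ rather than from a string.

```
/* A minimal sketch of a state-machine parser for (a|b)+c: the `state`
 * variable persists across bytes, so each byte can be parsed as soon as it
 * arrives and business logic can run before the full message is received. */
#include <stdio.h>

enum state { START, SEEN_AB, DONE, ERROR };

static enum state step(enum state s, int c) {            /* consume one byte */
    switch (s) {
    case START:
        return (c == 'a' || c == 'b') ? SEEN_AB : ERROR; /* need at least one a|b */
    case SEEN_AB:
        if (c == 'a' || c == 'b') return SEEN_AB;        /* repeat (a|b)     */
        if (c == 'c')             return DONE;           /* closing 'c'      */
        return ERROR;
    default:
        return ERROR;                                    /* no byte expected */
    }
}

int main(void) {
    const char *msg = "abbac";        /* stands in for bytes read one at a time */
    enum state s = START;
    for (const char *p = msg; *p && s != ERROR; p++)
        s = step(s, *p);
    printf("%s\n", s == DONE ? "accepted" : "rejected");
    return 0;
}
```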
Due to this performance merit, regular protocols are frequently utilized in performance-critical systems, particularly in embedded systems that cannot tolerate latency. Typical examples include Mavlink [12] and MQTT [19], both of which are well-established in their respective fields. Mavlink is a standard messaging protocol for communicating with unmanned vehicles and is used in popular robotic systems such as Ardupilot [3] and PX4 [13]. MQTT, on the other hand, is a standard messaging protocol for the internet of things and is employed across various industries, such as automotive, manufacturing, and telecommunications, to name a few. In our evaluation, we include ten regular protocols from different embedded systems and designed for edge computing, musical devices, amateur radio, and many others.
## 3 Limitation of Existing Works
**Network Protocol Reverse Engineering.** Conventional techniques for inferring message formats are either network trace analysis or dynamic program analysis. They only capture the features in a set of input messages and cannot effectively infer message formats for regular protocols.
_(1) Network Trace Analysis (NTA)._ NTA does not analyze the implementation of protocols [64, 63, 64, 74, 96, 105, 23, 42]. Given a set of messages, they use statistical methods including machine learning to identify fields in a message or infer an FSM to represent message formats. The formats inferred by them strongly depend on the shape of input messages. For instance, assume that a valid message format satisfies the regular expression \((a|b)^{+}c\), meaning that a message can start with any combination of '\(a\)' and '\(b\)'. If all messages input to a typical NTA, such as ReverX [23] and NemeSys [63, 64], start with '\(aaa\)', it is very likely to infer an incorrect format starting with '\(aaa\)'. In more complex cases where the format is a ce-regex, NTA cannot precisely infer constraints in the ce-regex, e.g., \(a\) mod \(10=4\), \(b>3\), and \((c\gg 16)+c>100\). This motivates us to use program analysis so that we can precisely infer the constraints by tracking path conditions.
_(2) Dynamic Program Analysis (DPA)._ DPA is more precise than NTA as it tracks data flows in protocols' implementation [34, 35, 58, 71, 72, 73, 54, 54, 59]. However, it shares the same limitation as NTA: the inferred formats also only capture the features of the input messages. Typically, techniques like AutoFormat [71] infer neither repetitive fields nor the constraints that fields must satisfy. For instance, given a set of messages, e.g., { 'aaac', 'abac',... }, which satisfy the ce-regex \((a|b)^{+}c\) where \(a=\) 'a', \(b=\) 'b', and \(c\geq\) 'c', AutoFormat will run these messages against the protocol's implementation but does not extract conditions like \(c\geq\) 'c' from the code and may produce a com-regex \(a(a|b)ac\) as the format. The FSM of this com-regex is shown in Figure 2(c), which is not correct: it cannot parse messages with repetitive fields, and the last transition is not labeled by the correct constraint \(\sigma\geq\) 'c' and, thus, is considered a false transition.
Compared to AutoFormat, Tupni [44] handles parsing loops with the assumption that loops are used to parse repetitive fields in a network message. However, this is not true for regular protocols. For example, Figure 2 shows the implementation of the FSM in Figure 1(a). We can observe that the loop parses all fields in a message, regardless of whether a field is repetitive, e.g., \(a\) and \(b\), or just a single byte, e.g., \(c\). Hence, Tupni will produce a format like \((a|b|c)^{+}\) as the byte \(c\) is also handled in the loop and regarded as a repetitive field. Figure 2(d) shows the corresponding FSM, which does not represent a correct format. For example, in the inferred FSM, the incoming transitions of the final state may have the constraint \(\sigma=\) 'a', but in the correct FSM shown in Figure 1, the incoming transitions of the final state are only constrained by \(\sigma=\) 'c'.
**Static Loop Analysis.** Unlike NTA and DPA which only capture formats in their input messages, we propose to use static analysis to infer all possible formats in the form of FSM. However, we fail to find any practical static analysis that can infer such formats with high precision, recall, and speed.
_(1) Loop Unwinding and Loop Invariant._ Loop unwinding limits the number of loop iterations to a constant \(k\)[25, 88, 89, 103]. When analyzing the parser in Figure 2(a), it will only produce the formats of the first \(k\) bytes as each iteration analyzes one byte. Loop invariant techniques [20, 57, 53, 57, 65, 78, 81, 86] do not infer FSMs, either. They compute constraints that always hold after every loop iteration. For instance, a possible invariant of the loop in Figure 2(a) could be 'a' \(<in<\) 'c'. This is far from our goal of FSM inference.
_(2) Loop Summarization for FSM Inference._ There are some static analyses that infer an FSM from loops [37, 90, 100, 101]. Chen et al. [37] assume that an FSM parsing loop follows a simple pattern, and thus their approach is not practical for real-world protocol parsers. For instance, they regard a program variable as a state variable iff it is both modified in a loop iteration and referenced in future iterations. They assume such state variables have a limited number of values, e.g., the variable _state_ in Figure 2 only has four possible values. This assumption is often violated in a real protocol parser. A typical example is in Figure 4 where the variable _tok_ satisfies their definition of state variables but its value is not enumerable. In addition, this approach suffers from two explosion problems. First, they regard every possible combination of the state variables as a state, but the number of combinations could be explosive. For instance, if we have five state variables and each has five possible values, the resulting FSM will contain \(5^{5}>3000\) states. Second, they depend on symbolic execution, which is well-known to suffer from path explosion. These explosion problems not only make the static analysis unscalable but also significantly blow up FSMs with unnecessary states and transitions. Shimizu et al.'s approach has similar problems [90].
To the best of our knowledge, Proteus [100, 101] is the most recent and systematic approach to FSM inference. It regards every path within the body of a parsing loop as an FSM state and the dependency between two paths executed in two
Figure 2: (a) Implementation of the FSM in Figure 1(a). (b) The FSM inferred by the state-of-the-art static analysis, i.e., Proteus. (c) The FSM that represents the message format inferred by AutoFormat. (d) The FSM that represents the message format inferred by Tupni. (e) The FSM inferred by our approach, which is exactly the same as the compressed FSM in Figure 1(b).
consecutive loop iterations as a state transition. Figure 2(b) shows the FSM inferred by Proteus, where \(s_{i}\) represents a state and also a path that goes through Line \(i\). Each transition from \(s_{i}\) to \(s_{j}\) is labeled by the path condition of \(s_{i}\). It means that if the parser executes the path \(s_{i}\) with its path condition, the next iteration may execute the path \(s_{j}\). For instance, the state transitions from \(s_{6}\) to \(s_{10}\), \(s_{11}\), and \(s_{12}\) are labeled by the condition \(\sigma=\) 'a'. It means that if the loop executes the path \(s_{6}\), of which the path condition is \(\sigma=\) 'a', the loop may execute the path \(s_{10}\), \(s_{11}\), or \(s_{12}\) in the next iteration.
The FSM inferred by Proteus is non-deterministic but correctly represents the format \((a|b)^{+}c\). For instance, the string '\(abbc\)' can be parsed via the transitions \(s_{6}s_{11}s_{16}s_{17}s_{19}\). However, the FSM is far more complex than the one the code intends to implement, i.e., Figure 1(a). We observe that the core problem is that Proteus enumerates all paths in the loop body a priori, but the number of paths is notoriously explosive. Thus, the resulting FSM contains an overwhelming number of states and transitions, and Proteus is impractical due to path explosion.
## 4 Technical Overview
At a high level, we follow a similar idea in terms of regarding a loop iteration as an FSM state and dependency between loop iterations as state transitions. However, unlike Proteus, we do not enumerate all individual paths in the loop but put as many paths as possible into a path set which, as a whole, is regarded as a single FSM state. This design simplifies the output FSM, significantly mitigates path explosion, but incurs new challenges. In what follows, we discuss two examples, one for our basic idea and the other for the detailed designs.
**Basic Idea: Path Set as State.** We perform a precise abstract interpretation over each iteration of the parsing loop. The basic steps of analyzing the code in Figure 2 are shown in Figure 3. In the first iteration of the parsing loop, due to the initial value of the variable _state_, we analyze the paths \(s_{6}\) and \(s_{7}\), depending on the condition: \(\Phi_{E}\equiv\sigma=\) 'a' \(\vee\sigma=\) 'b'. Thus, we create the state \(E\) to represent the path set \(\{s_{6},s_{7}\}\) and label the outgoing edge of \(E\) with the condition \(\Phi_{E}\).
After the first iteration, the value of the variable _state_ is either 'B' or 'C'. Thus, in the second iteration, the abstract interpretation analyzes all paths in \(F=\{s_{10},s_{11},s_{12},s_{15},s_{16},s_{17}\}\) with the path condition \(\Phi_{F}\equiv\sigma=\) 'a' \(\vee\sigma=\) 'b' \(\vee\sigma=\) 'c'. Hence, we create the state \(F\) with the outgoing condition \(\Phi_{F}\).
After the second iteration, the value of the variable _state_ could be 'B', 'C', or 'D'. Thus, in the third iteration, we analyze the paths in \(H=F\cup G,G=\{s_{19}\}\) with the path condition \(\Phi_{H}\equiv\sigma=\) 'a' \(\vee\sigma=\) 'b' \(\vee\sigma=\) 'c'. Hence, we create the state \(H\) with the outgoing condition \(\Phi_{H}\).
Since the state \(H\) overlaps the state \(F\), we split \(H\) into \(F\) and \(G\), just as in the last graph in Figure 3. Since the state \(H\) is split, the original edge from \(F\) to \(H\) is also split accordingly. For instance, the condition from \(F\) to \(G\) is \(\sigma=\) 'c' because, only when we go through the paths \(s_{12},s_{17}\in F\), of which the path condition is \(\sigma=\) 'c', we can reach the path \(s_{19}\in G\). The state \(G\) is a final state because it stands for the path \(s_{19}\) that leaves the parsing loop. Finally, we merge the two \(F\) states, forming a self-cycle as illustrated in Figure 2(e).
**Algorithm Framework.** Algorithm 1 sketches out our approach. Its parameter is the initial program environment \(\mathbb{E}_{init}\), which provides necessary program information such as the initial path condition and the initial value of every program variable before entering a parsing loop. Line 2 analyzes the first iteration of the parsing loop and outputs the analyzed path set as well as the resulting program environment, i.e., \((S,\mathbb{E}_{S})\). Line 3 initializes the FSM and a worklist.
```
 1  Procedure infer_state_machine(\(\mathbb{E}_{init}\)):
 2      \((S,\mathbb{E}_{S})=\) abstract_interpretation(\(\mathbb{E}_{init}\));
 3      Worklist = \(\{(S,\mathbb{E}_{S})\}\); \(FSM=\emptyset\);
 4      while Worklist not empty do
 5          \((S,\mathbb{E}_{S})=\) Worklist.pop();
 6          \((S^{\prime},\mathbb{E}_{S^{\prime}})=\) abstract_interpretation(\(\mathbb{E}_{S}\));
 7          add \((S,\mathbb{E}_{S},S^{\prime})\) into FSM;
            /* splitting operations */
 8          foreach state \(X\) that should be split do
 9              split \(X\) into \(X_{1},X_{2},\ldots\);
10              replace \((X,\mathbb{E}_{X},Y)\in FSM\) with \((X_{i},\mathbb{E}_{X_{i}},Y)\);
11              replace \((Y,\mathbb{E}_{Y},X)\in FSM\) with \((Y,\mathbb{E}_{Y},X_{i})\);
12          assume \(S^{\prime}\) is split into \(S^{\prime}_{i}\), or \(S^{\prime}=S^{\prime}_{i}\) if \(S^{\prime}\) is not split;
13          if \(\nexists(S^{\prime}_{i},\mathbb{E}_{S^{\prime}_{i}},*)\in FSM\), where \(*\) means any state, then
14              add \((S^{\prime}_{i},\mathbb{E}_{S^{\prime}_{i}})\) into Worklist;
            /* merging operations */
15          merge states that represent the same path set into one state;
16          foreach pair of states \((X,Y)\) with multiple transitions \((X,\mathbb{E}_{X_{1}},Y),(X,\mathbb{E}_{X_{2}},Y),\cdots\in FSM\) do
17              \(\mathbb{E}_{X}=\) merge(\(\mathbb{E}_{X_{1}},\mathbb{E}_{X_{2}},\ldots\));
18              replace these transitions with the single transition \((X,\mathbb{E}_{X},Y)\) in FSM;
19              if \(\forall i:\mathbb{E}_{X}\not\equiv\mathbb{E}_{X_{i}}\) then add \((X,\mathbb{E}_{X})\) into Worklist;
20      return FSM;
```
**Algorithm 1** State Machine Inference.
The FSM is represented by a set of state transitions. Each transition is a triple \((S,\mathbb{E}_{S},S^{\prime})\) and describes the analyses of
Figure 3: Basic steps of our approach.
two consecutive iterations of the parsing loop -- one analyzes the path set \(S\) and outputs \(\mathbb{E}_{S}\); the other uses \(\mathbb{E}_{S}\) as the precondition, which lets us analyze the path set \(S^{\prime}\). Each item in the worklist is the analysis result from an iteration of the parsing loop, i.e., \((S,\mathbb{E}_{S})\). We use the worklist to perform a fixed-point computation. That is, whenever we get a new pair \((S,\mathbb{E}_{S})\) that has not been included in the FSM, we add it to the worklist, because using a new \(\mathbb{E}_{S}\) as the initial program environment may result in new analysis results from the parsing loop.
Lines 5-7 continue the analysis of the next loop iteration and add the new state transition to the FSM. Lines 8-11 split a state into multiple sub-states, just like we split the state \(H\) in Figure 3. Lines 12-14 update the worklist by adding \((S^{\prime}_{i},\mathbb{E}_{S^{\prime}_{i}})\) if the pair has not been included in the FSM. Line 15 merges the states that represent the same path set, just as we merged the two \(F\) states in the last example. If the procedure above yields multiple but non-equivalent transitions between a pair of states, e.g., \((X,\mathbb{E}_{X_{i}},Y)\) with \(i\geq 1\), Lines 16-19 merge them into one, \((X,\mathbb{E}_{X},Y)\). If \(\mathbb{E}_{X}\equiv\mathbb{E}_{X_{i}}\) for some \(i\), we do not need to add \((X,\mathbb{E}_{X})\) to the worklist, because the resulting transition \((X,\mathbb{E}_{X_{i}},Y)\) is already in the FSM. Otherwise, \((X,\mathbb{E}_{X})\) should be added to the worklist for further computation.
The details of the merging operation will be discussed later in §5, but it is sound and also guarantees the convergence of a fixed-point computation. That is, while we keep merging transitions from \(X\) to \(Y\) whenever a new transition between the two states is produced, the merging operation ensures that we will not endlessly generate new transitions from \(X\) to \(Y\). Instead, it will converge, i.e., reach a fixed point.
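The following toy sketch illustrates the worklist and fixed-point idea behind Algorithm 1 under a drastic simplification: the program environment is reduced to a single integer range, and merging is a plain interval join. All names are ours; the real analysis tracks a symbolic environment per variable.

```
/* A schematic worklist loop in the spirit of Algorithm 1.  A new
 * (state, environment) pair is pushed only when merging actually changes
 * the stored transition, so the loop stops once the merged environment
 * stabilizes. */
#include <stdio.h>

typedef struct { int lo, hi; } env;                       /* toy environment   */

static env merge(env a, env b) {                          /* stand-in for MR2  */
    env r = { a.lo < b.lo ? a.lo : b.lo, a.hi > b.hi ? a.hi : b.hi };
    return r;
}
static int equal(env a, env b) { return a.lo == b.lo && a.hi == b.hi; }

/* hypothetical result of re-analyzing one loop iteration under `e` */
static env analyze_iteration(env e) {
    env r = { e.lo, e.hi < 3 ? e.hi + 1 : 3 };            /* saturates at 3    */
    return r;
}

int main(void) {
    env worklist[16]; int top = 0;
    env stored = { 0, 0 };                                /* transition X -> Y */
    worklist[top++] = stored;
    while (top > 0) {
        env e = worklist[--top];                          /* pop               */
        env fresh = analyze_iteration(e);                 /* next iteration    */
        env m = merge(stored, fresh);
        if (!equal(m, stored)) {                          /* changed: re-add   */
            stored = m;
            worklist[top++] = m;
        }
    }
    printf("fixed point for X -> Y: [%d, %d]\n", stored.lo, stored.hi);
    return 0;
}
```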
**Controlled State Splitting and Merging.** The previous example shows the power of regarding multiple paths as a single state, which mitigates the path explosion problem and produces compressed FSMs. However, we observe that we cannot arbitrarily put all possible paths in a single state. Otherwise, invalid FSMs may be generated or the algorithm performance may be seriously degraded. Thus, we establish dedicated rules to control state splitting and merging. They are implemented into two key operations in Algorithm 1, namely split and merge. Next, we informally discuss them in three parts: (1) we list the rules of splitting and merging states; (2) we use a detailed example to show how these rules are used; and (3) we briefly justify the rationale behind the rules.
_(1) Splitting and Merging Rules._ We establish the following rules to split a state or merge multiple states.
* Splitting Rule (**SR1**): If two states represent overlapping path sets, we split them into multiple disjoint path sets. This rule has been illustrated in Figure 3 where the state \(H\) is split into \(F\) and \(G\), so that we can reuse the state \(F\).
* Splitting Rule (**SR2**): If a state represents a path set that includes both loop-exiting paths and paths that go back to the loop entry, we split it into a final state containing the exiting paths and a state containing the others. Otherwise, it will be hard to decide if an FSM terminates.
* Splitting Rule (**SR3**): If a state represents a path set where a variable is defined recursively in some paths, these paths should be isolated from others. For example, the paths \(s_{12}\) and \(s_{13}\) in Figure 4 define the variable _tok_ in two manners. The path \(s_{13}\) defines the variable _tok_ recursively based on its previous value. Hence, we put the two paths \(s_{12}\) and \(s_{13}\) in different path sets.
* Merging Rule (**MR1**): Given a set of states that represent the same path set with the same path conditions, we merge them into a single state. This rule has been illustrated in Figure 3 where we merge the two states \(F\).
* Merging Rule (**MR2**): Given a sequence of transitions between a pair of states, we merge them into a single transition either by induction or, if induction fails, via a widening operator from classic abstract interpretation. Let us use the following examples to illustrate.
* Given multiple transitions between a pair of states where the transition constraints form a sequence such as \(\sigma=1\), \(\sigma=2\), \(\sigma=3\),..., we can apply inductive inference [22] to merge them into a single state transition with the constraint \(\sigma=k\), meaning the \(k\)th transition constraint.
* If the transition constraints are \(\sigma=0\), \(\sigma=3\), \(\sigma=1\),..., we cannot inductively merge them as before. Instead, we merge them into \(0\leq\sigma\leq 3\) using the classic widening operator from interval-domain abstract interpretation [40]. This merging operation is sound but may lose precision.
* Merging Rule (**MR3**): To ensure the validity, i.e., a state transition does not refer to inputs consumed by previous transitions, we perform this rule after Algorithm 1 terminates. That is, given two consecutive transitions, e.g., \(\delta(A,\Phi_{A})=\{B\}\) and \(\delta(B,\Phi_{B})=\{C\}\), they are valid by definition iff \(\Phi_{A}\) and \(\Phi_{B}\) respectively constrain two consecutive but disjoint parts of an input stream. If the inputs constrained by \(\Phi_{A}\) and \(\Phi_{B}\) overlap, we either (1) replace the transition constraints with \(\Phi^{\prime}_{A}\) and \(\Phi^{\prime}_{B}\) such that \(\Phi^{\prime}_{A}\wedge\Phi^{\prime}_{B}\equiv\Phi_{A}\wedge\Phi_{B}\) and neither \(\Phi^{\prime}_{A}\) nor \(\Phi^{\prime}_{B}\) refers to previous inputs, or (2) merge the transitions, yielding \(\delta(A,\Phi_{A}\wedge\Phi_{B})=\{C\}\) if \(\Phi^{\prime}_{A}\) and \(\Phi^{\prime}_{B}\) cannot be computed.
**Theorem 1** (Convergence).: _The splitting and merging rules guarantee the convergence of Algorithm 1._
Proof.: Given a parsing loop that contains \(n\) program paths in the loop body, SR1 ensures that we split these paths into at most \(n\) disjoint path sets. Thus, Algorithm 1 generates at most \(n\) states. While we may generate different transitions between a pair of states, Algorithm 1 leverages MR1-2 to merge them by conventional inductive inference [22] or interval-domain abstract interpretation [40], until a fixed point is reached. Thus, we compute at most one fixed-point transition between each pair of states. Since both the inductive inference and abstract interpretation converge, Algorithm 1 converges after generating at most \(n\) states and \(n^{2}\) fixed-point state transitions.
_(2) Detailed Example._ Figure 4 shows a common but complex case in protocol parsers. It looks for a nonempty token between the symbol '^' and the symbol ':'. The token _tok_ is initialized to be an empty string and is reset when the input is '^' (Line 12). If the input character is a letter, the character is appended to _tok_ (Line 13). If the input character is ':', it will check if the token _tok_ is a nonempty keyword (Line 10).
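To give a feel for the code being analyzed, here is a compact C sketch of the kind of parser Figure 4 describes: reset the token on '^', append letters, and check the token when ':' arrives. The variable names, the keyword check, and the exact control flow are our assumptions and differ from the figure in detail.

```
/* A simplified parser in the style of Figure 4: it looks for a nonempty
 * keyword between '^' and ':'.  The keyword list is a stand-in. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

static int iskey(const char *tok) { return strcmp(tok, "abcd") == 0; }  /* stand-in check */

int main(void) {
    const char *msg = "^^^abcd:";
    enum { TOK, ERR } state = TOK;
    char tok[32] = "";                       /* the token collected so far    */
    size_t len = 0;
    for (const char *p = msg; *p; p++) {
        char in = *p;                        /* one input byte per iteration  */
        if (state == ERR) break;             /* leave the loop on error       */
        if (in == ':')                       /* end of token: check it        */
            state = (len > 0 && iskey(tok)) ? TOK : ERR;
        else if (in == '^') { len = 0; tok[0] = '\0'; }          /* reset     */
        else if (isalpha((unsigned char)in) && len + 1 < sizeof tok) {
            tok[len++] = in; tok[len] = '\0';                     /* append    */
        } else state = ERR;                                       /* bad byte  */
        if (in == ':') break;                /* accept/reject decided         */
    }
    printf("%s\n", state == TOK ? "keyword found" : "error");
    return 0;
}
```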
**Figure 4(a).** Since the variable _state_ and the variable _tok_ are respectively initialized as _TOK_ and an empty string, in the first iteration, we analyze the paths \(s_{11}\), \(s_{12}\), and \(s_{13}\) as other paths are infeasible. By SR3, the paths \(s_{12}\) and \(s_{13}\) cannot be in the same state. Thus, we create the states \(A_{0}=\{s_{11},s_{13}\}\) and \(B=\{s_{12}\}\). The outgoing constraint of each state is the path constraint, where we use the symbol \(\tau^{n}\) to represent the input byte stream of length \(n\) consumed before the current loop iteration. In the first iteration, _tok_ is an empty string and denoted as \(\tau^{0}\).
**Figure 4(b).** The first iteration creates two states, \(A_{0}=\{s_{11},s_{13}\}\) and \(B=\{s_{12}\}\). If we follow the state \(B\), i.e., the first iteration runs the path \(s_{12}\), the code only resets the variable _tok_ and, after the reset, it is like we never enter the loop. Hence, in the second iteration, we analyze the paths in \(A_{0}\cup B\) again just as in the first iteration. By MR1, we reuse the state \(A_{0}\) and the state \(B\). That is, we add a self-cycle on the state \(B\) and a transition from the state \(B\) to the state \(A_{0}\).
**Figure 4(c).** If we follow the state \(A_{0}\), i.e., the first iteration runs the paths in \(A_{0}=\{s_{11},s_{13}\}\), the second iteration will analyze the paths in \(C=\{s_{10},s_{11},s_{12},s_{13},s_{16}\}\). Thus, we create the state \(C\) and add the transition from \(A_{0}\) to \(C\). The outgoing transition of \(C\) is the path condition of all paths in \(C\).
**Figure 4(d).** By SR1 and SR2, we split the state \(C\) into four sub-states \(A_{1}\), \(B\), \(D=\{s_{16}\}\), and \(E=\{s_{10}\}\). We reuse the state
Figure 4: A detailed example. (a)-(h) The steps of FSM inference.
\(B\) but create a new state \(A_{1}\) because the states \(A_{0}\) and \(A_{1}\) have different post-conditions. We then replace the state \(C\) with the four sub-states. The transition constraint from the state \(A_{0}\) to each sub-state is the original constraint from the state \(A_{0}\) to the state \(C\). The outgoing constraint of each sub-state is the constraint of the paths represented by the sub-state. For instance, for the sub-state \(D=\{s_{16}\}\), its path condition is \(state=\text{ERR}\) where the value of _state_ is \(\text{ite}(\tau^{1}=\text{':'},\text{ERR},\text{TOK})\), meaning that if the previous input is ':', \(state=\text{ERR}\) and, otherwise, \(state=\text{TOK}\). Thus, the outgoing constraint of \(D\) is \(\tau^{1}=\text{':'}\).
The incoming and outgoing constraints of a state can be cross-simplified. For instance, the outgoing constraint of \(E\) includes \(\tau^{1}\neq\text{':'}\). This means that the incoming constraint of \(E\) satisfies \(\sigma\neq\text{':'}\) and thus can be simplified to 'a' \(\leq\sigma\leq\) 'z'.
**Figure 4(e).** We continue a similar analysis of the next iteration from the states \(D\), \(E\), or \(A_{1}\) because they have undetermined target states. From the state \(D=\{s_{16}\}\), since the path \(s_{16}\) exits the loop, we stop the analysis and mark the state \(D\) as a final state. Similarly, we can find the final state \(F\).
**Figure 4(f) and Figure 4(g).** If we continue the analysis from the state \(A_{1}\), we will find a repetitive state sequence, i.e., \(A_{0},A_{1},A_{2}\), and so on. We use MR2 to inductively merge them into \(A_{k}\) as shown in Figure 4(g). The merged state \(A_{k}\) means the \((k+1)\)th state \(A\). Thus, the self-cycle on \(A_{k}\) loops \(k\) times and each time consumes an input satisfying 'a' \(\leq\sigma\leq\) 'z'. For state transitions, e.g., the one from \(E\) to \(F\), since the constraints between them in Figure 4(f) form the sequence \(\sigma=\text{':'}\wedge\text{iskey}(\tau^{1})\), \(\sigma=\text{':'}\wedge\text{iskey}(\tau^{2})\), \(\sigma=\text{':'}\wedge\text{iskey}(\tau^{3})\), and so on, the transition constraint from \(E\) to \(F\) in Figure 4(g) is summarized as \(\sigma=\text{':'}\wedge\text{iskey}(\tau^{k+1})\).
**Figure 4(h).** To ensure that a state transition does not refer to symbols in previous transitions, we merge the incoming and outgoing constraints of the states \(A_{k}\) and \(E\) by MR3, yielding the final FSM in Figure 4(h). The inferred FSM is correct. For instance, given a string "^^^abcd:" where we assume "abcd" is a keyword, the FSM can parse it by the transitions \(BBBA_{k}EF\). That is, the transitions \(BBBA_{k}\) consume the prefix "^^^", and the transition from \(A_{k}\) to \(E\) consumes the keyword "abcd" by instantiating the induction variable \(k=4\). Finally, the transition from \(E\) to \(F\) consumes the colon.
_(3) Consequences of Violating Rules._ As stated in the proof of Theorem 1, SR1, MR1, and MR2 contribute to the convergence of the algorithm. Violating these rules may prevent the algorithm from terminating. SR2 and MR3 ensure the validity of an FSM by definition. That is, SR2 distinguishes final states from other states, and MR3 ensures that a state transition does not refer to symbols in previous transitions.
In particular, SR3 facilitates the use of induction in MR2. Figure 5 shows the case where we do not use SR3 and, thus, merge the states \(A\) and \(B\). In this case, after each iteration, the variable _tok_ may be either reset or recursively defined, depending on whether the previous input is '^'. As a result, the value sequence of the variable _tok_, as shown in Figure 5, cannot be summarized as an expression parameterized by an induction variable \(k\). According to MR2, to merge such repetitive states, we have to rely on widening operators, which are sound but imprecise [40]. Recall that, in Figure 4(f) where SR3 is used, the value of _tok_ is a sequence of \(\tau^{1}\), \(\tau^{2}\), \(\tau^{3}\), and so on. Thus, we can precisely summarize its value as \(\tau^{k+1}\) via MR2.
## 5 Formalizing the Approach
In this section, the notation \(a[b/c]\) returns the expression \(a\) after using \(b\) to replace all occurrences of \(c\) in \(a\). We use \(\text{sat}(\phi)\) and \(\text{unsat}(\phi)\) to mean that the constraint \(\phi\) is satisfiable and unsatisfiable, respectively. An \(\text{ite}(v_{1},v_{2},v_{3})\) formula returns \(v_{2}\) and \(v_{3}\) if the condition \(v_{1}\) is true and false, respectively. We use a simplification procedure [47], \(\phi_{1}^{\prime}=\text{simplify}(\phi_{1},\phi_{2})\), to simplify \(\phi_{1}\) while preserving the equivalence of \(\phi_{1}\) and \(\phi_{1}^{\prime}\) under \(\phi_{2}\), i.e., \(\phi_{2}\Rightarrow(\phi_{1}\equiv\phi_{1}^{\prime})\).
**Abstract Language.** For clarity, we use a C-like language in Figure 6 to model a parser that implements an FSM via a double loop. We use a do-while loop as it is a general form of loops with initialization, i.e., \(\mathcal{S};\textbf{while}(1)\{\mathcal{S}_{i}\}\). The statements could be assignments, binary operations, read statements that read the next byte of a message to parse, exit statements that exit the loop, and branching statements that are uniquely identified by the identifier \(\kappa\). To use our approach, users manually annotate the statement reading the inputs, e.g., the read function. The rest is fully automated. Although we do not include function calls or returns for simplicity, our system is interprocedural as a call statement is equivalent to assignments from the actual parameters to the formals, and a return statement is an assignment from the return value to its receiver. The language abstracts away pointer operations because the pointer analysis is not our technical contribution and, in the implementation, we follow existing works to resolve pointer relations [103]. We do not assume nested loops for simplicity as we focus on the outermost loop that implements the FSM. In practice, we observe that inner loops often serve for parsing repetitive fields in a network message rather than implementing the FSM. Hence, in the implementation, we follow traditional techniques to analyze inner loops [84, 51].
**Abstract Domain.** An abstract value of a variable represents all possible concrete values that may be assigned to the variable during program execution. The abstract domain specifies
Figure 5: Violation of SR3.
the limited forms of an abstract value. In our analysis, the abstract value of a variable \(v\) is denoted as \(\tilde{v}\) and defined in Figure 7. An abstract value could be a constant value \(c\) or a byte stream of length \(k\), i.e., \(\sigma^{k}\) or \(\tau^{k}\), which respectively represent the input bytes read in the current loop iteration and in previous iterations. The symbols \(\tau^{n}_{i}\), \(\tau^{n}_{i-j}\), and \(\tau\) are defined similarly as \(\sigma^{n}_{i}\), \(\sigma^{n}_{i-j}\), and \(\sigma\). An abstract value can also be a first-order logic formula over other abstract values. To ease the explanation, we only support binary and ite formulas. Especially, we also include an interval abstract value to mean a value between two constants. As discussed later in Algorithm 3, such interval abstract values allow our analysis to fall back to conventional interval-domain abstract interpretation [40], in order to guarantee convergence and soundness.
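One possible concrete encoding of these abstract values, assuming a tagged union (the tags and fields below are our own naming, not StateLifter's internal representation):

```
/* A rough encoding of the abstract values in Figure 7 as a tagged union. */
#include <stdio.h>

typedef enum {
    AV_CONST,      /* a constant c                                   */
    AV_SIGMA,      /* sigma^k: bytes read in the current iteration   */
    AV_TAU,        /* tau^k:   bytes read in previous iterations     */
    AV_BINOP,      /* v1 op v2                                       */
    AV_ITE,        /* ite(v1, v2, v3)                                */
    AV_INTERVAL    /* int(c_min, c_max), the fallback domain         */
} av_kind;

typedef struct abs_val {
    av_kind kind;
    long c, c_min, c_max;              /* AV_CONST / AV_INTERVAL          */
    int k, i, j;                       /* stream length / slice [i, j]    */
    char op;                           /* AV_BINOP operator               */
    struct abs_val *v1, *v2, *v3;      /* operands                        */
} abs_val;

static void print_av(const abs_val *v) {
    switch (v->kind) {
    case AV_CONST:    printf("%ld", v->c);                           break;
    case AV_SIGMA:    printf("sigma^%d[%d..%d]", v->k, v->i, v->j);  break;
    case AV_TAU:      printf("tau^%d[%d..%d]",   v->k, v->i, v->j);  break;
    case AV_BINOP:    putchar('('); print_av(v->v1); printf(" %c ", v->op);
                      print_av(v->v2); putchar(')');                 break;
    case AV_ITE:      printf("ite("); print_av(v->v1); printf(", ");
                      print_av(v->v2); printf(", "); print_av(v->v3);
                      putchar(')');                                  break;
    case AV_INTERVAL: printf("int(%ld, %ld)", v->c_min, v->c_max);   break;
    }
}

int main(void) {
    /* sigma + 1, the kind of value the analysis might track for a variable */
    abs_val s   = { .kind = AV_SIGMA, .k = 1, .i = 0, .j = 0 };
    abs_val one = { .kind = AV_CONST, .c = 1 };
    abs_val sum = { .kind = AV_BINOP, .op = '+', .v1 = &s, .v2 = &one };
    print_av(&sum); putchar('\n');
    return 0;
}
```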
**Abstract Interpretation.** The abstract interpretation is described as transfer functions of each program statement. Each transfer function updates the program environment \(\mathbb{E}=(\mathbb{I},\phi)\). Given the set \(\mathbb{V}\) of program variables and the set \(\widetilde{\mathbb{V}}\) of abstract values, \(\mathbb{I}:\mathbb{V}\mapsto\widetilde{\mathbb{V}}\) maps a variable to its abstract value. The constraint \(\phi\) captures the skeletal path constraint, which stands for a path set executed in a single loop iteration. We say \(\phi\) is a skeletal path constraint because it is in a form of conjunction or disjunction over the symbols \(\kappa\) or \(\neg\kappa\), e.g., \(\kappa_{1}\wedge(\kappa_{2}\vee\neg\kappa_{2})\), where each symbol \(\kappa\) uniquely identifies a branch and is not evaluated to its branching condition. The real path constraint is denoted by the uppercase Greek letter \(\Phi=\phi[\mathbb{I}(\kappa)/\kappa]\) where each \(\kappa\) is replaced by its abstract value. We list the transfer functions in Figure 8, which describe how we analyze a loop iteration, i.e., the procedure abstract_interpretation in Algorithm 1. In these transfer functions, we use \(\mathbb{E}\vdash\mathcal{S}\cdot\mathbb{E}^{\prime}\) to describe the environment before and after a statement.
To initialize the analysis of a loop iteration, we set the initial environment to \(\mathbb{E}=(\mathbb{I},\phi)\), which is obtained from the previous iteration, and assume that abstract values in \(\mathbb{I}\) use the symbols \(\tau^{k}_{i}\) and \(\sigma^{k^{\prime}}_{i^{\prime}}\). This means that the previous iteration depends on an input stream of length \(k+k^{\prime}\), in which \(k\) bytes come from iterations before the last iteration and \(k^{\prime}\) bytes come from the last iteration. For the current iteration, all \(k+k^{\prime}\) bytes are from previous iterations. Hence, we rewrite all \(\sigma\) to \(\tau\).
The rules for assignment, binary operation, read, and exit are straightforward: they update the abstract value of a variable. The sequencing rule says that, for two consecutive statements, we analyze them in order. The branching rule states how we handle conditional statements. In the branching rule, \((\mathbb{I},\phi)\) represents the environment before a branching statement. \((\mathbb{I}_{1},\phi\wedge\phi_{1})\) and \((\mathbb{I}_{2},\phi\wedge\phi_{2})\) are the program environments we respectively infer from the two branches. At the joining point, we either use the analysis results of one branch if the other branch is infeasible, or merge the program environments from both branches. When merging results from both branches, variables assigned different values from the two branches are merged via the _ite_ operator. Path constraints are merged via disjunction with the common prefix pulled out.
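The following toy sketch shows the join performed by the branching rule: a variable assigned different values on the two branches receives an _ite_ value, and the two path constraints are joined by disjunction. Plain strings stand in for the symbolic expressions a real implementation would build, and all names are ours.

```
/* A toy join at a branching point: build ite(kappa, then_value, else_value)
 * and disjoin the two path constraints. */
#include <stdio.h>
#include <string.h>

typedef struct { char value[128]; char path[128]; } env;  /* (I, phi) for one variable */

static env join(const char *kappa, env then_e, env else_e) {
    env out;
    if (strcmp(then_e.value, else_e.value) == 0)
        snprintf(out.value, sizeof out.value, "%s", then_e.value);
    else  /* different values from the two branches: merge with ite */
        snprintf(out.value, sizeof out.value, "ite(%s, %s, %s)",
                 kappa, then_e.value, else_e.value);
    snprintf(out.path, sizeof out.path, "(%s) || (%s)", then_e.path, else_e.path);
    return out;
}

int main(void) {
    env t = { "ERR", "k1" };          /* then-branch: state = ERR, path k1     */
    env e = { "TOK", "!k1" };         /* else-branch: state = TOK, path not k1 */
    env j = join("k1", t, e);
    printf("value: %s\npath:  %s\n", j.value, j.path);
    return 0;
}
```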
**Abstract Finite State Machine.** We use a graph structure to represent an FSM. That is, an FSM is a set of labeled edges. Each edge is a triple \((S,\mathbb{E}_{S},S^{\prime})\) where \(\mathbb{E}_{S}=(\mathbb{I}_{S},\phi_{S})\), meaning a transition from the state \(S\) to the state \(S^{\prime}\) with the transition constraint \(\phi_{S}[\mathbb{I}_{S}(\kappa)/\kappa]\). In the triple, \(\mathbb{E}_{S}\) is the resulting program environment after analyzing the path set \(S\) in a loop iteration. Next, we formally describe the other two key procedures, i.e., split and merge, in Algorithm 1.
_(1) Splitting Rules (SR1-3)._ Splitting a state consists of two steps -- splitting the path set the state represents and recomputing its outgoing program environment.
SR1 splits two overlapping path sets \(S_{1}\) and \(S_{2}\) into at most three subsets, respectively represented by \(\phi_{S_{1}}\wedge\neg\phi_{S_{2}}\), which means paths in the first set but not in the second, \(\phi_{S_{1}}\wedge\phi_{S_{2}}\), which means paths shared by the two sets, and \(\neg\phi_{S_{1}}\wedge\phi_{S_{2}}\), which means paths not in the first set but in the second. We create a state for each of the three skeletal constraints if it is satisfiable. SR2 and SR3 isolate some special paths from a path set. Given the path set \(S_{1}\) and the paths \(S_{2}\) to isolate, we create two states represented by \(\phi_{S_{1}}\wedge\neg\phi_{S_{2}}\) and \(\phi_{S_{1}}\wedge\phi_{S_{2}}\), respectively.
After a state is split into multiple sub-states, we recompute the outgoing program environment for each sub-state. Algorithm 2 and Figure 9 show the splitting procedure, where we assume we split the state \(S_{2}\) into multiple sub-states \(S_{2i}\) and split its outgoing transition \((S_{2},\mathbb{E}_{S_{2}},S_{3})\) into \((S_{2i},\mathbb{E}_{S_{2i}},S_{3})\). The splitting procedure consists of two steps. First, Line 4 in Algorithm 2 computes the real path constraint according to the skeletal path constraint of each sub-state. Second, Line 5
Figure 6: Language of target programs.
Figure 7: Abstract values.
recomputes each abstract value under the new path constraint. Basically, this step removes values from unreachable branches. For instance, assume \(\mathbb{I}_{S_{2}}(v)=\text{ite}(\tilde{v}_{1},\tilde{v}_{2},\tilde{v}_{3})\), meaning that after analyzing the path set \(S_{2}\), the abstract value of the variable \(v\) is either \(\tilde{v}_{2}\) or \(\tilde{v}_{3}\), depending on whether the branching condition \(\tilde{v}_{1}\) is true. If the paths in the subset \(S_{21}\) ensure \(\tilde{v}_{1}=\text{true}\), we then rewrite the abstract value as \(\mathbb{I}_{S_{21}}(v)=\tilde{v}_{2}\).
_(2) Merging Rules (MR1)_. MR1 merges two equivalent states. Lines 13-14 of Algorithm 1 implement this rule. We show the idea in Figure 10, where we assume \(S^{\prime}_{1}\equiv S_{1}\) and \(\mathbb{E}_{S^{\prime}_{1}}\equiv\mathbb{E}_{S_{1}}\). In this case, we merge \(S_{1}\) and \(S^{\prime}_{1}\), but do not compute the next states using \(\mathbb{E}_{S^{\prime}_{1}}\) because we have already computed them using its equivalent counterpart \(\mathbb{E}_{S_{1}}\). Thus, Algorithm 1 does not add \((S^{\prime}_{1},\mathbb{E}_{S^{\prime}_{1}})\) to the worklist at Lines 13-14.
_(3) Merging Rules (MR2)_. MR2 merges two states that represent the same path sets but have non-equivalent outgoing program environments. Let us consider the example in Figure 11 to understand how Algorithm 1 deals with this case. Figure 11(a) is the same as Figure 10(b) except that we assume \(\mathbb{E}_{S^{\prime}_{1}}\not\equiv\mathbb{E}_{S_{1}}\). In this situation, we add \((S_{1},\mathbb{E}_{S^{\prime}_{1}})\) to the worklist (see Lines 13-14 in Algorithm 1). When \((S_{1},\mathbb{E}_{S^{\prime}_{1}})\) is popped out, we will perform abstract interpretation using \(\mathbb{E}_{S^{\prime}_{1}}\) as the initial program environment (see Lines 5-6 in Algorithm 1). Assume the abstract interpretation produces \((S^{\prime}_{2},\mathbb{E}_{S^{\prime}_{2}})\) where \(S^{\prime}_{2}\equiv S_{2}\) as illustrated in Figure 11(b). In Figure 11(c), we merge \(S_{2}\) and \(S^{\prime}_{2}\), yielding multiple non-equivalent transitions between \(S_{1}\) and \(S_{2}\). Lines 16-19 in Algorithm 1 merge such transitions, yielding Figure 11(d). If the merged environment, i.e., \(\text{merge}(\mathbb{E}_{S_{1}},\mathbb{E}_{S^{\prime}_{1}})\) equals \(\mathbb{E}_{S_{1}}\) or \(\mathbb{E}_{S^{\prime}_{1}}\), we do not add \((S_{1},\text{merge}(\mathbb{E}_{S_{1}},\mathbb{E}_{S^{\prime}_{1}}))\) to the worklist because the resulting transition \((S_{1},\mathbb{E}_{S_{1}},S_{2})\) or \((S_{1},\mathbb{E}_{S^{\prime}_{1}},S_{2})\) has been in the FSM. Otherwise, the pair \((S_{1},\text{merge}(\mathbb{E}_{S_{1}},\mathbb{E}_{S^{\prime}_{1}}))\) will be added to the worklist for further computation.
A naive merging procedure is shown in Algorithm 3, which utilizes the traditional interval abstract domain to guarantee soundness and convergence. Lines 3-4 convert each abstract value to an interval, \(\text{int}(c_{min},\,c_{max})\), by solving two optimization problems via an SMT solver. Basically, solving the optimization problems respectively produces the minimum and maximum solutions, \(c_{min}\) and \(c_{max}\), of the abstract value \(\tilde{v}\) with respect to the path constraint. Lines 5-6 merge the interval values via the traditional widening operator [40]. As proved by Cousot and Cousot [40], the widening operator ensures convergence and soundness, which, in our context, means that it ensures the convergence and soundness of computing a fixed-point transition between two states. Nonetheless, the naive merging procedure could result in a significant loss of precision because both the computation of intervals (Lines 3-4) and the merging of intervals (Lines 5-6) over-approximate each abstract value. Thus, before using the interval abstract domain to merge transitions, we always try an induction-based solution, which is discussed below.
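The sketch below illustrates this interval fallback, assuming each abstract value has already been reduced to concrete bounds (which the real analysis obtains via SMT-based min/max queries). The hull and widening functions are the standard interval-domain operators; the sample values mirror the \(\sigma=0,3,1\) example from MR2.

```
/* Interval fallback in the spirit of Algorithm 3: values become
 * int(c_min, c_max), are joined by the interval hull, and widening is
 * applied when a bound keeps growing, which forces termination. */
#include <stdio.h>
#include <limits.h>

typedef struct { long lo, hi; } interval;

static interval hull(interval a, interval b) {            /* join of two intervals */
    interval r = { a.lo < b.lo ? a.lo : b.lo, a.hi > b.hi ? a.hi : b.hi };
    return r;
}
static interval widen(interval old, interval grown) {     /* classic widening */
    interval r = { grown.lo < old.lo ? LONG_MIN : old.lo,
                   grown.hi > old.hi ? LONG_MAX : old.hi };
    return r;
}

int main(void) {
    long samples[] = { 0, 3, 1 };              /* sigma = 0, sigma = 3, sigma = 1 */
    interval merged = { samples[0], samples[0] };
    for (int i = 1; i < 3; i++) {
        interval next = { samples[i], samples[i] };
        merged = hull(merged, next);
    }
    printf("hull:  [%ld, %ld]\n", merged.lo, merged.hi);   /* 0 <= sigma <= 3      */
    interval w = widen(merged, (interval){ 0, 100 });      /* bound keeps growing  */
    printf("widen: [%ld, %ld]\n", w.lo, w.hi);             /* jumps to +infinity   */
    return 0;
}
```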
The induction-based solution is sound and does not lose precision [22]. In the solution, we delay the transition
Figure 8: Transfer functions as inference rules for analyzing a loop iteration.
Figure 10: MR1. (a) Before merging. (b) After merging.
Figure 9: SR1-3. (a) Before splitting. (b) After splitting.
merging operation until the number of transitions between a pair of states reaches a predefined constant. For instance, in Figure 12(a), we do not merge transitions until the number of transitions between each pair reaches 3. Given a list of transitions between a pair of states, we can then perform the inductive inference in two steps: guess and check. For instance, in Figure 12(a), assume \(\mathbb{I}_{S_{11}}(v)=\sigma+1\), \(\mathbb{I}_{S_{12}}(v)=\sigma+2\), and \(\mathbb{I}_{S_{13}}(v)=\sigma+3\). As shown in Figure 12(b), we then inductively "guess" the \(k\)th abstract value of the variable \(v\) as \(\mathbb{I}_{S_{1k}}(v)=\sigma+k\). To check the correctness of \(\mathbb{I}_{S_{1k}}(v)=\sigma+k\), as shown in Figure 12(c), we rerun the abstract interpretation using \(\mathbb{E}_{S_{1k}}\) as the initial program environment; if, in the resulting program environment, the abstract value of \(v\) is \(\sigma+(k+1)\), the summarized value \(\mathbb{I}_{S_{1k}}(v)=\sigma+k\) is correct. This guess-and-check procedure follows the procedure of mathematical induction [85] and, thus, is correct.
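As a purely numeric illustration of this guess-and-check step (the real check is symbolic), the sketch below guesses a closed form from the first few observed values and validates it against one more iteration; the function and values are hypothetical.

```
/* Toy guess-and-check: from the first observed values sigma+1, sigma+2,
 * sigma+3 (with sigma fixed to a sample byte), guess v_k and check it. */
#include <stdio.h>

static long run_iteration(long sigma, long k) {   /* hypothetical k-th observed value */
    return sigma + k;                             /* the loop adds 1 each time        */
}

int main(void) {
    long sigma = 7;                               /* a sample input byte              */
    long v1 = run_iteration(sigma, 1), v2 = run_iteration(sigma, 2),
         v3 = run_iteration(sigma, 3);

    long d = v2 - v1;                             /* guess: v_k = v1 + (k - 1) * d    */
    int guess_fits = (v3 == v1 + 2 * d);
    long predicted_v4 = v1 + 3 * d;
    int check_passes = (run_iteration(sigma, 4) == predicted_v4);

    printf("guess v_k = sigma + k fits first samples: %d\n", guess_fits);
    printf("check against one more iteration:        %d\n", check_passes);
    return 0;
}
```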
_(4) Merging Rules (MR3)_. MR3 ensures the validity of FSM by eliminating state transitions that refer to inputs consumed by previous transitions. It is performed after an FSM is produced by Algorithm 1. Algorithm 4 and Figure 13 demonstrate how it works on two transitions, one is from the state \(S_{1}\) to the state \(S_{2}\) and consumes \(k\) bytes, i.e., \(\sigma^{k}\); the other is from the state \(S_{2}\) to the state \(S_{3}\), consumes \(l\) bytes, i.e., \(\sigma^{l}\), and, meanwhile, constrains \(m\) bytes consumed by previous transitions, i.e., \(\tau^{m}\). First, for conjunctive constraints, e.g., \(g(\sigma^{l})\wedge h(\tau^{m})\) in Figure 13(a), we only need to move the constraint \(h(\tau^{m})\) to the previous transition and perform constraint rewriting. Such rewriting does not change the semantics of the transition constraint but just lets it follow the definitions of \(\sigma\) and \(\tau\). Second, for disjunctive constraints, e.g., \(g(\sigma^{l})\lor h(\tau^{m})\) in Figure 13(c), we split the state \(S_{2}\) to eliminate the disjunctive operator as shown in Figure 13(d) and then use the method for conjunction discussed above. Third, for constraints that cannot isolate \(\tau\)-related sub-formulas via disjunction or conjunction, as shown in Figure 13(f), we merge the transitions into one.
```
 1  Procedure merge(\((S_{1},\mathbb{E}_{S_{1}},S_{2})\), \((S_{2},\mathbb{E}_{S_{2}},S_{3})\)):
 2      assume \(\mathbb{E}_{S_{1}}=(\mathbb{I}_{S_{1}},\phi_{S_{1}})\) and \(\mathbb{E}_{S_{2}}=(\mathbb{I}_{S_{2}},\phi_{S_{2}})\);
 3      let \(\Phi_{S_{1}}=\phi_{S_{1}}[\mathbb{I}_{S_{1}}(\kappa)/\kappa]\); \(\Phi_{S_{2}}=\phi_{S_{2}}[\mathbb{I}_{S_{2}}(\kappa)/\kappa]\);
 4      let \(\Phi_{S_{1}}=\text{simplify}(\Phi_{S_{1}},\Phi_{S_{2}})\); \(\Phi_{S_{2}}=\text{simplify}(\Phi_{S_{2}},\Phi_{S_{1}})\);
 5      if \(\Phi_{S_{2}}\) does not use any symbol \(\tau\) then return;
 6      assume \(\Phi_{S_{1}}=f(\sigma^{k})\);
 7      if \(\Phi_{S_{2}}=g(\sigma^{l})\wedge h(\tau^{m})\) then
 8          let \(\Phi_{S_{1}}=f(\sigma^{k})\wedge h(\tau^{m})[\sigma^{k}/\tau^{m}_{m-k:m}][\tau^{m-k}/\tau^{m}_{0:m-k}]\);
 9          let \(\Phi_{S_{2}}=g(\sigma^{l})\);
10      else if \(\Phi_{S_{2}}=g(\sigma^{l})\lor h(\tau^{m})\) then
11          split the state \(S_{2}\) as shown in Figure 13(c-d) and recursively call this procedure;
12      else
13          let \(\Phi_{S_{1}}=f(\sigma^{k})[\sigma^{k+l}_{i}/\sigma^{k}_{i}]\); \(\Phi_{S_{2}}=g(\sigma^{l},\tau^{m})[\sigma^{k+l}_{i+k}/\sigma^{l}_{i}]\);
14          if \(m\geq k\) then let \(\Phi=\Phi_{S_{1}}\wedge\Phi_{S_{2}}[\sigma^{k+l}_{0:k}/\tau^{m}_{m-k:m}][\tau^{m-k}/\tau^{m}_{0:m-k}]\);
15          else let \(\Phi=\Phi_{S_{1}}\wedge\Phi_{S_{2}}[\sigma^{k+l}_{k-m:k}/\tau^{m}_{0:m}]\);
16          merge the two transitions into one from \(S_{1}\) to \(S_{3}\) constrained by \(\Phi\);
```
**Algorithm 4** Merging Rules (MR3).
**Theorem 2** (Soundness and Completeness).: _Given a program in the language defined in Figure 6, Algorithm 1 is sound using the aforestated splitting and merging rules. It is complete if the interval domain is never used during the analysis._
Proof.: The proof is discussed in Appendix A.
Figure 12: MR2 via induction. \(\mathbb{E}_{S_{ij}}=(\mathbb{I}_{S_{ij}},\phi_{S_{i}})\). (a) Delay merging. (b) Guess. (c) Fixed-point computation.
Figure 13: MR3. Eliminating \(\tau\) in (a-b) conjunctive constraints, (c-d) disjunctive constraints, and (e-f) constraints where \(\tau\) cannot be isolated by disjunction or conjunction.
**Discussion**. We propose a static analysis that can infer an FSM from a parsing loop. While it is undecidable to check if an input loop intends to implement an FSM, as discussed in Theorem 2, given any loop in our abstract language, our approach is guaranteed to output a sound result. Nevertheless, the implementation in practice shares some common limitations with general static analysis. For instance, our static analysis is currently implemented for C programs and does not handle virtual tables in C++. We focus on source code and do not handle inline assembly. For libraries without available source code, e.g., crc16() and md5(), which are widely used to compute checksums or encrypt messages, we manually model these APIs. A common limitation shared with the state of the art is that, if the code implements a wrong FSM, the FSM we infer will be incorrect as well. Even so, the experiments below show that our approach is promising.
## 6 Evaluation
On top of the LLVM compiler framework [67] and the Z3 theorem prover [45], we have implemented StateLifter for protocols written in C. The source code of a protocol is compiled into the LLVM bitcode and sent to StateLifter for inferring the FSM. In StateLifter, LLVM provides facilities to manipulate the code and Z3 is used to represent abstract values as symbolic expressions and solve path constraints.
**Research Questions.** First, we compare our approach to the state-of-the-art static analysis for FSM inference, i.e., Proteus [100, 101]. Second, we compare StateLifter to dynamic techniques, including ReverX [23], AutoFormat [71], and Tupni [44]. Third, to show the security impacts, we apply StateLifter to fuzzing and applications beyond protocols.
**Benchmarks.** Our approach is designed to work on C code that implements the FSM parsing loop of regular protocols. We did not find any existing test suite that contains such C code, so we built one. To this end, we searched GitHub for regular protocols implemented in C using the keywords "protocol parser", "command parser", and "message parser", until we found the ten in Table 1. These protocols include text protocols such as ORP and binary protocols such as MAVLINK. They are widely used in different domains in the era of the internet of things. For example, ORP allows a customer asset to interact with Octave edge devices. MAVLINK is a lightweight messaging protocol for communicating with drones. TINY specifies the data frames sent over serial interfaces such as UART and telnet. SML defines the message formats for smart meters. RDB is a protocol for communicating with Redis databases. MQTT is an OASIS standard messaging protocol for IoT devices. MIDI is for musical devices and KISS is for amateur radio.
**Environment.** All experiments are conducted on a Macbook Pro (16-inch, 2019) equipped with an 8-core 16-thread Intel Core i9 CPU with 2.30GHz speed and 32GB of memory.
### Against Static Inference Techniques
Our key contribution is a static analysis that infers FSMs without suffering from path explosion. To show the impacts of our design, we run both StateLifter and the state-of-the-art technique, Proteus [100, 101], against the benchmark programs on a 3-hour budget per program. The time cost of each analysis is shown in Figure 14 in log scale. As illustrated, Proteus cannot complete many analyses within the time limit due to path explosion. By contrast, all our analyses finish in five minutes, exhibiting at least 70\(\times\) speedup compared to Proteus. Since both Proteus and StateLifter perform path-sensitive analysis, they have the same precision and recall when both of them succeed in inferring the FSM for a protocol, e.g., ORP. We detail the results of precision and recall in §6.2.
Table 1 shows the size of each inferred FSM by StateLifter and Proteus. Observe that the FSMs inferred by our approach are much (4\(\times\)-40\(\times\)) smaller than those inferred by Proteus. It demonstrates that our design not only significantly mitigates the path explosion problem but also infers highly compressed FSMs, which can be expected to be easier to use in practice.
### Against Dynamic Inference Techniques
Dynamic analysis is orthogonal to static analysis. Thus, in general, they are not comparable. Nevertheless, for the purpose of reference rather than comparison, we evaluate three dynamic analyses, including ReverX [23], AutoFormat [71], and Tupni [44]. ReverX is a black-box approach that learns an FSM from input messages without analyzing the code. It instantiates general automata induction techniques like L* [21] and is specially designed for protocol format inference. AutoFormat and Tupni are white-box approaches that rely on
| Protocol | StateLifter #states | StateLifter #transitions | Proteus #states | Proteus #transitions |
| --- | --- | --- | --- | --- |
| ORP [11] | 5 | 8 | 42 | 92 |
| MAVLINK [12] | 42 | 197 | - | - |
| IHEX [5] | 15 | 63 | - | - |
| BITSTR [8] | 22 | 75 | - | - |
| TINY [16] | 14 | 54 | 151 | 872 |
| SML [7] | 32 | 89 | - | - |
| MIDI [17] | 19 | 81 | 765 | 3812 |
| MQTT [18] | 28 | 87 | 105 | 581 |
| RDB [15] | 22 | 57 | - | - |
| KISS [6] | 6 | 12 | 24 | 142 |

Table 1: Sizes of the Inferred State Machines
Figure 14: Time cost. The X-axis lists the ten protocols.
dynamic dataflow analysis. They generate message formats in BNF, which can be easily converted to FSMs. Given that all analyses can complete within a few minutes, our focus is primarily on examining their precision and recall. In Appendix B, we discuss the details of how we compute precision and recall. Intuitively, the precision is the ratio of correct state transitions to all inferred transitions; the recall is the ratio of correct state transitions to all transitions in the ground truth.
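For clarity, the following tiny program shows how such precision and recall numbers are computed from transition sets; the transition labels are made up for illustration.

```
/* precision = |inferred AND ground truth| / |inferred|
 * recall    = |inferred AND ground truth| / |ground truth| */
#include <stdio.h>
#include <string.h>

static int contains(const char *set[], int n, const char *t) {
    for (int i = 0; i < n; i++)
        if (strcmp(set[i], t) == 0) return 1;
    return 0;
}

int main(void) {
    const char *truth[]    = { "A->B", "B->B", "B->C", "C->D" };
    const char *inferred[] = { "A->B", "B->B", "B->C", "A->D" };  /* one false, one missed */
    int nt = 4, ni = 4, correct = 0;
    for (int i = 0; i < ni; i++)
        if (contains(truth, nt, inferred[i])) correct++;
    printf("precision = %.2f, recall = %.2f\n",
           (double)correct / ni, (double)correct / nt);
    return 0;
}
```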
To drive the dynamic analyses, we randomly generate one thousand valid messages as their inputs. By contrast, our static analysis does not need any inputs and, thus, provides a promising alternative to the state of the art, especially when the input quality cannot be guaranteed. The precision and recall of the inferred FSMs are plotted in Figure 15. It shows that we achieve over 90% precision and recall, while the others often generate over 40% false transitions or miss 50% of the true ones. This is because they depend on a limited number of input messages and cannot handle FSM parsing loops well. StateLifter also reports a few false transitions or misses some true ones as it inherits some general limitations of static analysis (see §5).
### Security Applications
**Protocol Fuzzing.** AFLNet [82] accepts a corpus of valid messages as the seeds and employs a lightweight mutation method. Thus, we create a seed corpus, where each message is generated by solving the transition constraints in the FSMs. BooFuzz [10] directly accepts the message formats as its input and automatically generates messages. Thus, we respectively input the formats inferred by StateLifter, ReverX, AutoFormat, and Tupni to BooFuzz. The experiments are performed on a 3-hour budget and repeated 20 times to avoid random factors. As shown in Figure 16, since we can provide more precise and complete formats, fuzzers enhanced by StateLifter achieve 1.2\(\times\)-3.3\(\times\) the coverage of the baselines. Meanwhile, we detected twelve zero-day bugs while the others detected only two of them. We provide an example of the detected bugs in Appendix C. All detected bugs are exploitable as they can be triggered via crafted messages. Thus, they may pose a notable threat to software security in the industry. For example, we identified four vulnerabilities in the official implementation of ORP [11], which is commonly used for connecting Octave edge devices to the cloud [2].
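As a rough sketch of how a seed corpus can be derived from an inferred FSM, the program below walks one start-to-final path and emits, for each transition, a byte satisfying its constraint. Here the constraints are simple byte ranges; the actual tool solves the transition constraints with an SMT solver, and all names are ours.

```
/* Derive one seed message by walking a start-to-final path of an FSM whose
 * transition constraints are byte ranges (a simplification). */
#include <stdio.h>

typedef struct { int from, to; unsigned char lo, hi; } transition;  /* lo <= byte <= hi */

enum { START, BODY, FINAL };
static const transition path[] = {     /* one path through an FSM for (a|b)+c */
    { START, BODY,  'a', 'b' },
    { BODY,  BODY,  'a', 'b' },
    { BODY,  FINAL, 'c', 'c' },
};

int main(void) {
    unsigned char seed[8];
    int n = 0;
    for (unsigned i = 0; i < sizeof path / sizeof path[0]; i++)
        seed[n++] = path[i].lo;        /* pick any byte in the allowed range   */
    fwrite(seed, 1, n, stdout);        /* "aac": a valid message for the corpus */
    putchar('\n');
    return 0;
}
```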
**Beyond Protocols.** FSMs are widely used in domains beyond network protocols. In Appendix D, we provide a case study of applying StateLifter to autopilot systems for security analysis.
## 7 Related Work
**Static Analysis for Protocol Reverse Engineering.** While almost all existing works for inferring message formats use dynamic analysis, Lim et al. [70] proposed a static analysis that differs from StateLifter in two aspects. First, it infers the formats of output messages whereas we focus on received messages. Second, it cannot handle loops that implement complex state machines, as all loops are assumed to process repetitive fields in a message; StateLifter does not make this assumption. Rabkin and Katz [83] statically infer input formats in key-value forms, particularly for program configuration rather than networks. Shoham et al. [91] infer valid API sequences as state machines rather than message formats. Other static analyses for reverse engineering focus on security protocols, which, unlike message formats, describe an agreed sequence of actions performed by multiple entities [24].
**Applications of Protocol Reverse Engineering.** Formal message formats are important for protocol fuzzing. Mutation-based fuzzers use formats to generate the seed corpus [36, 50, 55, 59, 93]. Generation-based fuzzers directly use the formats to generate messages for testing [4, 9, 10, 14, 26]. Protocol model checking and verification also need formal protocol specifications [27, 28, 29, 30, 31, 32, 41, 77, 79, 95]. Blanchet [32] specifies a protocol by Horn clauses and applies this technique to verify TLS models [29]. Beurdouche et al. [28] use Frama-C [62] to verify TLS implementations. Tamarin [77] uses a domain-specific language to establish proofs for security protocols and applies to 5G AKA protocols [27, 41]. Some works verify TCP components via symbolic analysis [30, 31, 79]. Udrea et al. [95] use a rule-based static analysis to identify problems in protocols. All these works assume the existence of formal specifications or manually build them. We push forward the study of automatic specification inference and can infer message formats with high precision, recall, and speed.
## 8 Conclusion
We present a static analysis that infers an FSM to represent the format of regular protocols. We significantly mitigate the path-explosion problem via carefully designed path merging and splitting rules. Evaluation shows that our approach achieves high precision, recall, and speed. Fuzzers supported by our work can achieve high coverage and discover zero-day bugs.
Figure 16: X-axes list the ten protocols. Y-axes are coverage normalized to one with a 95% confidence interval.
Figure 15: Precision and recall. X-axes list the ten protocols. |
2310.08573 | PolyTask: Learning Unified Policies through Behavior Distillation | Unified models capable of solving a wide variety of tasks have gained
traction in vision and NLP due to their ability to share regularities and
structures across tasks, which improves individual task performance and reduces
computational footprint. However, the impact of such models remains limited in
embodied learning problems, which present unique challenges due to
interactivity, sample inefficiency, and sequential task presentation. In this
work, we present PolyTask, a novel method for learning a single unified model
that can solve various embodied tasks through a 'learn then distill' mechanism.
In the 'learn' step, PolyTask leverages a few demonstrations for each task to
train task-specific policies. Then, in the 'distill' step, task-specific
policies are distilled into a single policy using a new distillation method
called Behavior Distillation. Given a unified policy, individual task behavior
can be extracted through conditioning variables. PolyTask is designed to be
conceptually simple while being able to leverage well-established algorithms in
RL to enable interactivity, a handful of expert demonstrations to allow for
sample efficiency, and preventing interactive access to tasks during
distillation to enable lifelong learning. Experiments across three simulated
environment suites and a real-robot suite show that PolyTask outperforms prior
state-of-the-art approaches in multi-task and lifelong learning settings by
significant margins. | Siddhant Haldar, Lerrel Pinto | 2023-10-12T17:57:32Z | http://arxiv.org/abs/2310.08573v1 | # PolyTask: Learning Unified Policies through
###### Abstract
Unified models capable of solving a wide variety of tasks have gained traction in vision and NLP due to their ability to share regularities and structures across tasks, which improves individual task performance and reduces computational footprint. However, the impact of such models remains limited in embodied learning problems, which present unique challenges due to interactivity, sample inefficiency, and sequential task presentation. In this work, we present PolyTask, a novel method for learning a single unified model that can solve various embodied tasks through a 'learn then distill' mechanism. In the 'learn' step, PolyTask leverages a few demonstrations for each task to train task-specific policies. Then, in the 'distill' step, task-specific policies are distilled into a single policy using a new distillation method called _Behavior Distillation_. Given a unified policy, individual task behavior can be extracted through conditioning variables. PolyTask is designed to be conceptually simple while being able to leverage well-established algorithms in RL to enable interactivity, a handful of expert demonstrations to allow for sample efficiency, and preventing interactive access to tasks during distillation to enable lifelong learning. Experiments across three simulated environment suites and a real-robot suite show that PolyTask outperforms prior state-of-the-art approaches in multi-task and lifelong learning settings by significant margins.
## I Introduction
Current progress in large-scale machine learning has been driven by large unified models that can solve multiple tasks [1, 2, 3, 4]. In contrast to task-specific models, unified models are hypothesized to benefit from sharing data, regularizing representations, and reducing overall parameter counts [5, 6, 7, 8, 9]. Perhaps more importantly, having a single model streamlines the assimilation of skills by circumventing challenges associated with managing numerous skills during deployment. While we have built better frameworks to learn unified models in vision [5, 6, 7, 9] and natural language processing [8, 5, 10], their impact in embodied domains - where agents must solve problems by interacting with physical environments - has been limited. This is despite tremendous improvement in single-task policy learning in recent years [11, 12, 13, 14, 15, 16].
Prior efforts into training unified policies fall under the umbrella of multi-task policy learning and can be broadly categorized into two paradigms - offline imitation and online reinforcement learning (RL). Offline imitation learning uses large amounts of expert demonstrations to supervise the training of the unified policy. However, training such unified policies often requires several thousand demonstrations to learn simple control tasks [1]. Multi-task RL [17, 18, 19] approaches on the other hand do away with the need for demonstrations and instead use reward-driven learning to optimize the unified policy. This comes at the cost of large amounts of interactive experience, often exceeding the cumulative sum of experience to train individual task-specific policies [20].
A more nuanced challenge in training a unified policy with RL is the need to access the environments of all tasks simultaneously in parallel. While this is easy to do with stationary datasets [1, 2], collecting interactive experiences in a multitude of environments at the same time is only feasible in simulation. Real-world embodied agents such as robots can only solve one task at a time, which raises the challenge of catastrophically forgetting prior tasks [21].
Fig. 1: PolyTask is a technique to train a single unified policy \(\Pi\) that can solve a range of different tasks. This is done by first learning a single-task policy for each task followed by distilling it into a unified policy. Distillation allows our unified policy to assimilate additional tasks in a lifelong-learning fashion without needing to increase the parameter count of the policy. Once trained, the unified policy can solve tasks by conditioning on task identifiers such as goal image, text description, or one-hot labels.
In this work, we present PolyTask, a new framework to train unified policies that can solve a multitude of tasks. PolyTask is intuitively simple, requires a minimal increase in parameter counts compared to a single-task policy, and is readily applicable to a wide variety of policy learning settings. At its core, PolyTask is built on the principle of 'learn then distill', a two-phase process. In the 'learn' phase, a single-task policy is trained using demonstration-guided RL for every task. This training procedure combines the sample efficiency of learning from demonstrations with the flexibility of interactive RL. Next, we move to the 'distill' phase. Here the single-task policies are distilled into a single unified policy using a new technique called _Behavior Distillation_. Unlike prior work [22] that distill using Q-values, Behavior Distillation directly distills on policy outputs, which enables the distillation of continuous actions. With PolyTask, we show for the first time that efficient demonstration-guided RL can be combined with multi-task distillation. This simple combination overcomes several prior challenges in multi-task policy learning by improving sample efficiency compared to multi-task RL and reducing the number of demonstrations needed compared to offline imitation.
Since Behavior Distillation only requires offline data, it makes PolyTask readily applicable to lifelong learning settings, where tasks are presented sequentially. Concretely, when presented with a new task to solve, a new unified policy can be obtained by distilling both the old task experts and the new task policy into a single policy. This task assimilation is done without increasing parameter count while reducing catastrophic forgetting compared to prior work [23]. Such a lifelong distillation procedure lends itself well to real robotic problems where tasks are presented sequentially (see Fig. 1).
We evaluate PolyTask on four environment suites - Meta-World, FrankaKitchen, DMControl, and xArm (real-robot) - on multi-task and lifelong settings to learn unified policies. Through an extensive study of 32 tasks across 3 simulation suites and 6 tasks on a xArm robotic arm, we present the following key insights:
1. PolyTask outperforms prior state-of-the-art multi-task learning algorithms by an average of 1.3\(\times\) on 32 tasks across 3 simulated environment suites (Sec. III-B).
2. Behavior Distillation allows PolyTask to tackle catastrophic forgetting in lifelong learning without increasing parameter counts, yielding a 3.8\(\times\) improvement over prior work (Sec. III-C).
3. On our real-robot benchmark, we find that PolyTask can be used for both multi-task policy learning and lifelong learning. In both settings, our single unified policy performs on par with the task-specific experts without needing additional data for training (Sec. III-D).
4. Through an ablation analysis, we demonstrate that PolyTask can also work without environment rewards, is not sensitive to the size of the policy network, and works with various conditioning modalities including one-hot, language, and goal-images (Sec. III-E).
## II PolyTask
### _Problem formulation and overview_
A fundamental challenge in embodied learning problems is to develop a single unified model that can solve a variety of tasks. Consider a set of \(N\) tasks \(\mathcal{T}=\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{N}\}\), each with a unique identifiable key \(\{c_{1},c_{2},\cdots,c_{N}\}\). In such a setting, solving any particular task \(\mathcal{T}_{k}\) corresponds to maximizing the cumulative reward \(r_{k}\) associated with it. We denote the task-specific policy for \(\mathcal{T}_{k}\) as \(\pi_{k}\), which can be obtained using standard RL algorithms or through offline imitation, which requires expert demonstrations for each task.
**Multi-task learning.** The goal of multi-task policy learning is to learn a single parametric policy \(\Pi(a_{t}|o_{t};c_{k})\) that can solve a set of tasks \(\mathcal{T}\). During this learning, we assume parallel access to all tasks in \(\mathcal{T}\), which allows for training on samples drawn on-policy for all tasks. Our work focuses on improving multi-task learning by reducing the number of samples needed by the RL policy while achieving high performance compared to single-task experts.
**Lifelong learning of tasks.** Given a multi-task policy \(\Pi\) capable of performing the tasks in \(\mathcal{T}\), lifelong learning aims to teach a new set of tasks \(\mathcal{T}^{\prime}=\{\mathcal{T}^{\prime}_{1},\mathcal{T}^{\prime}_{2},\cdots,\mathcal{T}^{\prime}_{L}\}\) to the policy \(\Pi\) such that its effective set of tasks becomes \(\mathcal{T}^{\prime\prime}=\mathcal{T}\bigcup\mathcal{T}^{\prime}\) (shown in Fig. 2(b)). This comes with challenges related to transfer learning [24] and catastrophic forgetting [21, 25, 26]. Our work primarily focuses on addressing catastrophic forgetting without needing additional policy parameters.
**Overview of PolyTask.** PolyTask operates in 2 phases - _learn_ and _distill_. During the _learn_ phase, a task-specific policy is learned for each task using offline imitation from a few expert demonstrations followed by online finetuning using RL. During the _distill_ phase, all task-specific policies are combined into a unified policy using knowledge distillation on the RL replay buffers stored for each task.
### _Phase 1: Learning individual task-specific policies_
In this phase, a task-specific policy \(\pi_{k}\) is learned for each task \(\mathcal{T}_{k}\). First, a randomly initialized policy is trained on a small set of expert demonstrations using goal-conditioned behavior cloning (BC) [27, 28]. This BC-pretrained policy is finetuned through online interactions in the environment using a standard RL optimizer. In this work, we use DrQ-v2 [29], a deterministic actor-critic based method that provides high performance in continuous control. Following prior work in sample-efficient RL [13, 12, 30], we combine the online RL objective with a BC loss (as shown in Eq. 1).
\[\pi=\operatorname*{argmax}_{\pi}\Big[(1-\lambda)\,\mathbb{E}_{(s,g,a)\sim\mathcal{D}^{\beta}}[Q(s,g,a)]-\alpha\lambda\,\mathbb{E}_{(s^{e},g^{e},a^{e})\sim\mathcal{D}^{e}}\|a^{e}-\pi(s^{e},g^{e})\|^{2}\Big] \tag{1}\]
Fig. 2: An illustration of the difference between training the unified policy on multi-task and lifelong learning settings.
Here, \(Q(s,g,a)\) represents the Q-value from the critic used in actor-critic policy optimization with \(g\) being the goal for the sampled state-action pair \((s,a)\). \(\pi\) is a goal-conditioned policy. \(\alpha\) is a fixed weight and \(\lambda\) controls the relative contribution of the two loss terms. \(\mathcal{D}^{\beta}\) refers to the replay buffer for online rollouts and \(\mathcal{D}^{e}\) refers to the expert demonstration set.
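A minimal PyTorch-style sketch of the actor update implied by Eq. 1 is given below. The network modules, batch layout, and hyper-parameter values are placeholder assumptions, and the surrounding DrQ-v2 machinery (critic updates, augmentation, target networks) is omitted.
```
import torch
import torch.nn.functional as F

def actor_loss(actor, critic, rl_batch, demo_batch, lam=0.25, alpha=1.0):
    # Q-value term on goal-conditioned replay data (maximized, hence the minus sign below).
    s, g, _ = rl_batch                       # states and goals; replay actions are unused here
    q_term = critic(s, g, actor(s, g)).mean()

    # Behavior-cloning regularizer on expert demonstrations.
    s_e, g_e, a_e = demo_batch
    bc_term = F.mse_loss(actor(s_e, g_e), a_e)

    # Eq. 1 rewritten as a loss to minimize with a standard optimizer.
    return -(1.0 - lam) * q_term + alpha * lam * bc_term
```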
### _Phase 2: Behavior-Distillation of multiple policies into a unified policy_
Knowledge distillation [31] is a method for transferring knowledge from a teacher model \(T\) to a student model \(S\). In addition to widespread use in vision and natural language processing [32, 33, 31], knowledge distillation has also been successful in policy learning [22, 34, 35]. Inspired by these prior works, we use knowledge distillation for combining the task-specific policies \(\{\pi_{1},...,\pi_{k}\}\) obtained from the first phase into a unified goal-conditioned multi-task policy \(\Pi\).
In order to distill the knowledge from the previous phase, we use the replay buffer data obtained during the online RL training for each task. We cannot directly behavior clone the replay buffer data as this data is exploratory in nature and hence suboptimal. To tackle this, [22] propose distilling the Q-values for \((s,a)\) tuples in the task-specific replay buffers into the unified policy. This works well for discrete action control as the distilled Q-function can be converted to a policy through the argmax operation. However, for continuous control problems, such Q-value distillation is incompatible as the argmax operation requires additional optimization to produce executable policies [36].
Instead, we propose behavior distillation, a new and simple technique to distill policies without needing access to Q values. We directly use the learned task-specific teacher policy to relabel the action corresponding to each replay buffer state and distill the action distribution of the relabeled actions into the unified policy. Our distillation objective has been shown in Eq. 2.
\[\Pi=\operatorname*{argmin}_{\Pi}\mathbb{E}_{t\sim\mathcal{T}}\mathbb{E}_{(s,g) \sim\mathcal{D}^{\beta}_{t}}\|\Pi(s,g)-\pi_{t}(s,g)\|^{2} \tag{2}\]
Here, \(\mathcal{T}\) refers to the set of tasks and \(\mathcal{D}^{\beta}_{t}\) refers to the replay buffer for the task-specific policy \(\pi_{t}\) for all tasks \(t\in\mathcal{T}\). \(\Pi\) is the unified multi-task policy learned through distillation.
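The relabel-and-regress procedure behind Eq. 2 can be sketched as follows, assuming each task's replay buffer yields (state, goal) batches and that the frozen teacher policies are callable; the names, sampling scheme, and optimizer settings are illustrative rather than the exact implementation.
```
import torch

def behavior_distill(student, teachers, buffers, steps=10_000, lr=1e-4):
    """Distill frozen task-specific teachers into one goal-conditioned student policy (Eq. 2)."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        t = torch.randint(len(teachers), (1,)).item()     # sample a task t ~ T
        s, g = buffers[t].sample()                        # replay states and goals only
        with torch.no_grad():
            a_teacher = teachers[t](s, g)                 # relabel with the expert's action
        loss = ((student(s, g) - a_teacher) ** 2).mean()  # squared error between policies
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```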
### _Extending PolyTask to lifelong learning with sequential task presentation_
Lifelong learning refers to a scenario where a robot encounters various tasks in a sequential manner, while being exposed to only one task at a given moment. In this setting, the state distributions change over time, which often leads to policies catastrophically forgetting previous tasks [21]. As PolyTask distills policy only using off-policy data, it naturally fits lifelong settings without much modification. Unlike prior work that continuously finetune on new tasks [23] or train separate networks for each new task [25], PolyTask uses prior task data to prevent forgetting while using the same model architecture to prevent an explosion of parameters.
For every task-specific expert policy \(\pi_{i}\) learned using demonstration-guided RL, each \((s,g,a)\) sample in the replay buffer \(\mathcal{D}^{\beta}_{i}\) is replaced by \((s,g,\pi_{i}(s,g))\). This is done to have the unified policy model expert actions, not sub-optimal training actions. Given policies \(\pi_{1:N}\) corresponding to tasks \(\mathcal{T}_{1:N}\), we first distill the knowledge of \(\pi_{1:N}\) into a unified policy \(\Pi_{N}\) using the relabeled task-specific replay buffers \(\mathcal{D}^{\beta}_{1:N}\) (using Eq. 2). Next, in order to teach a new task \(\mathcal{T}_{N+1}\), we relabel the task-specific replay buffer \(\mathcal{D}^{\beta}_{N+1}\) using the task expert \(\pi_{N+1}\). Finally, a unified policy \(\Pi_{N+1}\) for tasks \(\mathcal{T}_{1:N+1}\) is obtained through the same distillation procedure applied on replay buffers \(\mathcal{D}^{\beta}_{1:N+1}\).
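One way this sequential procedure could be organized is sketched below; it simply re-runs the distillation step of the previous section over the growing list of relabeled buffers each time a new task expert arrives. The buffer interface and the distillation callback are assumptions consistent with the sketches above.
```
def lifelong_polytask(task_stream, distill_fn):
    """Assimilate tasks presented one at a time without growing the unified policy."""
    teachers, buffers, unified = [], [], None
    for expert, replay in task_stream:           # (expert policy, its RL replay buffer)
        replay.relabel_actions(expert)           # store (s, g, expert(s, g)) instead of raw actions
        teachers.append(expert)
        buffers.append(replay)
        unified = distill_fn(teachers, buffers)  # re-distill over all buffers seen so far
    return unified
```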
## III Experiments
Our experiments are designed to answer the following questions: \((a)\) How well does PolyTask work in multi-task distillation? \((b)\) Can PolyTask deal with the sequential presentation of tasks? \((c)\) Does PolyTask scale to real-world robots? \((d)\) What design decisions in PolyTask affect performance?
### _Experimental setup_
**Simulated tasks:** We run experiments on 3 simulated environment suites across a total of 32 tasks.
1. **DM Control (DMC)**: We learn state-based policies spanning 10 tasks on the cheetah run environment in a multi-task variant of DM Control suite [37, 38, 39]. Each task is a variation of the torso length of the cheetah. We train expert policies using DrQ-v2 [29] and collect 10 demonstrations for each task using this policy. A sinusoidal positional embedding [40] corresponding to each task label is used as the goal embedding for these experiments.
2. **Meta-World**: We learn image-based policies spanning 16 tasks from the Meta-World suite [41]. For each task, we collect a single hard-coded expert demonstration from their open-source implementation [41]. The last frame of the demonstration is used as the goal for each task.
3. **Franka kitchen**: We learn image-based policies spanning 6 tasks from the Franka kitchen environment [42]. We use the relay policy learning dataset [42] comprising 566 demonstrations. Since each trajectory in the dataset performs multiple tasks, we segment each trajectory into task-specific snippets and use 100 demonstrations for each task. It must be noted that since the relay policy learning
Fig. 3: PolyTask is evaluated across 3 simulated benchmarks - the DeepMind Control suite, the Meta-World benchmark, and the Franka kitchen environment.
dataset [42] consists of play data, the segmented task-specific snippets are suboptimal which makes the learning problem harder. For each online rollout, we randomly select one of the task demonstrations and use the last frame as the goal image.
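As a concrete illustration of the last-frame goal conditioning used above, the following snippet builds (observation, action, goal) tuples from demonstration trajectories; the trajectory format is an assumption for illustration only.
```
def make_goal_conditioned_dataset(demos):
    """demos: list of trajectories, each a list of (observation, action) pairs."""
    dataset = []
    for traj in demos:
        goal = traj[-1][0]                 # last observation of the trajectory acts as the goal
        for obs, act in traj:
            dataset.append((obs, act, goal))
    return dataset
```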
**Robot tasks**: Our real-world setup comprises six tasks as shown in Fig. 1. We use a Ufactory xArm 7 robot with a xArm Gripper as the robot platform for our real-world experiments. The observations are RGB images from a fixed camera. In this configuration, we gather up to two demonstrations per task and proceed to train our task-specific expert policies through demonstration-guided RL, employing rewards based on optimal transport (OT) based trajectory matching [13, 14, 43]. We limit the online training to a fixed period of 1 hour.
**Baseline methods**: We compare PolyTask to a variety of baselines in multi-task policy learning and lifelong learning. A brief discussion of them is as follows:
1. **Goal conditioned BC (GCBC)**[27, 28]: A supervised learning framework for learning a goal-conditioned multi-task policy \(\pi(\cdot|o,g)\) given a dataset of (observation, action, goal) tuples \((o,a,g)\).
2. **Multi-task RL (MTRL)**[20, 19, 17, 44]: A framework for learning multi-task policies where the agent is simultaneously trained on all tasks using reinforcement learning.
3. **Distral**[19]: A MTRL algorithm that jointly trains separate task-specific policies and a distilled policy that captures common behavior across tasks.
4. **MTRL-PCGrad**[18]: A variant of MTRL with projecting-conflicting-gradients (PCGrad) style gradient optimization to mitigate task interference during gradient updates.
5. **MTRL-Demo**: A demo-guided variant of MTRL where we adapt the strategy proposed in Sec. II-B to a multi-task setting. A multi-task BC policy is first trained on all task demonstrations and this policy is used for initialization and regularization during the MTRL training.
6. **Fine-tuning**[23]: A framework where a single policy is initialized and finetuned on each new task. The parameter count of the policy remains constant throughout training.
7. **Progressive Nets**[25]: A framework that deals with catastrophic forgetting by adding a new model for each task with lateral connections to all previous models to allow for forward transfer. The parameter count of the policy increases with each new task.
GCBC, MTRL, Distral, MTRL-PCGrad, and MTRL-Demo serve as our multi-task skill learning baselines, and finetuning and progressive nets serve as our lifelong learning baselines.
**Evaluation metrics**: For each task across our environment suites, we measure performance by running 10 episodic seeds. Given this performance score on a task, we can then measure the effective number of tasks completed. The effective number of tasks executed by a policy is calculated as the cumulative success rate across all tasks in the entire task set. In the case of the Meta-World suite and Franka kitchen environment, where task completion is well-defined, we directly compute the metric using task success rate. However, in the cheetah run task of DM Control, the only available measure is the episode reward, with a maximum value of 1000. Thus, we calculate the average episode reward across 10 episodes, divide it by 1000, and consider that as the task success rate for all tasks.
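The effective-number-of-tasks metric can be written down directly. The sketch below assumes per-task success rates (or, for the cheetah tasks, average episode rewards) have already been measured.
```
def effective_tasks(success_rates=(), dmc_avg_rewards=(), max_reward=1000.0):
    """Cumulative success rate across a task set, i.e. the effective number of tasks completed."""
    total = sum(success_rates)                             # success rates already in [0, 1]
    total += sum(r / max_reward for r in dmc_avg_rewards)  # DM Control: reward scaled by its maximum
    return total

print(effective_tasks(success_rates=[1.0, 0.8, 0.6]))      # 2.4 effective tasks
print(effective_tasks(dmc_avg_rewards=[731.0, 812.0]))     # ~1.54 effective tasks
```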
### _How well does PolyTask work in multi-task distillation?_
Table I shows our results for multi-task policy learning across 3 simulated environment suites. We provide the results for two variants of PolyTask - a vanilla version using Adam optimizer [46] and PolyTask with PCGrad [18] based gradient optimization which is aimed at aiding multitask learning. On the simpler cheetah run task in DeepMind Control, we observe that MTRL-based baselines outperform PolyTask. This can be attributed to the fact that with such low environment variations (only the torso length of the cheetah in this case), sharing information between tasks enables more robust learning. However, since we use knowledge distillation, PolyTask shows a performance at par with the task-specific experts.
On the harder tasks in Meta-World and Franka Kitchen, we see a bigger gap in performance with PolyTask performing \(2.3\times\) better than GCBC and \(1.3\times\) better than the strongest MTRL-Demo baseline. We also present results on PolyTask-PCGrad and observe that the "gradient surgery" proposed by PCGrad [18] achieves slightly better performance on DM Control while slightly under-performing on Meta-World and Franka-Kitchen. Due to PCGrad requiring a separate loss computation for each task, we ran into time constraints while running MTRL-PCGrad on the Franka kitchen environment and so we could not present this result. Further, the increase in effective performance from MTRL to MTRL-Demo highlights the importance of utilizing demonstrations while the increase in performance from MTRL-Demo to PolyTask shows the importance of distillation.
### _Can PolyTask deal with the sequential presentation of tasks (lifelong learning)?_
The offline knowledge distillation phase naturally makes PolyTask suitable for lifelong learning. Fig. 4 and Fig. 5 demonstrate that PolyTask exhibits significant lifelong learning capabilities as compared to fine-tuning a policy for the most recent task (a setting explored in [23, 45]). On both Meta-World and Franka Kitchen, we observe that PolyTask significantly outperforms finetuning with a sequential presentation of tasks. Progressive networks [25] learn a new model for
\begin{table}
\begin{tabular}{l c c c} \hline
**Method** & **DMC** & **Meta-World** & **Franka Kitchen** \\ \hline Task-specific expert & 7.31 & 15.6 & 4.8 \\ GCBC [27, 28] & 1.76 & 6.3 & 1.7 \\ MTRL [20, 19, 17, 44] & 7.81 & 0.0 & 0.0 \\ Distral [19] & 0.08 & 0.9 & N/A \\ MTRL-PCGrad [18] & 8.09 & 1.0 & N/A \\ MTRL-Demo & **8.79** & 12.0 & 2.6 \\ \(\cdot\) PolyTask & 7.12 & **14.6** & **4.5** \\ PolyTask-PCGrad & 7.43 & 14.5 & 4.3 \\ \hline \end{tabular}
\end{table} TABLE I: Evaluation of multi-task distillation on 10 state-based tasks in DeepMind Control, 16 pixel-based tasks in the Meta-World benchmark, and 6 pixel-based tasks in the Franka kitchen environment. We notice that PolyTask performs a higher effective number of tasks as compared to prior work.
each new task, making them suitable for superior performance when handling a larger number of tasks. However, the performance comes at the cost of a significant increase in parameter count. Despite having a constant parameter count and the same overall environment interaction budget, we observe that PolyTask outperforms progressive networks by a significant margin on both environment suites. The low performance of progressive networks can be accounted for by the inability of the RL method to perform the tasks. For both finetuning and progressive networks, we add a behavior regularization term as proposed in Sec. II-B in order to enhance sample efficiency.
### _Does PolyTask scale to real-world robots?_
We devise a set of 6 manipulation tasks on a xArm robot to show the multi-task and lifelong learning capabilities of PolyTask in the real world. For each task \(\mathcal{T}_{k}\), a task-specific expert \(\pi_{k}\) is first trained using the scheme described in Sec. II-B. For the multi-task experiments, we combine the expert policies into a unified policy using behavior distillation (as described in Sec. II-C). For the lifelong experiments, we follow the scheme described in Sec. II-D to sequentially combine the single-task expert policies. Table II shows the results on the real robot. We observe that the multi-task policy learned by PolyTask is able to effectively perform 5.2 out of the 6 tasks and outperforms the single-task experts, which effectively succeed on 5 out of 6 tasks. Further, in the task of inserting a peg in a green cup, we observe the single-task expert hovers over the cup and fails at placing the peg inside in some episodes. However, jointly training the unified policy seems to alleviate this issue (indicated by the higher success rate of multi-task and lifelong PolyTask), hinting towards the benefits of parameter sharing and regularization when training a single unified model as opposed to multiple experts. In the lifelong learning setting, we observe that after training on the sequence of 6 tasks, PolyTask is able to effectively perform 5 out of the 6 tasks, showing a significant ability to tackle catastrophic forgetting.
### _What design decisions in PolyTask affect performance?_
In this section, we ablate 3 design choices that have been made in PolyTask.
**Reward type.** In our experiments on multi-task (Section III-B) and lifelong learning (Section III-C), we initially assumed the availability of environment rewards for training task-specific experts. However, we found that this assumption can be relaxed, and even without environment rewards, we successfully applied inverse RL (IRL) with optimal transport (OT) based trajectory matching rewards, following prior work [13, 14, 43], to learn task-specific policies. These task experts are used to obtain a unified multi-task policy using PolyTask for 10 tasks from the Meta-World benchmark, achieving a cumulative success rate of 8.2 across all tasks, matching individual task expert performance. Importantly, our real-world results in Table II also relied on OT-based rewards, confirming their effectiveness in the lifelong learning setting.
**Network size.** We analyze the impact of the distilled policy's size on PolyTask. We evaluate the performance of PolyTask in a lifelong learning setting using the same task-specific experts as described in Sec. III-C and with policies having parameter counts of 0.1M, 0.5M, 1M, 4M (used throughout the paper), 10M, and 20M. Notably, we do not observe any significant performance differences attributed to the variation in policy size. This demonstrates the robustness of PolyTask to smaller policy architectures.
**Modality of goal conditioning.** In this work, the modality used for representing goals has been images for
Fig. 4: A comparison between the performance of Fine-tuning [23, 45] and PolyTask on the Meta-World benchmark [(a), (b)] and the Franka kitchen environment [(c), (d)] in a lifelong learning setting. We observe that PolyTask exhibits a significantly better ability to tackle catastrophic forgetting.
Fig. 5: Pixel-based evaluation for lifelong learning on 16 tasks in Meta-World, and 6 tasks in Franka kitchen.
the image-based tasks (Meta-World and Franka Kitchen) and a sinusoidal positional embedding [40] for the state-based task (DeepMind Control). In this experiment, we evaluate the effect of different goal modalities on the multi-task performance of PolyTask. We conduct our experiments on the 16 tasks in the Meta-World suite. We use the same task-specific experts with image-based goals as the expert policies. Only the goal modality of the distilled policy is changed. Table III provides the results of such multi-task learning with PolyTask using one-hot goals and language-based goals as task identifiers. For the language-based goals, we encode the task labels for each task using a 6-layer version of MiniLM [47] provided in SentenceTransformers [48]. We observe that PolyTask shows near-perfect performance with the simpler one-hot goal representation and the language-based goal specification outperforms the image-based goal specification. Thus, we conclude that PolyTask can be used with various goal modalities, even when the goal modality for the distilled policy is different than the task-specific experts.
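For the language-conditioned variant, the goal embedding can be produced along the following lines; the specific checkpoint name is an assumption (a 6-layer MiniLM model distributed with the SentenceTransformers library), and the resulting vector simply replaces the image or one-hot goal fed to the distilled policy.
```
from sentence_transformers import SentenceTransformer

# Assumed checkpoint: a 6-layer MiniLM model shipped with the SentenceTransformers library.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

task_labels = ["door open", "drawer close", "window open"]
goal_embeddings = encoder.encode(task_labels)   # one 384-dimensional vector per task label
print(goal_embeddings.shape)
```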
## IV Related Work
### _Unified models_
Unified models have seen widespread success in computer vision [5, 6, 7, 9], natural language processing [8, 5, 10], and decision-making [1, 44, 3, 2, 49, 4]. In this work, we focus on the application of unified models in decision-making. The unification in unified models can come in the form of a single model for multiple tasks [50, 44, 3], capable of handling multiple modalities [2, 9, 5], or being learned from diverse domains [1, 51]. Though different in terms of application, all forms of unification hold the promise of enhancing decision quality, efficiency, and adaptability. In this work, we focus on multitask unification [3, 44, 1]. Inspired by recent advances in sample-efficient RL [11, 16, 13, 15], we propose a method for learning task-specific experts using online RL (Sec. II-B) and unifying these experts using behavior distillation (Sec. II-C).
### _Multi-task RL_
Multi-task RL (MTRL) [17, 19, 20, 44, 52, 53] is a branch of multi-task learning [50] where a RL agent simultaneously performs multiple tasks to learn a multi-task policy for the set of tasks. MTRL is challenging due to interference between gradients of different tasks during gradient optimization and there has been some prior work aimed at mitigating this interference [18, 54]. There have also been attempts at stabilizing MTRL through knowledge distillation [19] and learning contextual representations [17]. In Sec. III-B, we compare PolyTask with a baseline MTRL-Demo which is inspired by recent advances in demonstration-guided RL [12, 11, 13, 14, 16] and provides a promising performance boost to prior MTRL approaches. Further, since the agent must simultaneously interact with multiple environments, MTRL is not feasible on a physical robot that can only access a single environment at any point in time.
### _Lifelong learning_
Lifelong learning [55, 56] refers to the ability to continuously acquire and refine policies or decision-making strategies over time. Some of the challenges in lifelong learning are transfer learning [24], catastrophic forgetting [21, 25, 26], incremental learning [57] and the stability-plasticity dilemma [58, 59]. There have been some recent works tackling transfer learning by retaining prior experiences [45], learning a new model [25] or a separate task head [60] for each task to avoid catastrophic forgetting, using inverse RL for lifelong learning [61] and using hyper-networks in conjunction with neural ODEs [62] for learning robot policies [63]. Contrary to progressive networks [25] that trains a separate model for each task to avoid catastrophic forgetting, PolyTask uses prior task data while using the same model to avoid increasing the parameter count.
## V Conclusion and Limitations
In this work, we present PolyTask, a conceptually simple algorithm that unifies several task experts into a single multi-task policy through behavior distillation. We show the efficacy of our approach on a variety of simulated and robot domains. However, we recognize a few limitations in this work: (a) Since we store the replay buffer for each task, this approach might lead to storage concerns in the case of a large number of tasks. Extending PolyTask to either avoid storing prior data or only storing a small number of data points using techniques such as reservoir sampling [64] would be an interesting problem to tackle. (b) Though we use expert demonstrations to accelerate single-task learning, our framework currently does not allow forward transfer [26]. It would be interesting to see if enabling forward transfer in the PolyTask framework can further improve performance.
\begin{table}
\begin{tabular}{l c c c c c c|c} \hline \hline \multirow{2}{*}{**Method**} & **Insert Peg in** & \multirow{2}{*}{**Open Box**} & \multirow{2}{*}{**Pour**} & **Insert Peg in** & \multirow{2}{*}{**Reach**} & **Drawer Close** & **Cumulative** \\ & & & & & **yellow Cup** & & **success rate** \\ \hline Single task BC & 4/10 & 0/10 & 3/10 & 5/10 & 3/10 & 5/10 & 2/6 \\ Single task expert & 5/10 & 10/10 & 6/10 & 9/10 & 10/10 & 10/10 & 5/6 \\ Multitask & 7/10 & 10/10 & 6/10 & 9/10 & 10/10 & 10/10 & 5.2/6 \\ Lifelong & 7/10 & 7/10 & 6/10 & 10/10 & 10/10 & 10/10 & 5/6 \\ \hline \hline \end{tabular}
\end{table} TABLE II: PolyTask is evaluated on a set of 6 robotic manipulations tasks. We notice the PolyTask demonstrates remarkable multi-task and lifelong learning capabilities while achieving performance levels comparable to those of task experts.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Goal Modality** & **Image** & **One-hot** & **Language** \\ \hline Meta-World & 14.6 & 15.9 & 15.0 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Results for PolyTask with different goal modalities on 16 Meta-World tasks for multi-task distillation.
## VI Acknowledgements
We thank Mahi Shafiullah for valuable feedback and discussions. This work was supported by grants from Google, Honda, Meta, Amazon, and ONR awards N00014-21-1-2758 and N00014-22-1-2773.
|
2303.10204 | ESP32: QEMU Emulation within a Docker Container | The ESP32 is a popular microcontroller from Espressif that can be used in
many embedded applications. Robotic joints, smart car chargers, beer vat
agitators and automated bread mixers are a few examples where this
system-on-a-chip excels. It is cheap to buy and has a number of vendors
providing low-cost development board kits that come with the microcontroller
and many external connection points with peripherals. There is a large software
ecosystem for the ESP32. Espressif maintains an SDK containing many C-language
sample projects providing a starting point for a huge variety of software
services and I/O needs. Third party projects provide additional sample code as
well as support for other programming languages. For example, MicroPython is a
mature project with sample code and officially supported by Espressif. The SDK
provides tools to not just build an application but also merge a flash image,
flash to the microcontroller and monitor the output. Is it possible to build
the ESP32 load and emulate on another host OS? This paper explores the QEMU
emulator and its ability to emulate the ethernet interface for the guest OS.
Additionally, we look into the concept of containerizing the entire emulator
and ESP32 load package such that a microcontroller flash image can successfully
run with a one-step deployment of a Docker container. | Michael Howard, R. Bruce Irvin | 2023-03-17T18:48:50Z | http://arxiv.org/abs/2303.10204v2 | # ESP32: QEMU Emulation within a Docker Container
###### Abstract
The ESP32 is a popular microcontroller from Espressif that can be used in many embedded applications. Robotic joints, smart car chargers, beer vat agitators and automated bread mixers are a few examples where this system-on-a-chip excels. It is cheap to buy and has a number of vendors providing low-cost development board kits that come with the microcontroller and many external connection points with peripherals.
There is a large software ecosystem for the ESP32. Espressif maintains an SDK containing many C-language sample projects providing a starting point for a huge variety of software services and I/O needs. Third party projects provide additional sample code as well as support for other programming languages. For example, MicroPython is a mature project with sample code and officially supported by Espressif. The SDK provides tools to not just build an application but also merge a flash image, flash to the microcontroller and monitor the output.
Is it possible to build the ESP32 load and emulate on another host OS? This paper explores the QEMU emulator and its ability to emulate the ethernet interface for the guest OS. Additionally, we look into the concept of containerizing the entire emulator and ESP32 load package such that a microcontroller flash image can successfully run with a one-step deployment of a Docker container.
## I Introduction
The ESP32 is a microcontroller developed by Espressif Systems Co. [1], a semiconductor company headquartered in Shanghai, China. The ESP32 provides a low-cost, low-power and reasonably performant all-in-one hardware package that is ideally suited to Internet of Things (IoT) applications. The RISC processor is a 32-bit Xtensa core developed by Cadence Design Systems [2]. It is packaged as a system-on-a-chip (SoC) with Bluetooth, Wi-Fi and general purpose input output (GPIO) capabilities built-in.
In addition to the suite of microcontrollers, Espressif also supports a rich software ecosystem. Developers around the world contribute to an open source project that provides a software development kit (SDK) for each microcontroller flavor. The SDK provides a complete build system that utilizes Python scripting on top of cmake C-language build projects. Device drivers, a real-time operating system (OS) called FreeRTOS and a suite of software component projects are all included. The component projects provide an excellent starting point for developers to expand their own software requirements.
QEMU [3] is an emulator able to run guest OS and application binaries on a host operating system. For this project, the overall goal is to see how QEMU can be used to emulate an ESP32 application image on a macOS host system.
### _Goals_
At the completion of the project, the following target goals will be accomplished:
1. Build an ESP32 target load containing the OS, device drivers and a simple HTTP application such as a web server that uses the TCP/IP stack.
2. Execute this load on native hardware to first ensure it is functional.
3. Custom build QEMU for macOS (with ESP32 support) and modify as needed to run the target load.
4. Develop a Docker container around a QEMU tool chain. A container will provide an isolated environment for all dependencies and is preferable to running natively.
5. Generate some minimal HTTP traffic between the application and the external world.
6. If working, trace through the call stack(s) as much as possible and determine how the emulation is being performed.
7. Detail and report on the experiments and system architecture.
The overall goal as detailed above is motivated by a desire to easily emulate the ESP32. This microcontroller is being used as an edge device in the author's energy auction research. As such, the need to easily deploy and destroy multiple containers containing ESP32 loads (with emulators) will facilitate load testing and easy automation for functional tests. Both the Capstone project and follow-on research will benefit from being able to build the ESP32 load directly into a container and then immediately deploy multiple instances, each with a unique identifier, on either a local workstation or Cloud container service. Additionally, the knowledge and understanding gained through digging into the QEMU architecture and APIs will assist with the development and debugging of the edge device application.
The system architecture is outlined in Section II. The experiments performed and corresponding analysis are detailed in Sections III and IV respectively. The final discussion and conclusions are captured in Section VI.
## II System Architecture
The relevant system architecture for this paper primarily consists of the ESP32 microcontroller, its SDK, the QEMU emulator and Docker containers. Visual Studio Code provides extensions that interface both to the ESP32 SDK and the Docker container management. While the VSCode integrated development environment (IDE) does provide convenience features used during the experiments, they will only be mentioned in passing without going into depth on their implementation.
### _ESP32 microcontroller_
The core component of this paper is the ESP32 microcontroller as this is the desired platform to both emulate and virtualize. Figure 1 shows the hardware development kit purchased for this experiment. A micro USB connector provides both power and data. The SoC chip itself can be purchased very cheaply on its own. The dev kit as shown adds a few extra (unnecessary) peripherals such as a breadboard, GPIO breakout pins, LEDs, speakers and a camera. The entire package was purchased for $26.00 on Amazon which also shipped with a set of wires and resistors.
The ESP32 chip is the silver component in the upper-left corner of Figure 1. Its functional block diagram is shown in Figure 2. This diagram shows the richness of the SoC with dual Xtensa CPU cores, Wi-Fi, Bluetooth, RAM, flash memory and a large variety of I/O controllers.
### _Espressif ESP-IDF_
In addition to providing the ESP32 hardware itself, Espressif also maintains an open-source project for a complete SDK called the ESP integrated development framework (ESP-IDF). Version 5.0 is the current release and publicly available on GitHub [4]. ESP-IDF provides a suite of Python scripts that can build the application, flash to the remote ESP32 chip and monitor its stdout via the USB port powering the device.
ESP-IDF also consists of a large set of cmake [5] build projects. Source code is C-language only and includes the FreeRTOS [6] kernel, drivers, support libraries and many sample application projects. The application developed for this paper is one of the sample projects that provides an HTTP server listening on port 80. It provides a simple response message to a request message with the _hello_ context.
Building the application provides a linked binary containing the application, kernel and libraries all compiled for the Xtensa instruction set. However, this monolith cannot run on the ESP32 target without providing some additional components for the flash device. Both a bootloader and a partition table must be present in order for the microcontroller to bootstrap the application load. ESP-IDF provides tools to assemble (merge) a final binary flash image. The layout is shown in Figure 3. This merged binary is now able to be written to the target flash and booted. It is ready for emulation as well.
### _Qemu_
QEMU is a system emulator that provides a virtual model of a machine to run a guest OS. CPU, memory and devices are all part of the emulation. While the vanilla code does support the Xtensa processor, the ESP32 microcontroller currently requires a fork that is maintained by Espressif. The source code project is available on GitHub [7]. This source project has to be built in order to run QEMU and ESP32 on macOS. There are no pre-built versions hosted.
QEMU is a user space application that requires an accelerator (hypervisor) in the host kernel. However, this custom build utilizes Tiny Code Generator (TCG) which is pure emulation. Theoretically, this trades performance for ease of implementation. Features such as a block and character device are built-in which allow emulation of stdio, files, sockets and TCP networking.
### _Containers_
The container engine is provided by Docker [8]. The purpose of this engine is to provide a virtualization of the file system and configuration while not incurring the overhead of booting a completely separate OS kernel. The Docker image used starts with an existing Ubuntu 20.04 file system. The ESP-IDF is cloned into the image as well as a pre-built QEMU from 2022-09-19. Note that in the case of the container, we can use a pre-built QEMU (Linux binaries) that supports the ESP32. Espressif has this tarball available in their GitHub repo. Figure 4 shows a coarse outline of the layers in the image. Each Docker image is composed of layers that can be reused among different images. Each layer typically maps to a Docker build instruction.
Fig. 1: FreeNove ESP32 development board containing the core SoC with some sample I/O devices such as LEDs and speaker.
Fig. 3: The ESP32 flash image layout including bootloader, partition table and application.
Fig. 2: The ESP32 block diagram showing the dual Xtensa cores with a wide variety of I/O controllers built in to the chip.
## III Experiments
The objective of the experiments is to build the ESP32 application, build QEMU, emulate the application via QEMU and containerize both application and QEMU. Once the setup steps are complete, an HTTP interaction with the application is performed to test and analyze the emulation.
### _Build the ESP32 application_
The first step is to download and configure the SDK to build the target application. ESP-IDF is cloned from GitHub [4] and an installation script ensures the necessary binaries and scripts are in the host execution path. Multiple passes were performed. However, the final pass took advantage of the VSCode extension which wrapped the underlying clone and install operations. All build, merge, flash and monitor commands were subsequently run via the _Espressif IDF_ extension in VSCode.
After cloning and installing ESP-IDF, the template project selector was used to create a new project based on the simple HTTP server template. The first pass enabled Wi-Fi through the
CONFIG_EXAMPLE_CONNECT_WIFI=y
project configuration. This repo may be publicly viewed at [https://github.com/zemar/esp_http_server](https://github.com/zemar/esp_http_server). The application was built, merged into a flash image and flashed to the USB-connected ESP32 target board. The monitor command then prints all stdout messages from the USB port.
### _Natively run the ESP32 application_
After loading and monitoring the target board, an HTTP message request was sent from the host. The correct response message contains "Hello World!". The stdout of the target board is also displayed. See Figure 5 for the actual results.
### _Emulating the ESP32 application_
Since the QEMU fork for ESP32 does not maintain pre-built binaries for macOS, the first step is to clone and build the emulator. The following steps are used to accomplish this:
```
git clone https://github.com/espressif/qemu
brew install libgcrypt
./configure --target-list=xtensa-softmmu \
    --enable-gcrypt --enable-debug --enable-sanitizers \
    --disable-strip --disable-user \
    --disable-capstone --disable-vnc \
    --disable-sdl --disable-gtk
ninja -C build
```
Note that the target is specified as "xtensa-softmmu", thus forcing pure software emulation without requiring Apple's hypervisor library. The resulting binary is **qemu-system-xtensa**. However, the ESP32 application is configured for Wi-Fi and there is no pass-through to connect the application network stack to the host Wi-Fi device. The solution is to build the ESP32 with an experimental OpenCores Ethernet MAC driver [9] which provides the _open_eth_ device for configuring the network interface. This driver is able to pass networking transactions through to the host ethernet interface. It is configured by setting the project options:
```
CONFIG_EXAMPLE_CONNECT_ETHERNET=y
CONFIG_EXAMPLE_USE_OPENETH=y
```
and rebuilding.
Our custom QEMU now runs (with emulation) the ESP32 flash image using the command:
```
qemu-system-xtensa -nographic -machine esp32 \
    -nic user,model=open_eth,id=lo0,hostfwd=tcp::8000-:80 \
    -drive file=merged_qemu.bin,if=mtd,format=raw
```
Fig. 4: The Docker container image containing the Ubuntu base, ESP-IDF SDK toolchain and QEMU emulator.
Fig. 5: Screenshot of host sending a “/hello” request with the corresponding “Hello World!” response. Stdout of the ESP32 USB port is also displayed showing the internal handling of the message.
Running the above command on our host results in a successful emulation run as shown in Figure 6.
The HTTP server is able to bind to 10.0.2.15 on the "example_netif_eth" interface and the QEMU TCP forwarding allows port 80 in the guest OS to be forwarded to 8000 on the host. Our HTTP request to port 8000 is then successfully processed with the correct returned response.
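The request/response exchange can also be reproduced with a few lines of Python instead of a browser or curl; the host, port, and expected payload follow the setup above, and only the standard library is used.
```
import urllib.request

# Port 8000 on the host is forwarded to port 80 inside the emulated guest.
with urllib.request.urlopen("http://localhost:8000/hello", timeout=5) as resp:
    body = resp.read().decode()

print(resp.status, body)
assert "Hello World!" in body, "unexpected response from the emulated HTTP server"
```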
### _Containerizing the emulated application_
Building the container image described in Figure 4 is straightforward using the Docker engine client and the following command:
```
docker build -t esp-qemu .
```
With the image built and locally stored, it is run taking the QEMU application command as an argument. This will run immediately upon launching the container. Note that the ESP32 flash image, **merged_qemu.bin** needs to be present in the container at run time. A volume mount of the build directory accomplishes this.
The command to run for the experiment:
```
docker run -it --name esp --rm -p 8000:8000 \
    -v $(pwd)/build:/app \
    esp-qemu:latest qemu-system-xtensa \
    -nographic -machine esp32 \
    -nic user,model=open_eth,id=lo0,hostfwd=tcp::8000-:80 \
    -drive file=merged_qemu.bin,if=mtd,format=raw
```
The above runs the container locally, maps (mounts) the local folder to the container and publishes port 8000 inside the container to the host. The internal version (inside the container) of the QEMU binary is used to run the mapped **merged_qemu.bin**. As shown in Figure 7, an identical response and set of QEMU messages is generated when virtualized in a container compared to running the emulator natively in macOS.
## IV Analysis
Since execution and emulation of the Xtensa instruction set is demonstrated through successfully running the flash binary via QEMU, the primary focus of the analysis is networking. Emulating the Xtensa is a core feature of the Tiny Code Generator within QEMU. However, networking does have different possible code paths and configurations as well as a requirement to interface with the host network stack. Network emulation in QEMU [10] can take 2 forms: TAP and user mode network stack. The former adds a virtual network device on the host (called tapN) and can then be configured as a real ethernet card. The latter was used in this project and will now be described.
As shown in Section III-C, the networking interface was configured via launching QEMU with the
**-nic user,model=open_eth,id=l00,hostfwd=tcp::8000-:80** options. These options configure the user mode network stack without root privileges. Figure 8 shows the resulting virtual network configuration. The QEMU Virtual Machine (VM) behaves as if it was behind a firewall which blocks all incoming connections. The DHCP server assigns addresses to the guests starting from 10.0.2.15.
In order for the host to access the IP ports listening on the guest OS, port forwarding must be configured. The **hostfwd=tcp::8000-:80** takes care of this by forwarding the guest OS port 80 (our HTTP server) to the host port 8000. The forwarding is not specific to any one of the host's network interfaces. For instance, both **l00** and **en0** on the host are listening on port 8000.
From Section III-C, we stated that the ESP32 load was built with support for the OpenCores Ethernet MAC driver [9]. This driver, implemented in QEMU, provides the Media Access Control (MAC) layer in the emulator allowing the guest OS to transmit and receive ethernet frames which are subsequently forwarded to the host MAC. In the QEMU project, **opencores_eth.c** implements a set of functions providing the emulated MAC layer interface. The relevant functions
Fig. 8: The virtual network configuration created by the user mode network stack options.
Fig. 6: Screenshot of host sending a “/hello” request with the corresponding “Hello World!” response. Stdout of the QEMU process is also displayed showing the internal handling of the message.
Fig. 7: Screenshot of host sending a “/hello” request with the corresponding “Hello World!” response. Stdout of the Docker container is also displayed showing the QEMU internal handling of the message.
used in transmitting and receiving ethernet frames to the guest OS are:
* open_eth_desc_write()
* open_eth_desc_read()
* open_eth_reg_read()
* open_eth_reg_write()
* open_eth_min_read()
* open_eth_min_write()
* open_eth_start_xmit()
* open_eth_receive()
* open_eth_receive_desc()
* open_eth_receive_mcast()
* open_eth_update_irq()
### _GDB debugging_
It is possible to start the QEMU session listening on a debug port. The "-s" option allows this. Thus, the new launch command on native macOS:
```
qemu-system-xtensa -nographic -machine esp32 \
    -nic user,model=open_eth,id=lo0,hostfwd=tcp::8000-:80 \
    -drive file=merged_qemu.bin,if=mtd,format=raw -s
```
Now, we can connect and debug the guest OS and application with:
```
xtensa-esp32-elf-gdb esp_http_server.elf \
    -ex "target remote :1234" \
    -ex "monitor system_reset" \
    -ex "tb app_main" -ex "c"
```
This GDB session will provide an interactive prompt and will temporarily break at the application entry point. The FreeRTOS kernel, drivers and HTTP server application is in scope with this debug session. The **xtensa-esp32-elf-gdb** is a ESP32-specific build of GDB that is provided through the ESP-IDF SDK.
In theory, GDB stepping through the ESP32 binary will not show anything unique in the emulated session since the guest OS implements a network stack and simply binds to the MAC address at the lower layer. A more relevant option is to follow the networking code path in the QEMU emulator, specifically the OpenCores Ethernet MAC driver. Unfortunately, a binary with symbols file (i.e. elf) is not generated during a custom build which prevents productive GDB stepping through the **opencores_eth_c** driver. This is the entity that provides networking emulation in our experiments. Also, due to the high transaction rate and asynchronous nature of the network stack operation, GDB stepping will not provide the best analysis of the emulated code path. A better option is to explore function tracing.
### _QEMU function tracing_
QEMU provides a function tracing framework which can be enabled at runtime [11]. The
```
1-trace"open_eth*
```
option will enable tracing on all **opencores_eth.c** driver functions. Thus, we launch QEMU with the following options:
```
qemu-system-xtensa -nographic -machine esp32 \
  -nic user,model=open_eth,id=lo0,hostfwd=tcp::8000-:80 \
  -drive file=merged_qemu.bin,if=mtd,format=raw --trace "open_eth*"
```
The new option results in tracing messages for all functions prefixed with "open_eth". These messages print to stderr. Log options are also available. The trace framework is realized by a "trace()" function call in each of the subsystem API functions. The trace calls outside the enabled "open_eth" set are effectively no-ops, so there is no performance penalty or extraneous messages.
The full stderr trace is too much data to display here. Listing 1 is an abbreviation with many in-between open_eth messages removed.
```
1system_reg_newMac
The next steps in lines 36-49 show the binding of the IP stack to the ethernet interface. Again, the ethernet interface is serviced by the OpenCores driver utilizing the same open_eth function calls to read and write the MAC registers and to transmit data frames.
Lines 50-56 show the application HTTP server starting and binding port 80 with the IP stack. As IP packet headers are stripped and forwarded to the MAC layer, the OpenCores driver services these requests and passes them to the host. The same API is used, with function calls to read the MAC register (line 52) and write the MAC (line 53).
The final set comes after we hit the host port 8000 with an HTTP request. These are shown in lines 57-73. The host command
curl http://localhost:8000/hello
sends an HTTP (IP) request to the guest. When this is broken down into ethernet frames, we see the OpenCores driver receive the request data (line 58) and set the interrupt mask (line 60). The HTTP server then sends its response. The server itself writes a message on line 66 that it has responded to the request from localhost:8000. The data payload contains "Hello World!" and its transmission is handled by OpenCores on line 69.
For a graphical representation of the above analysis, see Figure 9.
## V Future Work
In this paper, we strive to emulate the ESP32 and understand how the medium access control (MAC) layer is handled within the emulation environment. The evaluation is based on successfully passing network traffic between the host and the emulated guest OS in the form of HTTP request/response messages. An important question to ask is: how will this system behave at scale?
A logical next experiment is to evaluate the networking performance of the guest OS. If we take the performance of the native ESP32 hardware as the gold standard, a comparison may be run to determine the potential latency of emulating the ESP32 instructions and the MAC layer. A high rate of HTTP request messages would need to be sent to the target and the corresponding response delay measured. In addition, with the containerizing of this emulation environment, an experiment may be performed to deploy numerous guest OS instances to evaluate how the response delay degrades as multiple emulated MAC instances communicate with the same host MAC.
## VI Conclusion
The goal of this project is to gain familiarity with the ESP32 microcontroller, its build toolchain and SDK, and to explore possibilities to emulate and containerize it. Espressif maintains both the SDK (ESP-IDF) and a customized fork of the QEMU project on GitHub. Additionally, Espressif publishes a VSCode extension which wraps the SDK and provides a convenient integrated development environment for coding. An example HTTP server was easily built and linked with the SDK's suite of support libraries and FreeRTOS kernel. It loads and runs without issue on the FreeNove test board purchased for these experiments.
The QEMU fork for ESP32 maintains pre-built binaries for Linux and can also be natively built for macOS. Both options were explored. The former was run through building a Docker container with the QEMU emulator and mounting the ESP32 load from the host file system at runtime. The latter required building QEMU for macOS and launching with the ESP32 load. Both options successfully emulated the Xtensa instruction set and allowed the ESP32 load to run without error.
While running on the FreeNove board, the ESP32 joined the available Wi-Fi for networking. During emulation, it was configured to use the native ethernet interface. This interface is provided by the OpenCores Ethernet MAC driver that is part of the QEMU project. The driver provides an API to the guest OS network stack to discover the underlying MAC as well as transmit and receive ethernet frames. These frames are forwarded between QEMU and the host OS. QEMU has a trace framework built in with calls placed in all major API functions of every subsystem. For these experiments, the OpenCores driver API activity was observed through launching QEMU with the option to trace all activity in the "open_eth" subsystem. Both the macOS and containerized Linux versions of QEMU ran the HTTP server networking application with no errors.
In conclusion, the ESP32 is a feature-rich and cost-effective system-on-a-chip providing Wi-Fi, Bluetooth and a large selection of I/O devices built into a tiny package. The $10 investment provides a powerful microcontroller that is ideal for car chargers, stand mixers, robotic joints and many other embedded applications. It is possible to natively build the ESP32 load and emulate it through QEMU. Containerizing this combination adds a very convenient way to improve the rapid build and test cycle while scaling to many device deployments on a single host.
Fig. 9: Components of the guest OS and the network data flow to the host. The Tiny Code Generator (TCG) emulates the Xtensa-native instructions while the OpenCores Ethernet MAC driver forwards ethernet traffic to the host MAC.
## Acknowledgements
The author would like to thank Dr Bruce Irvin at Portland State University for the fantastic course he taught in the concepts of operating systems as well as his valuable advice, direction and mentorship.
|
2305.03690 | Counting subtrees of the branching process tree by the number of leaves | We study the distribution of the number of leaves of the subtree chosen
uniformly at random among all the subtrees of the critical branching process
tree at extinction. | Boris Pittel | 2023-05-05T17:00:32Z | http://arxiv.org/abs/2305.03690v1 | # Counting subtrees of the branching process tree by the number of leaves
###### Abstract.
We study the distribution of the number of leaves of the subtree chosen uniformly at random among all the subtrees of the critical branching process tree at extinction.
Key words and phrases:Branching process, random tree, extinction, asymptotics 2020 Mathematics Subject Classification: 60C05; 05C05, 92B10
## 1. Introduction and results
Consider a branching process initiated by a single progenitor. This process is visualized as a growing rooted tree. The root is the progenitor, connected by edges to each of its immediate descendants (children), that are _ordered_, say by seniority. Each of the children becomes the root of the corresponding subtree, so that the children of all these roots are the grandchildren of the progenitor. We obviously get a recursively defined process. It delivers a nested sequence of trees, which is either infinite, or terminates at a moment when none of the current leaves have children.
The classic Galton-Watson branching process is the case when the number of each member's children **(a)** is independent of those numbers for all members from the preceding and current generations and **(b)** has the same distribution \(\{p_{j}\}_{j\geq 0}\), \((\sum_{j}p_{j}=1)\). It is well-known that if \(p_{0}>0\) and \(\sum_{j\geq 0}jp_{j}=1\), then the process terminates with probability \(1\), Harris [7]. A standard example is \(p_{0}=p_{2}=\frac{1}{2}\), in which case we have a binary tree.
Let \(T\) denote the terminal tree. Given a finite rooted tree \(\mathcal{T}\), we have
\[\mathbb{P}(T=\mathcal{T})=\prod_{v\in\mathcal{V}(\mathcal{T})}p_{d(v,\mathcal{ T})},\]
where \(\mathcal{V}(\mathcal{T})\) is the vertex set of \(\mathcal{T}\), and \(d(v,\mathcal{T})\) is the out-degree of vertex \(v\in V(\mathcal{T})\). Introduce \(L=L_{T}\) the total number of leaves of \(T\), i.e. vertices of \(T\) with out-degree \(0\), and \(X(t)=X_{T}(t)\) the total number of the (full) subtrees of \(T\) with \(t\) leaves. So, the total number of subtrees \(\sum_{t\geq 1}X(t)=V(T)=|\mathcal{V}(T)|\). Introduce \(\mathcal{L}=\mathcal{L}(T)\), the number of leaves in the subtree chosen uniformly at random among all \(V=V(T)\) subtrees of \(T\); so, conditionally on \(T\), we have \(\mathbb{P}(\mathcal{L}=t|T)=\frac{X(t)}{V}\), and, conditionally on \(L\) only, \(\mathbb{P}(\mathcal{L}=t|L)=\mathbb{E}\big{[}\frac{X(t)}{V}\big{|}L\big{]}\). It was proved in [15] that \(f(x):=\mathbb{E}[x^{L}]=\sum_{k\geq 1}x^{k}\mathbb{P}(L=k)\) satisfies
\[f(x)=\sum_{j\geq 1}p_{j}f^{j}(x)+p_{0}x,\quad|x|\leq 1. \tag{1.1}\]
Also \(\mathbb{P}(L<\infty)=1\), and assuming that \(\{p_{j}\}\) has a finite variance \(\sigma^{2}\),
\[\mathbb{P}(L=\ell)=\frac{\gamma(2\ell-3)!!}{2^{\ell}\ell!}+O(\ell^{-2})=\Big{(}\tfrac{p_{0}}{2\pi\sigma^{2}}\Big{)}^{1/2}\frac{1}{\ell^{3/2}}+O(\ell^{-2}),\quad\gamma:=\frac{(2p_{0})^{1/2}}{\sigma}, \tag{1.2}\]
\(\sigma^{2}\) being the variance of \(\{p_{j}\}\). (To be sure, we were interested in the case \(p_{1}=0\), but (1.2) holds for all \(p_{1}<1\) as well.) A less sharp formula \(\mathbb{P}(L=\ell)=\big{(}\tfrac{p_{0}}{2\pi\sigma^{2}}\big{)}^{1/2}\ell^{-3/2}(1+o(1))\) has long been known, Kolchin [11] (Ch. 2, Lemma 4).
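For a quick numerical look at (1.2), consider the binary case \(p_{0}=p_{2}=1/2\), \(\sigma^{2}=1\): there (1.1) becomes \(f=(x+f^{2})/2\), so the probabilities obey the convolution recursion \(\mathbb{P}(L=1)=1/2\), \(\mathbb{P}(L=\ell)=\frac{1}{2}\sum_{i}\mathbb{P}(L=i)\,\mathbb{P}(L=\ell-i)\). The short Python sketch below, written for this binary case only, compares the recursion with both forms of (1.2).

```
# Binary case p0 = p2 = 1/2, sigma^2 = 1: P(L = l) from the recursion implied by (1.1).
from math import pi, sqrt, factorial
from fractions import Fraction

N = 1200
P = [0.0] * (N + 1)
P[1] = 0.5
for l in range(2, N + 1):
    P[l] = 0.5 * sum(P[i] * P[l - i] for i in range(1, l))

def dfact(k):                         # double factorial, with (-1)!! = 1
    out = 1
    while k > 1:
        out, k = out * k, k - 2
    return out

c = sqrt(0.5 / (2 * pi))              # (p0 / (2 pi sigma^2))^(1/2)
for l in (5, 50, 500):
    df_form = float(Fraction(dfact(2 * l - 3), 2**l * factorial(l)))  # gamma = 1 here
    print(l, P[l], df_form, c * l**-1.5)
```

The first two columns agree (the double-factorial form happens to be exact in the binary case), while the last column is the \(\ell^{-3/2}\) asymptote, approached with relative error of order \(\ell^{-1/2}\).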
**Theorem 1.1**.: **(i)** _For the binary case \(p_{0}=p_{2}=1/2\),_
\[\mathbb{P}(\mathcal{L}=t|L=\ell)=\tfrac{t}{\ell(2\ell-1)}\cdot\tfrac{\binom{ \ell}{t}^{2}}{\binom{2(\ell-1)}{2(t-1)}}.\]
**(ii)** _Consequently, for \(t=o(\ell)\) we have_
\[\mathbb{P}(\mathcal{L}=t|L=\ell)=\tfrac{1+O(\ell^{-1/2}+t/\ell)}{2^{2t-1}t} \binom{2(t-1)}{t-1}\!=\!(1+O(\ell^{-1/2}+t/\ell))\mathbb{P}(L=t),\]
_implying tightness of the sequence of distributions \(\{\mathbb{P}(\mathcal{L}=t|L=\ell)\}_{t\geq 1}\). However, \(\mathbb{E}[\mathcal{L}|L=\ell]\sim\frac{\sqrt{\pi\ell}}{2}\), as \(\ell\to\infty\)._
**Note.** So, for large \(\ell\), the number of leaves in the subtree chosen at random from all \((2\ell-1)\) subtrees of the extinction tree with \(\ell\) leaves, is distributed asymptotically as the number of leaves in the extinction tree.
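Part (i) can also be checked by brute force for small \(\ell\): conditionally on \(L=\ell\), the binary terminal tree is uniform over the full binary trees with \(\ell\) leaves (each such tree has probability \((1/2)^{2\ell-1}\)), and \(V=2\ell-1\). A small Python sketch:

```
# Exhaustive check of Theorem 1.1(i) for a small number of leaves (binary case).
from math import comb
from fractions import Fraction

def trees(n):
    """All full binary trees with n leaves; a leaf is None, an inner node a pair."""
    if n == 1:
        return [None]
    return [(lt, rt) for k in range(1, n) for lt in trees(k) for rt in trees(n - k)]

def subtree_leaves(tree):
    """Leaf counts of all (full) subtrees; the last entry is the whole tree."""
    if tree is None:
        return [1]
    ls, rs = subtree_leaves(tree[0]), subtree_leaves(tree[1])
    return ls + rs + [ls[-1] + rs[-1]]

ell = 6
forest, V = trees(ell), 2 * ell - 1
for t in range(1, ell + 1):
    emp = Fraction(sum(subtree_leaves(tr).count(t) for tr in forest), len(forest) * V)
    thy = Fraction(t, ell * V) * Fraction(comb(ell, t) ** 2, comb(2 * ell - 2, 2 * t - 2))
    assert emp == thy
print("Theorem 1.1(i) checked for ell =", ell)
```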
Our second result is for a general critical distribution \(\{p_{j}\}_{j\geq 0}\).
**Theorem 1.2**.: **(i)** _For each fixed \(t\geq 1\),_
\[\lim_{\ell\to\infty}\mathbb{P}(\mathcal{L}=t|L=\ell)=(1-p_{1})\mathbb{P}(L=t).\]
_Since \(\sum_{t\geq 1}\mathbb{P}(L=t)=1\), we see that the sequence of the distributions \(\{\mathbb{P}(\mathcal{L}=t|L=\ell)\}_{1\leq t\leq\ell}\), \((\ell\geq 1)\), is not tight, unless (like in the binary case) \(p_{1}=0\). In words, \(p_{1}\) is the limiting deficit of the leaf-set size distribution for the random subtree._ **(ii)** _More precisely,_
\[\mathbb{P}\Big{(}\mathcal{L}>\tfrac{\ell^{1/2}}{\log^{2}\ell}\Big{|}L=\ell \Big{)}=p_{1}+O(\ell^{-1/4}\log\ell),\]
_so that, conditioned on \(\{L=\ell\}\), \(\mathcal{L}\) exceeds \(\tfrac{\ell^{1/2}}{\log^{2}\ell}\) with conditional probability bounded away from zero as \(\ell\to\infty\)._
**Note.** Since \(p_{1}<1\), the equation (1.1) is equivalent to
\[f(x)=\sum_{j\geq 2}p_{j}^{\prime}f^{j}(x)+p_{0}^{\prime}x,\quad p_{j}^{\prime}= \tfrac{p_{j}}{1-p_{1}},\ j=0,2,3,\ldots,\]
and \(p^{\prime}_{0},p^{\prime}_{2},p^{\prime}_{3},\dots\) is a probability distribution, with \(\sum_{j}jp^{\prime}_{j}=1\) again, but with \(p^{\prime}_{1}=0\). Therefore \(L\) and \(L^{\prime}\), the number of leaves in the terminal tree \(T^{\prime}\) associated with \(\{p^{\prime}_{j}\}\), are _equidistributed_. So, the part **(i)** can be interpreted as saying that, conditioned on the event "no father has a single child", the uniformly random subtree of \(T\), in the limit, has the number of leaves distributed as that for the tree \(T^{\prime}\).
As a source for our inspiration, we should mention the Russian mathematician Valentin Kolchin who pioneered and championed the study of the connection between conditional branching processes and combinatorics of random trees since the mid-seventies, Kolchin [9], [10], and [11]. We refer the reader to David Aldous [1] and Svante Janson [8] for two, 20 years apart, encyclopedic surveys of limit results for the conditioned and the simply generated trees, without convergence rates, that analyze a rich variety of fringe distributions. The distribution of random subtree size is listed in [1] as one of the basic problems in this class of distributions.
## 2. Generating functions identities
Here are two useful identities that follow from implicit differentiation of (1.1) for \(f(x)\):
\[\sum_{j\geq 1}jp_{j}f^{j-1}(x)=1-\tfrac{p_{0}}{f^{\prime}(x)},\quad\sum_{j\geq 1}j(j-1)p_{j}f^{j-2}(x)=\tfrac{p_{0}f^{\prime\prime}(x)}{(f^{\prime}(x))^{3}}. \tag{2.1}\]
In particular, since \(f(0)=0\), we have \(f^{\prime}(0)=\tfrac{p_{0}}{1-p_{1}}\).
Our first task is to derive an equation for \(f(\ell,t)=\mathbb{E}[X(t)\mathbb{I}(L=\ell)]\), where \(\mathbb{I}(B)\) stands for the indicator of an event \(B\). Of course, \(f(\ell,t)=0\) for \(\ell<t\), and \(f(t,t)=\mathbb{P}(L=t)\). Consider \(\ell>t\). With probability \(p_{j}\) the root has \(j\) children; let \(X_{i}(t)\) denote the number of subtrees with \(t\) leaves in the tree rooted at the \(i\)-th child. Then
\[f(\ell,t)=\mathbb{E}\big{[}X(t)\mathbb{I}(L=\ell)\big{]}=\sum_{j \geq 1}p_{j}\mathbb{E}\bigg{[}\Big{(}\sum_{i\in[j]}X_{i}(t)\Big{)}\cdot \mathbb{I}\Big{(}\sum_{i^{\prime}\in[j]}L_{i^{\prime}}=\ell\Big{)}\bigg{]}\\ =\sum_{j\geq 1}jp_{j}\sum_{\mu\leq\ell}\mathbb{E}\big{[}X(t) \mathbb{I}(L=\mu)\big{]}\cdot\mathbb{P}\Big{(}\sum_{i\in[j]\setminus\{1\}}L_{ i}=\ell-\mu\Big{)}\\ =\sum_{j\geq 1}jp_{j}\sum_{\mu\leq\ell}f(\mu,t)\cdot[x^{\ell-\mu}]f^ {j-1}(x)\\ =\sum_{\mu\leq\ell}f(\mu,t)\big{[}x^{\ell-\mu}\big{]}\sum_{j\geq 1 }jp_{j}f^{j-1}(x).\]
So, in combination with (2.1), we obtain
\[f(\ell,t)=\sum_{\mu\leq\ell}f(\mu,t)\cdot\big{[}x^{\ell-\mu}\big{]}\big{(}1-\tfrac{ p_{0}}{f^{\prime}(x)}\big{)},\quad\ell>t. \tag{2.2}\]
And we remind that \(f(t,t)=\mathbb{P}(L=t)\). The convolution on the RHS of (2.2) positively dictates usage of generating functions. For \(|y|<1\), using \(f^{\prime}(0)=\frac{p_{0}}{1-p_{1}}\), we have
\[\sum_{\ell\geq t}y^{\ell}f(\ell,t)=y^{t}\,\mathbb{P}(L=t)-y^{t}\, \mathbb{P}(L=t)\cdot[x^{0}]\big{(}1-\tfrac{p_{0}}{f^{\prime}(x)}\big{)}\\ +\sum_{\ell\geq t}y^{\ell}\sum_{\mu=t}^{\ell}f(\mu,t)\cdot[x^{ \ell-\mu}]\big{(}1-\tfrac{p_{0}}{f^{\prime}(x)}\big{)}\\ =(1-p_{1})y^{t}\,\mathbb{P}(L=t)+\sum_{\mu\geq t}f(\mu,t)y^{\mu} \sum_{\ell\geq\mu}y^{\ell-\mu}\cdot[x^{\ell-\mu}]\big{(}1-\tfrac{p_{0}}{f^{ \prime}(x)}\big{)}\\ =(1-p_{1})y^{t}\,\mathbb{P}(L=t)+\big{(}1-\tfrac{p_{0}}{f^{\prime }(y)}\big{)}\sum_{\mu\geq t}f(\mu,t)y^{\mu},\]
implying that
\[\sum_{\ell\geq t}y^{\ell}f(\ell,t)=\tfrac{1-p_{1}}{p_{0}}y^{t}f^{\prime}(y)\, \mathbb{P}(L=t). \tag{2.3}\]
Recalling that \(f(\ell,t)=\mathbb{E}[X(t)\mathbb{I}(L=\ell)]\), and using \([y^{s}]f^{\prime}(y)=(s+1)\cdot[y^{s+1}]f(y)\), we arrive at
**Lemma 2.1**.: \[\mathbb{E}[X(t)|L=\ell]=\tfrac{1-p_{1}}{p_{0}}\tfrac{[y^{\ell-t}]f^{\prime}(y )\times\mathbb{P}(L=t)}{\mathbb{P}(L=\ell)}=\tfrac{1-p_{1}}{p_{0}}\,\tfrac{( \ell-t+1)\mathbb{P}(L=\ell-t+1)\,\mathbb{P}(L=t)}{\mathbb{P}(L=\ell)}.\]
## 3. Proof of Theorem 1.1
Notice that \(V=2\ell-1\) on the event \(\{L=\ell\}\) for the binary case.
Proof.: By Lemma 2.1 (first identity), with a bit of elementary work, we have
\[\mathbb{P}(\mathcal{L}=t|L=\ell) =\tfrac{1}{2\ell-1}\mathbb{E}[X(t)|L=\ell]=\tfrac{2}{2\ell-1} \tfrac{[y^{\ell-t}]f^{\prime}(y)\times[x^{t}]f(x)}{[x^{\ell}]f(x)}\] \[=\tfrac{1}{2\ell-1}\cdot\tfrac{[y^{\ell-t}](1-y)^{-1/2}\times[x^{t}](-(1-x)^{1/2})}{[x^{\ell}](-(1-x)^{1/2})}=\tfrac{t}{\ell(2\ell-1)}\cdot\tfrac{\binom{\ell}{t}^{2}}{\binom{2(\ell-1)}{2(t-1)}}.\]
The asymptotic formula \(\mathbb{E}[\mathcal{L}|L=\ell]\sim\tfrac{\sqrt{\pi\ell}}{2}\) follows easily from
\[t\cdot\mathbb{P}(\mathcal{L}=t|L=\ell)=\tfrac{\ell}{2\ell-1}\frac{\binom{2(t- 1)}{t-1}\binom{2(\ell-t)}{\ell-t}}{\binom{2(\ell-1)}{\ell-1}},\]
the formula \(\binom{2m}{m}\sim\frac{2^{2m}}{\sqrt{\pi m}}\), \((m\to\infty)\), and \(\int_{0}^{1}\frac{dx}{\sqrt{x(1-x)}}=\pi\). Finally, by (1.2) and the second identity in Lemma 2.1, we have
\[\mathbb{P}(\mathcal{L}=t|L=\ell)=\tfrac{1}{2\ell-1}\mathbb{E}[X(t )|L=\ell]\\ =\tfrac{2(\ell-t+1)(\ell-t+1)^{-3/2}}{(2\ell-1)\ell^{-3/2}}\cdot \big{(}1+O((\ell-t+1)^{-1/2})\big{)}\mathbb{P}(L=t)\\ =(1+O(\ell^{-1/2}+t/\ell))\mathbb{P}(L=t),\]
provided that \(t=o(\ell)\).
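The asymptotics of the conditional mean can be checked numerically from the identity for \(t\cdot\mathbb{P}(\mathcal{L}=t|L=\ell)\) displayed above; a short Python sketch (binary case only):

```
# E[cal(L) | L = ell] for the binary case, versus sqrt(pi*ell)/2.
from math import comb, pi, sqrt
from fractions import Fraction

for ell in (100, 1000):
    S = sum(comb(2 * t - 2, t - 1) * comb(2 * (ell - t), ell - t)
            for t in range(1, ell + 1))
    mean = ell / (2 * ell - 1) * float(Fraction(S, comb(2 * ell - 2, ell - 1)))
    print(ell, mean, sqrt(pi * ell) / 2)
```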
## 4. Proof of Theorem 1.2
For a general \(\{p_{j}\}_{j\geq 0}\) with \(p_{0}>0\), \(\sum_{j}jp_{j}=1\), \(V=\sum_{s}X(s)\) is random on the event \(\{L=\ell\}\), i.e. the number of the subtrees of the extinction tree to choose from is random. So, here \(\mathbb{P}(\mathcal{L}=t|L=\ell)=\mathbb{E}\big{[}\frac{X(t)}{V}\big{|}L=\ell \big{]}.\) The ratios of dependent random variables can be problematic for evaluation, precise or even asymptotic, of their moments. Fortunately, here we hope that, for \(\ell\) large, conditionally on \(\{L=\ell\}\), the distribution of \(V\) is sharply concentrated around \(\mathbb{E}[V|L=\ell]\). If so, we can expect that for large \(\ell\), \(\mathbb{P}(\mathcal{L}=t|L=\ell)\sim\frac{\mathbb{E}[X(t)|L=\ell]}{\mathbb{E}[ V|L=\ell]}\) for a wide range of \(t<\ell\). And the last fraction is perfectly amenable to asymptotic analysis.
To prove concentration, let us introduce \(g(y,\ell)=\mathbb{E}[y^{V}\mathbb{I}(L=\ell)]\), \((\ell\geq 1)\), so that \(g(y,1)=p_{0}y\), and define \(G(y,x):=\sum_{\ell\geq 1}x^{\ell}g(y,\ell)\). For \(\ell>1\), we have
\[g(y,\ell)=y\sum_{j\geq 1}p_{j}\sum_{\ell_{1}+\cdots+\ell_{j}=\ell}\,\prod_{i \in[j]}g(y,\ell_{i})=[x^{\ell}]\,y\sum_{j\geq 1}p_{j}G^{j}(y,x).\]
Consequently
\[G(y,x)=p_{0}yx+y\sum_{j\geq 1}p_{j}G^{j}(y,x). \tag{4.1}\]
We use (4.1) to get \(G^{\prime}_{y}(1,x)\) and \(G^{{}^{\prime\prime}}_{y}(1,x)\), since
\[\begin{split}& G^{\prime}_{y}(1,x)=\mathbb{E}[Vx^{L}]\Longrightarrow \mathbb{E}[V\mathbb{I}(L=\ell)]=[x^{\ell}]G^{\prime}_{y}(1,x),\\ & G^{{}^{\prime\prime}}_{y}(1,x)=\mathbb{E}[V(V-1)x^{L}] \Longrightarrow\mathbb{E}[V(V-1)\mathbb{I}(L=\ell)]=[x^{\ell}]G^{{}^{\prime \prime}}_{y}(1,x).\end{split} \tag{4.2}\]
We have
\[G(1,x) =\sum_{\ell\geq 1}x^{\ell}g(1,\ell)=\sum_{\ell\geq 1}x^{\ell}\mathbb{P}(L =\ell)=f(x),\] \[G^{\prime}_{y}(1,x) =\bigg{(}p_{0}x+\sum_{j\geq 1}p_{j}G^{j}(y,x)+y\sum_{j\geq 1}jp_{j}G^{ j-1}(y,x)G^{\prime}_{y}(y,x)\bigg{)}\Big{|}_{y=1}\] \[=f(x)+\bigg{(}\sum_{j\geq 1}jp_{j}f^{j-1}(x)\bigg{)}G^{\prime}_{y}(1,x),\] \[G^{{}^{\prime\prime}}_{y}(1,x) =\bigg{(}2\sum_{j\geq 1}jp_{j}G^{j-1}(y,x)G^{\prime}_{y}(y,x)\] \[+y\sum_{j\geq 1}jp_{j}\big{[}(j-1)G^{j-2}(y,x)(G^{\prime}_{y}(y,x ))^{2}+G^{j-1}(y,x)G^{{}^{\prime\prime}}_{y}(y,x)\big{]}\bigg{)}\Big{|}_{y=1}\] \[=2G^{\prime}_{y}(1,x)\sum_{j\geq 1}jp_{j}f^{j-1}(x)\] \[+(G^{\prime}_{y}(1,x))^{2}\sum_{j\geq 1}j(j-1)p_{j}f^{j-2}(x)+G ^{{}^{\prime\prime}}_{y}(1,x)\sum_{j\geq 1}jp_{j}f^{j-1}(x).\]
Therefore
\[G^{\prime}_{y}(1,x) =\frac{f(x)}{1-\sum_{j}jp_{j}f^{j-1}(x)},\] \[G^{{}^{\prime\prime}}_{y}(1,x) =\frac{\sum_{j}j(j-1)p_{j}f^{j}(x)}{\bigg{(}1-\sum_{j}jp_{j}f^{j- 1}(x)\bigg{)}^{3}}+\frac{2\bigg{(}\sum_{j}jp_{j}f^{j}(x)\bigg{)}}{\bigg{(}1- \sum_{j}jp_{j}f^{j-1}(x)\bigg{)}^{2}}. \tag{4.3}\]
So, combining (4.2), (4.3), and (2.1), we conclude that
\[\mathbb{E}[V\mathbb{I}(L=\ell)] =[x^{\ell}]\frac{f(x)f^{\prime}(x)}{p_{0}},\] \[\mathbb{E}[V(V-1)\mathbb{I}(L=\ell)] =\frac{1}{p_{0}^{2}}\cdot[x^{\ell}]\big{(}f^{{}^{\prime\prime}}( x)f^{2}(x)\big{)},\] \[\quad+2\cdot[x^{\ell}]\big{(}\frac{(f^{\prime}(x))^{2}f(x)}{p_{0} ^{2}}-\frac{f^{\prime}(x)f(x)}{p_{0}}\big{)}. \tag{4.4}\]
Let us evaluate these expectations. Time for some complex analysis. Using the Weierstrass preparation theorem (see Ebeling [4], Krantz and Parks [12]), we proved in [15] that \(f(z)=\sum_{k\geq 1}z^{k}\,\mathbb{P}(L=k)\), \(z\in\mathbb{C}\), \(|z|<1\), admits an analytic extension \(F(z)\) to an open disc \(D\) centered at the origin, of radius \(\rho>1\), _minus_ a cut \([1,\rho)\) such that, for \(z\in D\setminus[1,\rho)\), \(z\to 1\),
\[F(z)=1-\gamma(1-z)^{1/2}+\sum_{s>1}\gamma_{s}(1-z)^{s/2},\ \gamma:=(2p_{0})^{1/2}\sigma^{-1}. \tag{4.5}\]
Here \(\xi^{1/2}:=|\xi|^{1/2}\exp(i\mathrm{Arg}(\xi)/2)\), \(\mathrm{Arg}(\xi)\in(-\pi,\pi)\). So, ever so slightly _above_ the cut \(\{z=x,x\in[1,\rho)\}\), we have
\((1-x)^{1/2}=-i|1-x|^{1/2}\). (We encourage the interested reader to check Bender [2], Canfield [3], Meir and Moon [13], [14] and Flajolet and Sedgewick [5], for a remarkable story of how classic complex analysis techniques found their way into analytic combinatorics.)
Since \(\frac{F(z)F^{\prime}(z)}{z^{\ell+1}}\) is integrable on the circular contour \(\mathcal{C}_{r}=\{z=re^{i\theta},\,\theta\in(-\pi,\pi]\}\), \(r=1\), we use the Cauchy integral theorem to write
\[[x^{\ell}]\big{(}f(x)f^{\prime}(x)\big{)}=\tfrac{1}{2\pi i}\oint _{\mathcal{C}_{1}}\tfrac{F(z)F^{\prime}(z)}{z^{\ell+1}}\,dz\\ =\tfrac{1}{2\pi i}\oint_{\mathcal{C}_{\rho}}\tfrac{F(z)F^{\prime} (z)}{z^{\ell+1}}+\tfrac{1}{2\pi i}\int_{1}^{\rho}\tfrac{2i\operatorname{Im}(F (x)F^{\prime}(x))}{x^{\ell+1}}\,dx.\]
For the second line we replaced \(\mathcal{C}_{1}\) with the _limit_ contour. It consists of the directed circle \(z=\rho e^{i\alpha}\), \(0\leq\alpha\leq 2\pi\), with the single point \(z=\rho\) pinched out, and a detour part formed by two opposite-directed line segments, one from \(z=\rho e^{i(2\pi-0)}\) to \(z=e^{i(2\pi-0)}\), and another from \(z=e^{i(+0)}\) to \(z=\rho e^{i(+0)}\). The \(2i\operatorname{Im}(F(x)F^{\prime}(x))\) appears because the values of \(F(x)F^{\prime}(x)\) just above and just below \([1,\rho)\) are complex-conjugate, so the real parts cancel each other, and \(\operatorname{Im}(F(x)F^{\prime}(x))\) comes from \(z\)'s approaching \(x\in[1,\rho)\) from above. The first integral at the bottom is of order \(O(\rho^{-\ell})\). Further, using (4.5) and the formula 3.191(2) from Gradshteyn and Ryzhik [6], we get
\[\tfrac{1}{2\pi i}\int_{1}^{\rho}\tfrac{2i\operatorname{Im}(F(x)F ^{\prime}(x))}{x^{\ell+1}}\,dx=\tfrac{\gamma}{2\pi}\int_{1}^{\rho}\tfrac{(x-1 )^{-1/2}+O(1)}{x^{\ell+1}}\,dx\\ =\tfrac{\gamma}{2\pi}\int_{1}^{\infty}\tfrac{(x-1)^{-1/2}}{x^{ \ell+1}}\,dx+O(1/\ell)=\tfrac{\gamma(2\ell-1)!!}{2^{\ell+1}\ell!}+O(1/\ell);\]
the explicit term is of order \(\ell^{-1/2}\) exactly. Therefore
\[\mathbb{E}[V\mathbb{I}(L=\ell)]=\tfrac{\gamma(2\ell-1)!!}{p_{0}2^{\ell+1} \,\ell!}+O(1/\ell). \tag{4.6}\]
Now, by (1.2), we have \(\mathbb{P}(L=\ell)=\tfrac{\gamma(2\ell-3)!!}{2^{\ell}\ell!}+O(\ell^{-2})\). This and (4.6) imply that
\[\mathbb{E}[V|L=\ell]=\tfrac{\mathbb{E}[V\mathbb{I}(L=\ell)]}{\mathbb{P}(L=\ell )]}=\tfrac{\ell}{p_{0}}+O(\ell^{1/2}). \tag{4.7}\]
Turn to the second identity in (4.4). Similarly to the above computation, we obtain
\[\tfrac{1}{p_{0}^{2}}\cdot[x^{\ell}]\big{(}f^{\prime\prime}(x)f^{2}(x )\big{)}=\tfrac{\gamma(2\ell+1)!!}{(2p_{0})^{2}2^{\ell}\ell!}+O(1),\] \[\quad 2\cdot[x^{\ell}]\big{(}\tfrac{(f^{\prime}(x))^{2}f(x)}{p_{0}^ {2}}-\tfrac{f^{\prime}(x)f(x)}{p_{0}}\big{)}=O(1),\] \[\Longrightarrow \mathbb{E}[V(V-1)\mathbb{I}(L=\ell)]=\tfrac{\gamma(2\ell+1)!!}{(2 p_{0})^{2}2^{\ell}\ell!}+O(1),\] \[\Longrightarrow \mathbb{E}[V^{2}\mathbb{I}(L=\ell)]=\tfrac{\gamma(2\ell+1)!!}{(2 p_{0})^{2}2^{\ell}\ell!}+O(1),\] \[\Longrightarrow \mathbb{E}[V^{2}|L=\ell]=\big{(}\tfrac{\ell}{p_{0}}\big{)}^{2}+O (\ell^{3/2}). \tag{4.8}\]
Putting (4.7) and (4.8), we conclude that
\[\begin{split}\operatorname{var}(V|L=\ell)&:= \mathbb{E}\big{[}(V-\mathbb{E}[V|L=\ell])^{2}|L=\ell\big{]}\\ &=\mathbb{E}[V^{2}|L=\ell]-\mathbb{E}^{2}[V|L=\ell]=O(\ell^{3/2}).\end{split} \tag{4.9}\]
Recall that \(V\) is the number of vertices in the terminal tree \(T\), and \(X(t)\) is the number of vertices whose descendant tree has \(t\) leaves. Thus \(\{\tfrac{X(t)}{V}\}_{t\geq 1}\) is the distribution of \(Y\), the number of leaves in the total descendant subtree rooted at the uniformly random vertex, _conditioned_ on \(T\). We use (4.9), _and_\(X(t)\leq\tfrac{\ell}{t}\), which holds since no two subtrees each with \(t\) leaves intersect, to estimate
\[\mathbb{E}\bigg{[}\Big{(}\tfrac{X(t)}{V}-\tfrac{X(t)}{\mathbb{E}[V|L=\ell]} \Big{)}^{2}\,\Big{|}L=\ell\bigg{]}\leq\mathbb{E}\Big{[}\big{(}\tfrac{X(t)}{ \ell^{2}}\big{)}^{2}\!\operatorname{var}(V|L=\ell)\Big{]}=O(\ell^{-1/2}t^{-2}).\]
So, by the Cauchy-Schwarz inequality, we have, uniformly for all \(t\geq 1\),
\[\mathbb{E}\big{[}\tfrac{X(t)}{V}|L=\ell]-\tfrac{\mathbb{E}[X(t)|L=\ell]}{E[V| L=\ell]}=O(\ell^{-1/4}t^{-1}).\]
or equivalently
\[\mathbb{P}(\mathcal{L}=t|L=\ell)=\tfrac{\mathbb{E}[X(t)|L=\ell]}{E[V|L=\ell]} +O(\ell^{-1/4}t^{-1}).\]
Combining this formula with Lemma 2.1 and (4.7), we obtain
\[\mathbb{P}(\mathcal{L}=t|L=\ell)=(1-p_{1})\tfrac{(\ell-t+1)\,\mathbb{P}(L= \ell-t+1)\,\mathbb{P}(L=t)}{\ell\,\mathbb{P}(L=\ell)}+O(\ell^{-1/4}t^{-1}), \tag{4.10}\]
uniformly for \(t\leq\ell\). Applying the formula (1.2) to (4.10), we conclude
\[\lim_{\ell\to\infty}\mathbb{P}(\mathcal{L}=t|L=\ell)=(1-p_{1})\mathbb{P}(L=t).\]
Since \(\sum_{t\geq 1}\mathbb{P}(L=t)=1\), we see that the sequence of the distributions \(\{\mathbb{P}(\mathcal{L}=t|L=\ell)\}_{1\leq t\leq\ell}\), \((\ell\geq 1)\), is not tight, unless (as in the binary case) \(p_{1}=0\). In words, \(p_{1}\) is the limiting deficit of the leaf-set size distribution for the random subtree.
Let us make this claim more precise. Introduce \(\tau=\frac{\ell^{1/2}}{\log^{2}\ell}\). For \(t\leq\tau\), it follows from (1.2) that
\[\frac{(\ell-t+1)\,\mathbb{P}(L=\ell-t+1)}{\ell\,\mathbb{P}(L=\ell)}=1+O(\tau \ell^{-1}+\ell^{-1/2})=1+O(\ell^{-1/2}),\]
implying that
\[(1-p_{1})\sum_{t\leq\tau}\frac{(\ell-t+1)\,\mathbb{P}(L=\ell-t+1 )\,\mathbb{P}(L=t)}{\ell\,\mathbb{P}(L=\ell)}=(1-p_{1})\sum_{t\leq\tau} \mathbb{P}(L=t)+O(\ell^{-1/2})\\ =1-p_{1}+O(\mathbb{P}(L>\tau))+O(\ell^{-1/2})=1-p_{1}+O\big{(} \ell^{-1/4}\log\ell\big{)}.\]
Therefore
\[\mathbb{P}(\mathcal{L}>\tau|L=\ell)=1-(1-p_{1})\sum_{t\leq\tau} \frac{(\ell-t+1)\,\mathbb{P}(L=\ell-t+1)\,\mathbb{P}(L=t)}{\ell\,\mathbb{P}(L =\ell)}\\ +O(\ell^{-1/4}\log\ell)=p_{1}+O(\ell^{-1/4}\log\ell).\]
This completes the proof of Theorem 1.2.
**Acknowledgment.** I am grateful to David Aldous for helping me to put this work into a proper perspective.
|
2302.08762 | Gravitational form factors of a kink in $1+1$ dimensional $φ^4$ model | We calculate the one-loop correction to the distribution of energy-momentum
tensor around a kink in $1+1$ dimensional $\phi^4$ model. We employ the
collective coordinate method to eliminate the zero mode that gives rise to
infrared divergence. The ultraviolet divergences are removed by vacuum
subtraction and mass renormalization. We obtain an analytic result that is
finite and satisfies the momentum conservation. The total energy of the kink
obtained from the spatial integral of energy density reproduces the known
result. Our result obtained on a finite space has a spatially-uniform term that
is inversely proportional to the spatial length. | Hiroaki Ito, Masakiyo Kitazawa | 2023-02-17T08:58:47Z | http://arxiv.org/abs/2302.08762v2 | # Gravitational form factors of a kink in \(1+1\) dimensional \(\phi^{4}\) model
###### Abstract
We calculate the one-loop correction to the distribution of energy-momentum tensor around a kink in \(1+1\) dimensional \(\phi^{4}\) model. We employ the collective coordinate method to eliminate the zero mode that gives rise to infrared divergence. The ultraviolet divergences are removed by vacuum subtraction and mass renormalization. We obtain an analytic result that is finite and satisfies the momentum conservation. The total energy of the kink obtained from the spatial integral of energy density reproduces the known result. Our result obtained on a finite space has a spatially-uniform term that is inversely proportional to the spatial length.
## 1 Introduction
Energy-momentum tensor (EMT), \(T^{\mu\nu}(x)\), is a fundamental observable in physics that is closely related to space-time symmetry. EMT plays indispensable roles in classical and quantum field theories for various purposes; for example, its individual components, energy and momentum densities, and stress tensor, are basic quantities having definite physical meanings.
Recently, there has been remarkable progress in using the EMT operator for investigating localized systems in quantum field theory. An example is the experimental investigations of the gravitational form factors (GFF) of hadrons [1; 2; 3; 4], that is the matrix element of the
EMT operator [5; 6; 7; 8; 9; 10; 11; 12]. The GFF in the coordinate space represent the mechanical structure of hadrons [7; 13], and provide us with novel insights into the hadron structure. Their detailed study is one of the central goals of the Electron-Ion Collider (EIC) [14], and precise experimental data will be provided in the future. The measurement of the GFF on the lattice is also ongoing [15; 16].
Another progress has been made in the numerical analysis of static-quark systems in lattice gauge theory. Thanks to an efficient method to measure the expectation value of the EMT operator on the lattice [17; 18; 19; 20; 21; 22; 23] based on the gradient flow [24; 25], detailed analysis of the local distribution of EMT in various non-uniform and non-isotropic systems has been realized [26; 27; 28]. In particular, the numerical result of the static quark-anti-quark (\(Q\bar{Q}\)) system [26] has revealed the formation of the flux tube and its mechanical structure in terms of the gauge-invariant observable.
In these localized systems, quantum effects should play crucial roles in determining the EMT distribution. For example, in the \(Q\bar{Q}\) system it is known that the width of the flux tube becomes larger with increasing the \(Q\bar{Q}\) distance due to quantum string vibrations [29; 30; 31; 32]. Its importance is also suggested from the comparison of the lattice result in Ref. [26] with the classical EMT distribution around the flux tube in the dual superconductor model [33]. The pressure anisotropy induced by boundaries also arises from purely quantum effects [27; 34]. To understand these experimental and numerical results, therefore, investigations of the quantum effects on the EMT distribution are inevitable.
In the present study, as a trial of such investigations, we focus on the kink in the \(1+1\) dimensional scalar \(\phi^{4}\) theory and calculate the EMT distribution around it incorporating quantum effects to one-loop order. The kink, which is also called the soliton, is a localized and stable classical solution in this theory that connects two degenerate vacua [35]. Its properties and applications have been discussed actively for more than half a century [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. However, its EMT distribution at the quantum level has not been understood well to the best of the authors' knowledge. As for related studies, the quantum correction to the _total_ energy of the kink has been calculated at one-loop order in the renowned paper by Dashen et al. [36], and the result has been confirmed in many works [35; 50; 51; 52; 53; 54; 55]. Also, there are several attempts to calculate the energy _density_ [43; 48], i.e. the expectation value of \(T^{00}(x)\). However, these studies have not investigated the spatial component \(T^{11}(x)\). In the present study, we calculate all components simultaneously. We show that our result satisfies momentum conservation. However, the expectation value of \(T^{00}(x)\) does not agree with any of those in Refs. [43; 48], while in all of these results the spatial integral of \(T^{00}(x)\) reproduces the total energy of Ref. [36].
In this analysis, we face a difficulty arising from the zero mode in the fluctuations around the classical solution, which physically represents the space translation of the kink. The zero mode causes an infrared divergence in the perturbative expansion. It also brings about a conceptual difficulty in the definition of the EMT distribution around the kink in quantum systems, since the location of the kink is not fixed in the quantum ground state. It is known that these problems are resolved by employing the collective coordinate method (CCM) [56; 57; 58; 59], in which the zero mode is eliminated by promoting the coordinate of the kink to a dynamical variable. The CCM also allows us to define the EMT distribution of
the kink around its center-of-mass frame, which is the Fourier transform of the GFF [7; 35]. We will discuss these issues in Sec. 3.
The analysis at one-loop order also has ultraviolet (UV) divergences. We eliminate them in two steps; vacuum subtraction and mass renormalization. For the former, we employ the same procedure as in Ref. [36], which is named the mode-number cutoff (MNC) scheme [60]. In this method, the subtraction between the kink and vacuum sectors is performed in a finite system of length \(L\) assuming that each sector has the same mode numbers. The result after the vacuum subtraction is still logarithmically divergent, which can be removed by mass renormalization.
We show that our result of \(T^{00}(x)\) and \(T^{11}(x)\) obtained at the spatial length \(L\) has a constant term proportional to \(1/L\). This term has a finite contribution to the total energy in the \(L\to\infty\) limit, while it vanishes in the local EMT distribution. The total energy in Ref. [36] is reproduced including this contribution. This result means that the integral of the local EMT distribution defined in the \(L\to\infty\) limit is not consistent with the result in Ref. [36].
This paper is organized as follows. In the next section we introduce the \(\phi^{4}\) theory and its kink solution, and summarize their basic properties. In Sec. 3 we give a brief review of the CCM. The expectation values of EMT around the kink are then calculated in Sec. 4, and the final result and its properties are discussed in Sec. 5. The final section is devoted to a summary and outlook. The topological charge density is calculated in App. A. In App. B, App. C and App. D, specific topics on the mass renormalization, vacuum subtraction based on the MNC, and analysis of the tadpole diagram, respectively, will be discussed. In App. E, we discuss the analyses in Refs. [43; 48].
## 2 Model
We employ the real-scalar \(\phi^{4}\) theory in a \(1+1\) dimensional system, whose Lagrangian density is given by
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-U( \phi), \tag{1}\]
with the potential term
\[U(\phi)=\frac{\lambda}{4}\left(\phi^{2}-v^{2}\right)^{2}=-\frac{ 1}{2}m^{2}\phi^{2}+\frac{\lambda}{4}\phi^{4}+\frac{\lambda v^{4}}{4}, \tag{2}\]
where \(\phi=\phi(x)\) is the real scalar field. The potential \(U(\phi)\) has two degenerate minima at \(\phi=\pm v\) with \(v^{2}=m^{2}/\lambda\).
### Classical solutions
The classical equation of motion (EoM) of this theory is given by
\[\partial_{0}^{2}\phi-\partial_{1}^{2}\phi+\frac{dU}{d\phi}= \partial_{0}^{2}\phi-\partial_{1}^{2}\phi-m^{2}\phi+\lambda\phi^{3}=0. \tag{3}\]
Since \(U(\phi)\) has minima at \(\phi=\pm v\),
\[\phi_{\rm vac}(x)=\pm v=\pm\frac{m}{\lambda^{1/2}}, \tag{4}\]
are static solutions of Eq. (3). We refer to these trivial solutions as the vacuum.
The EoM (3) has other static solutions called the kink and anti-kink,
\[\phi_{\rm kink}(x;X)=\pm\frac{m}{\lambda^{1/2}}\tanh\frac{m(x-X)}{\sqrt{2}}, \tag{5}\]
where \(X\) is a free parameter that represents the position of the kink. As Eq. (5) behaves \(\phi_{\rm kink}(x;X)\to\pm v\) in the limit \(x\to\infty\) or \(x\to-\infty\), the (anti-)kink solution connects two vacua in Eq. (4).
The EMT in this theory is given by the Noether current as
\[T^{\mu\nu}(x)=(\partial^{\mu}\phi)(\partial^{\nu}\phi)-\frac{1}{2}g^{\mu\nu}( \partial^{\rho}\phi)(\partial_{\rho}\phi)+g^{\mu\nu}U(\phi). \tag{6}\]
Substituting Eqs. (4) and (5) into Eq. (6), one finds that \(T^{\mu\nu}(x)=0\) for the vacuum and
\[T^{00}_{\rm kink}(x)=\frac{m^{4}}{2\lambda}{\rm sech}^{4}\frac{m(x-X)}{\sqrt{ 2}},\qquad T^{01}_{\rm kink}(x)=T^{11}_{\rm kink}(x)=0, \tag{7}\]
for the kink with \({\rm sech}x=1/\cosh x\). By integrating \(T^{00}_{\rm kink}(x)\), we obtain the total energy
\[E_{\rm kink}=\int dxT^{00}_{\rm kink}(x)=\frac{2\sqrt{2}m^{3}}{3\lambda}. \tag{8}\]
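As a quick numerical cross-check of Eqs. (7) and (8), one can integrate the classical energy density directly; the sketch below uses the arbitrary choice \(m=\lambda=1\):

```
# Integrate the classical kink energy density (7) and compare with (8).
from math import cosh, sqrt

m, lam = 1.0, 1.0
a, b, n = -40.0, 40.0, 200000
h = (b - a) / n
def t00(x):
    return m**4 / (2 * lam) / cosh(m * x / sqrt(2))**4
E_num = h * (0.5 * t00(a) + 0.5 * t00(b) + sum(t00(a + i * h) for i in range(1, n)))
print(E_num, 2 * sqrt(2) * m**3 / (3 * lam))   # the two values agree
```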
In the following, we evaluate the quantum correction to Eq. (7) to the leading order of perturbative expansion with respect to \(\lambda\); the dimensionless expansion parameter is \(\lambda/m^{2}\), or \(\lambda\hbar/m^{2}\) if \(\hbar\) is explicitly shown. Since Eq. (7) is of order \(\lambda^{-1}\), the leading-order correction to it is at order \(\lambda^{0}\). We also note that \(\phi_{\rm kink}(x)\) is of order \(\lambda^{-1/2}\) as in Eq. (5).
One can also define the topological current [35]
\[j^{\mu}(x)=\frac{\lambda^{1/2}}{2m}\epsilon^{\mu\nu}\partial_{\nu}\phi(x), \tag{9}\]
that satisfies the current conservation \(\partial_{\mu}j^{\mu}=0\), where \(\epsilon^{\mu\nu}\) is the anti-symmetric tensor. From Eq. (5) one has
\[j^{0}_{\rm kink}(x)=\frac{\lambda^{1/2}}{2m}\partial_{1}\phi_{\rm kink}(x;X)=\pm\frac{m}{2\sqrt{2}}{\rm sech}^{2}\frac{m(x-X)}{\sqrt{2}}. \tag{10}\]
The topological charge \(Q\) is given by the spatial integral of \(j^{0}\);
\[Q=\int_{-\infty}^{\infty}dxj^{0}(x)=\frac{\lambda^{1/2}}{2m}[\phi_{\rm kink}(\infty;X)-\phi_{\rm kink}(-\infty;X)]=\begin{cases}\pm 1&({\rm kink/anti-kink}),\\ 0&({\rm vacuum}).\end{cases} \tag{11}\]
In App. A, we calculate the quantum correction to Eq. (10).
### Expansion around the classical solutions
To calculate the quantum correction to Eq. (7), we expand the field \(\phi(x,t)\) around the classical solutions as
\[\phi(x,t) =v+\chi(x,t), \tag{12}\] \[\phi(x,t) =\phi_{\rm kink}(x)+\eta(x,t), \tag{13}\]
where we take the positive sign in Eq. (5) in the following. The action is written in terms of \(\chi(x,t)\) and \(\eta(x,t)\) as
\[S =\int dx^{2}\mathcal{L}\] \[=S_{\rm vac}+\int dx^{2}\Big{[}\frac{1}{2}(\partial_{0}\chi)^{2} -\frac{1}{2}(\partial_{1}\chi)^{2}-m^{2}\chi^{2}-\lambda^{1/2}m\chi^{3}-\frac {\lambda}{4}\chi^{4}\Big{]} \tag{14}\] \[=S_{\rm kink}+\int dx^{2}\Big{[}\frac{1}{2}(\partial_{0}\eta)^{2} -\frac{1}{2}(\partial_{1}\eta)^{2}-\frac{\lambda}{2}\big{(}3\phi_{\rm kink}^{2 }-v^{2}\big{)}\eta^{2}-\lambda\phi_{\rm kink}\eta^{3}-\frac{\lambda}{4}\eta^{4 }\Big{]}, \tag{15}\]
where \(S_{\rm vac}=S[v]\) and \(S_{\rm kink}=S[\phi_{\rm kink}(x;X)]\) are the classical action of each sector. We note that terms linear in \(\chi(x,t)\) or \(\eta(x,t)\) are eliminated by the partial integral and the EoM (3).
The quadratic terms in Eq. (15),
\[-\frac{1}{2}\int d^{2}x\eta(\partial_{0}^{2}+\Delta)\eta,\qquad\Delta=- \partial_{1}^{2}+\lambda\big{(}3\phi_{\rm kink}^{2}-v^{2}\big{)}, \tag{16}\]
are diagonalized by solving the eigenequation
\[\Delta\psi_{l}(x)=\omega_{l}^{2}\psi_{l}(x). \tag{17}\]
The analytic solution of Eq. (17) is known as [61]
\[\omega_{0}^{2} =0, \psi_{0}(x)= {\rm sech}^{2}\frac{mx}{\sqrt{2}}\sim\partial_{1}\phi_{\rm kink}( x;0), \tag{18}\] \[\omega_{1}^{2} =\frac{3}{2}m^{2}, \psi_{1}(x)= \sinh\frac{mx}{\sqrt{2}}{\rm sech}^{2}\frac{mx}{\sqrt{2}},\] (19) \[\omega_{q}^{2} =q^{2}+2m^{2}, \psi_{q}(x)= e^{iqx}\Big{(}3\tanh^{2}\frac{mx}{\sqrt{2}}-1-\frac{2}{m^{2}}q^{2 }-3\sqrt{2}i\frac{q}{m}\tanh\frac{mx}{\sqrt{2}}\Big{)}, \tag{20}\]
for \(X=0\). Here, \(\psi_{0}(x)\) and \(\psi_{1}(x)\) are discrete modes, while \(\psi_{q}(x)\) for real number \(q\) forms a continuous spectrum. \(\psi_{0}(x)\) is proportional to \(\partial_{1}\phi_{\rm kink}(x;0)\) and represents the space translation of the kink. It thus is called the translational mode. This mode is interpreted as the Nambu-Goldstone mode associated with the violation of translational invariance due to the existence of the kink. The continuous modes \(\psi_{q}(x)\) have an asymptotic behaviour
\[\psi_{q}\xrightarrow[x\to\pm\infty]{}C\exp\Big{(}iqx\pm\frac{i}{2}\delta_{p} (q)\Big{)}, \tag{21}\]
with a constant \(C\) and the phase shift
\[\delta_{p}(q)=-2\arctan\frac{3\sqrt{2}mq}{2m^{2}-2q^{2}}. \tag{22}\]
The argument of arctan diverges at \(q=\pm m\), which means that the phase shift crosses \(\delta_{p}(q)=\pm\pi\) there. Requiring \(\delta_{p}(0)=0\), to make \(\delta_{p}(q)\) continuous we obtain [60]
\[\delta_{p}(q)\xrightarrow[q\to\pm\infty]{}\mp 2\pi\pm 3\sqrt{2}\frac{m}{q}. \tag{23}\]
For later use, we introduce the normalized eigenmodes \(\bar{\psi}_{l}(x)\) where \(l\) represents all the eigenmodes. Because the following analysis is mainly performed in a finite system of length \(L\) where the continuous modes are discretized, we impose the orthogonality condition
\[\int_{-L/2}^{L/2}dx\bar{\psi}_{l_{1}}^{*}(x)\bar{\psi}_{l_{2}}(x)=\delta_{l_{1} l_{2}}. \tag{24}\]
For the discrete modes \(l=0,1\), we obtain
\[\bar{\psi}_{0}(x)=\Big{(}\frac{3m}{4\sqrt{2}}\Big{)}^{1/2}\psi_{0}(x),\qquad \bar{\psi}_{1}(x)=\Big{(}\frac{3m}{2\sqrt{2}}\Big{)}^{1/2}\psi_{1}(x), \tag{25}\]
where the effect of finite \(L\) is exponentially suppressed for \(mL\gg 1\). For the continuous modes, using
\[|\psi_{q}|^{2}= \Big{(}3\tanh^{2}\frac{mx}{\sqrt{2}}-1-\frac{2q^{2}}{m^{2}}\Big{)}^{2}+\frac{18q^{2}}{m^{2}}\tanh^{2}\frac{mx}{\sqrt{2}}\] \[= \frac{2}{m^{4}}(2q^{2}+m^{2})(q^{2}+2m^{2})-\frac{3}{m^{2}}(2q^{2}+m^{2})\psi_{0}^{2}-\frac{6}{m^{2}}(q^{2}+2m^{2})\psi_{1}^{2}, \tag{26}\]
the normalization constant is calculated to be
\[N_{q}= \int_{-L/2}^{L/2}dx|\psi_{q}|^{2}=\frac{2L}{m^{4}}(2q^{2}+m^{2})(q^{2}+2m^{2})-\frac{12\sqrt{2}}{m^{3}}(q^{2}+m^{2})\] \[= \frac{2L}{m^{4}}(2q^{2}+m^{2})(q^{2}+2m^{2})\Big{(}1+\frac{1}{L}\delta_{p}^{\prime}(q)\Big{)}, \tag{27}\]
which gives \(\bar{\psi}_{q}(x)=\psi_{q}(x)/\sqrt{N_{q}}\) with
\[\delta_{p}^{\prime}(q)=\frac{d\delta_{p}(q)}{dq}=-\frac{6\sqrt{2}m(q^{2}+m^{2 })}{(2q^{2}+m^{2})(q^{2}+2m^{2})}. \tag{28}\]
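A direct numerical check of Eqs. (26)-(28) is straightforward; in the sketch below \(m=1\), \(q=0.7\) and \(L=30\) are arbitrary test values, and the agreement holds up to exponentially small boundary corrections:

```
# Integrate |psi_q|^2 over [-L/2, L/2] and compare with the closed form (27)-(28).
from math import sqrt, tanh

m, q, L, n = 1.0, 0.7, 30.0, 200000
h = L / n
def dens(x):
    t = tanh(m * x / sqrt(2))
    return (3 * t**2 - 1 - 2 * q**2 / m**2)**2 + 18 * (q / m)**2 * t**2
num = h * (0.5 * dens(-L / 2) + 0.5 * dens(L / 2)
           + sum(dens(-L / 2 + i * h) for i in range(1, n)))
dp = -6 * sqrt(2) * m * (q**2 + m**2) / ((2 * q**2 + m**2) * (q**2 + 2 * m**2))
closed = 2 * L / m**4 * (2 * q**2 + m**2) * (q**2 + 2 * m**2) * (1 + dp / L)
print(num, closed)
```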
For the boundary conditions (BC), we impose the anti-periodic BC (APBC)
\[\phi(x+L)=-\phi(x), \tag{29}\]
unless otherwise stated, since this choice of the BC conforms to Eq. (5). The effect of the boundary in the analysis of the total energy has been discussed in the literature [60; 62]. Their conclusion is that the total energy does not depend on the choice of the BC. Later, we will argue that the APBC removes a divergence that appears in the calculation of a tadpole diagram most naturally. From Eq. (29) that means \(\eta(x+L)=-\eta(x)\) and Eq. (21), the values of \(q\) are restricted to discrete ones satisfying
\[Lq_{n}+\delta_{p}(q_{n})=(2n+1)\pi, \tag{30}\]
for \(L\to\infty\) with integer \(n\).
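The roots of Eq. (30) can be obtained numerically, e.g. by bisection on \(Lq_{n}+\delta_{p}(q_{n})\) using the continuous branch of Eqs. (22) and (23); a short sketch with \(m=1\), \(L=30\):

```
# First few positive solutions q_n of L*q + delta_p(q) = (2n+1)*pi.
from math import atan2, pi, sqrt

m, L = 1.0, 30.0
def delta_p(q):
    # Eq. (22) on the continuous branch fixed by delta_p(0) = 0 and Eq. (23)
    return -2.0 * atan2(3.0 * sqrt(2.0) * m * q, 2.0 * m**2 - 2.0 * q**2)

def mode(n, lo=0.0, hi=50.0):
    target = (2 * n + 1) * pi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if L * mid + delta_p(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in range(4):
    q = mode(n)
    print(n, q, (L * q + delta_p(q)) / pi)   # last column equals 2n+1
```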
Using the normalized eigenfunctions, \(\eta(x)\) is represented as
\[\eta(x)=c_{0}\bar{\psi}_{0}(x)+c_{1}\bar{\psi}_{1}(x)+\sum_{n}c_{q_{n}}\bar{\psi }_{q_{n}}(x)=\sum_{l}c_{l}\bar{\psi}_{l}(x), \tag{31}\]
where the sum on the far right-hand side runs over \(l=0\), \(1\) and \(q_{n}\). The Hamiltonian is expressed in terms of \(c_{l}\) as
\[H=\frac{1}{2}\sum_{l}\omega_{l}c_{l}^{2}. \tag{32}\]
For the vacuum sector, the eigenmodes are discretized as
\[\varphi_{n}(x)=e^{ik_{n}x},\qquad k_{n}=\frac{(2n+1)\pi}{L}, \tag{33}\]
with the APBC \(\chi(x+L)=-\chi(x)\).
Substituting Eqs. (12) and (13) into Eq. (6), EMT is rewritten as
\[T^{00} =\frac{1}{2}(\partial_{0}\chi)^{2}+\frac{1}{2}(\partial_{1}\chi )^{2}+m^{2}\chi^{2}+\mathcal{O}(\lambda^{1/2}), \tag{34}\] \[T^{11} =\frac{1}{2}(\partial_{0}\chi)^{2}+\frac{1}{2}(\partial_{1}\chi )^{2}-m^{2}\chi^{2}+\mathcal{O}(\lambda^{1/2}),\] (35) \[T^{01} =-(\partial_{0}\chi)(\partial_{1}\chi), \tag{36}\]
for the vacuum sector and
\[T^{00}= T^{00}_{\rm kink}+\frac{1}{2}(\partial_{0}\eta)^{2}+\frac{1}{2}(\partial_{1}\eta)^{2}+(\partial_{1}\phi_{\rm kink})(\partial_{1}\eta)\] \[+\lambda\phi_{\rm kink}(\phi^{2}_{\rm kink}-v^{2})\eta+\frac{\lambda}{2}(3\phi^{2}_{\rm kink}-v^{2})\eta^{2}+O(\lambda^{1/2}), \tag{37}\] \[T^{11}= \frac{1}{2}(\partial_{0}\eta)^{2}+\frac{1}{2}(\partial_{1}\eta)^{2}+(\partial_{1}\phi_{\rm kink})(\partial_{1}\eta)-\lambda\phi_{\rm kink}(\phi^{2}_{\rm kink}-v^{2})\eta\] \[-\frac{\lambda}{2}(3\phi^{2}_{\rm kink}-v^{2})\eta^{2}+O(\lambda^{1/2}), \tag{38}\] \[T^{01}= -(\partial_{0}\eta)(\partial_{1}\eta)-(\partial_{0}\eta)(\partial_{1}\phi_{\rm kink}), \tag{39}\]
for the kink sector, where we omitted higher order terms that are negligible to order \(\lambda^{0}\). We note that Eqs. (37) and (38) have linear terms in \(\eta(x)\), while such terms do not appear in the action (15) as they are eliminated by the partial integral and the EoM. We will see later that these linear terms calculated from the tadpole diagrams have nonzero contributions.
## 3 Collective-coordinate method
In the perturbative analysis, the zero mode in Eq. (18) leads to an infrared divergence. The appearance of the zero mode is related to the fact that the kink position \(X\) is arbitrary and the translation of the kink requires zero energy. The zero mode also causes another
conceptual difficulty. In the ground state of this system in quantum theory, the value of \(X\) is not fixed, but the ground state is the eigenstate of the conjugate momentum of \(X\). Hence, the expectation value of EMT is uniform in space in the ground state. To obtain a non-trivial result, one has to introduce a quantum expectation value with fixed \(X\).
It is known that these problems are resolved by employing a procedure called the collective-coordinate method (CCM) [56; 57; 58]. In the CCM, the perturbative analysis is performed by eliminating the zero mode in place of the promotion of \(X\) to a dynamical variable. In this section, we give a brief review of the CCM to make the manuscript self-contained. The CCM has been formulated by various methods, such as the canonical and the path-integral formalisms, which give the same result [56; 57; 58; 59; 35]. In this section, we illustrate the CCM based on Refs. [58; 59]. See also Sec. 8 of Ref. [35].
### Canonical transformation
Let us start from the classical system described by the Lagrangian (1). There are various choices for a set of dynamical variables to describe the system; in addition to the original field \(\phi(x,t)\), one can choose \(\eta(x,t)\) in Eq. (13), or \(c_{l}\) in Eq. (31).
Now, let us rewrite \(\phi(x,t)\) as
\[\phi(x,t)=\phi_{\rm kink}(x;X(t))+\tilde{\eta}(x-X(t),t), \tag{31}\]
and regard \(X(t)\) as a dynamical variable. Since this causes redundancy in the degrees of freedom, we impose a constraint on \(\tilde{\eta}(x,t)\)
\[\int dx\tilde{\eta}(x,t)\bar{\psi}_{0}(x)=0. \tag{32}\]
This constraint means that the variable \(c_{0}\) in Eq. (31), i.e. the zero mode, is removed and \(\tilde{\eta}(x,t)\) is given by
\[\tilde{\eta}(x,t)=\sum_{l\neq 0}c_{l}(t)\bar{\psi}_{l}(x). \tag{33}\]
The basic idea of the CCM is to describe the system using the set of variables \(X(t)\) and \(\tilde{\eta}(x,t)\), or equivalently \(X(t)\) and \(c_{l}(t)\) for \(l\neq 0\).
The Hamiltonian of the system is represented in terms of the new variables by canonical transformation. For this we introduce the conjugate momenta of \(X(t)\) and \(\tilde{\eta}(x,t)\),
\[P(t)=\frac{\partial L}{\partial(\partial_{0}X)},\quad\tilde{\pi}(x,t)=\frac{ \delta L}{\delta(\partial_{0}\tilde{\eta})}, \tag{34}\]
where \(L=\int dx\mathcal{L}\) is the Lagrangian. The conjugate field \(\tilde{\pi}(x,t)\) can also be defined as
\[\tilde{\pi}(x,t)=\sum_{l\neq 0}\gamma_{l}(t)\bar{\psi}_{l}(x), \tag{35}\]
with \(\gamma_{l}=(\partial L)/(\partial(\partial_{0}c_{l}))\) being the canonical conjugate of \(c_{l}\). In any case, \(\tilde{\pi}(x,t)\) also satisfies the orthogonality condition
\[\int dx\tilde{\pi}(x)\bar{\psi}_{0}(x)=0. \tag{36}\]
It is found that the conjugate of the original field \(\pi=\partial\mathcal{L}/\partial(\partial_{0}\phi)\) is given by
\[\pi(x,t)=\tilde{\pi}(x-X,t)-\frac{P(t)+\int dx\tilde{\pi}\partial_ {1}\tilde{\eta}}{E_{\rm kink}^{1/2}(1+\xi/E_{\rm kink}^{1/2})}\bar{\psi}_{0}(x -X), \tag{10}\]
with \(\xi=\int dx(\partial_{1}\tilde{\eta}(x))\bar{\psi}_{0}(x)\).
The variables \(X(t)\) and \(P(t)\) satisfies \(\{X,P\}=1\), where \(\{\cdot,\cdot\}\) is the Poisson bracket in this subsection. The Poisson bracket of \(\tilde{\eta}(x,t)\) and \(\tilde{\pi}(y,t)\) is given by
\[\{\tilde{\eta}(x),\tilde{\pi}(y)\}=\sum_{l\neq 0}\bar{\psi}_{l}(x) \bar{\psi}_{l}^{*}(y)=\delta(x-y)-\bar{\psi}_{0}(x)\bar{\psi}_{0}(y), \tag{11}\]
due to the constraints (10) and (11). The deviation from the delta function in Eq. (11) is understood as the Poisson bracket in constrained systems [63]. These Poisson brackets and Eq. (10) give
\[\{\phi(x),\pi(y)\}=\delta(x-y). \tag{12}\]
In terms of \(X\), \(P\), \(\tilde{\eta}\), and \(\tilde{\pi}\), the Hamiltonian of the system is written as
\[H=E_{\rm kink}+\frac{1}{2E_{\rm kink}}\frac{(P+\int dx\tilde{ \pi}\partial_{1}\tilde{\eta})^{2}}{\left(1+\xi/E_{\rm kink}^{1/2}\right)^{2}}+ \tilde{H}, \tag{13}\]
with
\[\tilde{H}= \int dx\tilde{\mathcal{H}}(x-X), \tag{14}\] \[\tilde{\mathcal{H}}(x)= \frac{1}{2}\tilde{\pi}^{2}(x)+\frac{1}{2}(\partial_{1}\tilde{\eta }(x))^{2}+U(\phi_{\rm kink}(x;0)+\tilde{\eta}(x))-U(\phi_{\rm kink}(x;0)). \tag{15}\]
In Eq. (13), the first term \(E_{\rm kink}\) represents the classical energy of the kink (8) at order \(\lambda^{-1}\). The second term contains cross terms between \(P\) and \(\tilde{\eta}(x)\), \(\tilde{\pi}(x)\), which arise as a price of using new variables. However, this term is \(\mathcal{O}(\lambda)\), and thus is negligible for our purpose that evaluates the quantum correction to leading order, provided that \(P\) is of order \(\mathcal{O}(\lambda^{0})\). It, however, is notable that \(P^{2}/2E_{\rm kink}\) in this term represents the kinetic energy of a non-relativistic particle1. We also note that Eq. (13) does not depend on \(X\) explicitly as the \(X\) dependence in \(\tilde{H}(x-X)\) is eliminated by the \(x\) integral. This fact is in accordance with the translational invariance of the theory. The third term in Eq. (13) is independent of \(X\) and \(P\). \(\tilde{\mathcal{H}}(x)\) is interpreted as the Hamiltonian density of the kink at \(X=0\). While \(\tilde{\mathcal{H}}(x)\) has a similar form as the original Hamiltonian, it is written by \(\tilde{\eta}(x)\) and \(\tilde{\pi}(x)\) that do not include the zero mode.
Footnote 1: These terms correspond to the first two terms in the non-relativistic expansion of the kinetic energy \(\sqrt{E_{\rm kink}^{2}+P^{2}}=E_{\rm kink}+P^{2}/2E_{\rm kink}+\cdots\). The higher order terms in the expansion manifest themselves in the higher order terms of the perturbative expansion of \(\lambda\)[57]. For the Lorentz symmetry of Eq. (13), see Refs. [57; 58].
Using the new set of variables, EMT is expressed as
\[T^{\mu\nu}[X,P,\tilde{\eta},\tilde{\pi}]=T^{\mu\nu}_{\rm kink}(x -X)+\Delta\tilde{T}^{\mu\nu}[\tilde{\pi}(x-X),\tilde{\eta}(x-X)], \tag{16}\]
with
\[\Delta\tilde{T}^{00}[\tilde{\pi},\tilde{\eta}]= \frac{1}{2}\tilde{\pi}^{2}+\frac{1}{2}(\partial_{x}\tilde{\eta})^{ 2}+(\partial_{x}\phi_{\rm kink})(\partial_{x}\tilde{\eta})+\lambda\phi_{\rm kink }(\phi_{\rm kink}^{2}-v^{2})\tilde{\eta}\] \[+\frac{\lambda}{2}(3\phi_{\rm kink}^{2}-v^{2})\tilde{\eta}^{2}+ \mathcal{O}(\lambda^{1/2}), \tag{3.14}\] \[\Delta\tilde{T}^{11}[\tilde{\pi},\tilde{\eta}]= \frac{1}{2}\tilde{\pi}^{2}+\frac{1}{2}(\partial_{x}\tilde{\eta}) ^{2}+(\partial_{x}\phi_{\rm kink})(\partial_{x}\tilde{\eta})-\lambda\phi_{\rm kink }(\phi_{\rm kink}^{2}-v^{2})\tilde{\eta}\] \[-\frac{\lambda}{2}(3\phi_{\rm kink}^{2}-v^{2})\tilde{\eta}^{2}+ \mathcal{O}(\lambda^{1/2}),\] (3.15) \[\Delta\tilde{T}^{01}[\tilde{\pi},\tilde{\eta}]= -\tilde{\pi}(\partial_{x}\tilde{\eta})+\mathcal{O}(\lambda). \tag{3.16}\]
### Quantization
The system described by Eq. (3.10) is quantized by promoting the variables \(X(t)\), \(P(t)\), \(\tilde{\eta}(x,t)\) and \(\tilde{\pi}(t)\) to quantum operators. The Poisson brackets are promoted to the commutation relations
\[[X,P]=i,\quad[\tilde{\eta}(x),\tilde{\pi}(y)]=i\big{(}\delta(x-y)-\bar{\psi}_{0}(x)\bar{\psi}_{0}(y)\big{)}. \tag{3.17}\]
All other commutation relations vanish. The second term in Eq. (3.10) contains the cross terms between the conjugate fields. Although the order of operators has to be chosen carefully for quantizing such terms, as discussed already these terms are of order \(\mathcal{O}(\lambda^{1})\) and negligible for our purpose.
Since the Hamiltonian (3.10) does not depend on \(X\) and \(P\) to order that we are working, it is convenient to separate the Hilbert space \(\Phi\) into the direct product as
\[\Phi=\Phi_{X}\otimes\Phi_{\tilde{\eta}}, \tag{3.18}\]
where \(\Phi_{X}\) and \(\Phi_{\tilde{\eta}}\) represent the subspaces described by the corresponding subindices. Then, to order \(\lambda^{0}\), the Hamiltonian is diagonalized in \(\Phi_{X}\) and \(\Phi_{\tilde{\eta}}\) separately. The subspace \(\Phi_{\tilde{\eta}}\) is described by Eq. (3.11), and its ground state is determined without specifying the state in \(\Phi_{X}\).
After setting the quantum state to be the ground state in \(\Phi_{\tilde{\eta}}\), we still have arbitrariness to specify the state in \(\Phi_{X}\). For example, one can consider eigenstates of the operator \(\hat{X}\) satisfying \(\hat{X}|X\rangle=X|X\rangle\), where \(|X\rangle\) is assumed to be the ground state in \(\Phi_{\tilde{\eta}}\). The matrix element of the EMT operator (3.13) between these states is then calculated to be
\[\langle X|T^{\mu\nu}(x)|X^{\prime}\rangle=\Big{(}T^{\mu\nu}_{\rm kink}(x-X)+ \Delta T^{\mu\nu}_{\rm kink}(x-X)\Big{)}\delta(X-X^{\prime})+\mathcal{O}( \lambda), \tag{3.19}\]
with
\[\Delta T^{\mu\nu}_{\rm kink}(x-X)=\langle X|\Delta\tilde{T}^{\mu\nu}(x)|X\rangle. \tag{3.20}\]
Here, \(\Delta T^{\mu\nu}_{\rm kink}(x)\) is interpreted as the quantum correction of the EMT distribution around the kink at \(X=0\).
One can also consider the momentum eigenstates satisfying \(\hat{P}|P\rangle=P|P\rangle\) and \(\langle X|P\rangle=e^{iPX}\), where \(|P\rangle\) is again assumed to be the ground state in \(\Phi_{\tilde{\eta}}\). The matrix element of \(T^{\mu\nu}(x)\) between these states is given by
\[\langle P|T^{\mu\nu}(x)|P^{\prime}\rangle=\int dX\Big{(}T^{\mu\nu}_{\rm kink}(x -X)+\Delta T^{\mu\nu}_{\rm kink}(x-X)\Big{)}e^{i(P-P^{\prime})X}. \tag{3.21}\]
Substituting \(x=0\) into Eq. (3.21), one sees that the Fourier transform of \(T^{\mu\nu}_{\rm kink}(x)+\Delta T^{\mu\nu}_{\rm kink}(x)\) is the form factor of the kink, i.e. the GFF. In the next section, we calculate Eq. (3.20). This analysis corresponds to the perturbative expansion without the zero mode.
Further comments on the GFF are in order. Conventionally, the GFF of a spin-0 particle are defined as [3; 6; 7]
\[\langle p|T^{\mu\nu}(0)|p^{\prime}\rangle=\frac{K^{\mu}K^{\nu}}{K^{2}}\Theta_{ 1}(\Delta^{2})+\frac{\Delta^{\mu}\Delta^{\nu}-g^{\mu\nu}\Delta^{2}}{\Delta^{2 }}\Theta_{2}(\Delta^{2}), \tag{3.22}\]
where \(|p\rangle\) represents a quantum state with the Lorentz vector \(p^{\mu}\), \(K^{\mu}=p^{\mu}+p^{\prime\mu}\), \(\Delta^{\mu}=p^{\mu}-p^{\prime\mu}\) and the metric tensor \(g^{\mu\nu}\). Equation (3.22) has two independent components \(\Theta_{1}\) and \(\Theta_{2}\). In \(1+1\) dimensions, however, the projection operators satisfy \(K^{\mu}K^{\nu}/K^{2}=g^{\mu\nu}-\Delta^{\mu}\Delta^{\nu}/\Delta^{2}\) and only one component does exist in the GFF, corresponding to the fact that there are no "transverse" directions in \(1+1\) dimensions. Equation (3.19) corresponds to the Fourier transform of this component. We also note that our analysis assumes the non-relativistic limit since it is valid only when \(P\) is of order \(\mathcal{O}(\lambda^{0})\), while the kink mass (2.8) is of order \(\lambda^{-1}\).
## 4 Perturbative analysis
### Vacuum subtraction and mass renormalization
In the analysis of Eq. (3.20), we face two types of ultraviolet (UV) divergence. We remove them with the same procedure as Refs. [35; 36]. We first perform the vacuum subtraction, i.e. we require that the expectation value of \(T^{\mu\nu}(x)\) vanishes in the vacuum sector. This means that the expectation value in the kink sector is defined by
\[\langle T^{\mu\nu}(x)\rangle=\langle\tilde{T}^{\mu\nu}(x)\rangle_{\rm K}- \langle T^{\mu\nu}(x)\rangle_{\rm V}, \tag{4.1}\]
where the subscripts K and V mean the expectation values for the kink and vacuum sectors, respectively, and the expectation value without a subscript is defined by Eq. (4.1) in what follows.
Figure 1: (a) Diagrammatic representation of the renormalization condition. (b) Diagram that is not considered in our analysis; see text.
After the vacuum subtraction, Eq. (4.1) is still UV divergent. A conventional renormalization procedure removes this divergence. It is known that the \(1+1\) dimensional \(\phi^{4}\) theory is renormalized solely by the mass renormalization, which adds the mass counterterm
\[\mathcal{L}_{\rm ct}=-\frac{1}{2}\delta m^{2}\phi^{2} =-\frac{1}{2}\delta m^{2}(v^{2}+2v\chi+\chi^{2})\] \[=-\frac{1}{2}\delta m^{2}(\phi_{\rm kink}^{2}+2\phi_{\rm kink} \tilde{\eta}+\tilde{\eta}^{2}), \tag{4.2}\]
to the Lagrangian density2. To determine \(\delta m^{2}\) we impose the renormalization condition shown in Fig. 1(a), which results in
Footnote 2: We perform the analysis in the renormalized perturbation theory, where \(m\) stands for the renormalized mass. The analysis in the bare perturbation theory is discussed in App. B.
\[\delta m^{2}=-\frac{3\lambda}{2L}\sum_{n}\frac{1}{\sqrt{k_{n}^{2}+2m^{2}}}, \tag{4.3}\]
where the discrete momenta \(k_{n}\) are defined in Eq. (2.33). The divergence in the tadpole diagram in the vacuum sector is also canceled out by Eq. (4.2). We note that the common counterterm (4.3) is adopted for both the vacuum and kink sectors.
We note that the self-energy of \(\chi(x)\) includes the diagram in Fig. 1(b) at the same order as those in Fig. 1(a). The renormalization condition including Fig. 1(b) has been pointed out, for example, in Ref. [60]. We discuss the dependence of our analysis on the choice of the renormalization conditions in App. B.
As the Lagrangian density is modified by the counterterm (4.2), the EMT operator is also modified by this term. Since \(\delta m^{2}\) is of order \(\lambda^{1}\) as in Eq. (4.3), only the terms \(\delta m^{2}v^{2}\) and \(\delta m^{2}\phi_{\rm kink}^{2}\) contribute at order \(\lambda^{0}\) in the vacuum and kink sectors, respectively. Taking this effect into account, the explicit form of \(\Delta T_{\rm kink}^{\mu\nu}(x)\) is given by
\[\Delta T_{\rm kink}^{00}(x) =T_{1}(x)+T_{2}(x)+T_{3}(x)+T_{4}(x), \tag{4.4}\] \[\Delta T_{\rm kink}^{11}(x) =T_{1}(x)-T_{2}(x)+T_{3}(x)-T_{4}(x),\] (4.5) \[\Delta T_{\rm kink}^{01}(x) =0, \tag{4.6}\]
with
\[T_{1}(x)= \frac{1}{2}\langle(\partial_{0}\tilde{\eta})^{2}\rangle_{\rm K}+ \frac{1}{2}\langle(\partial_{1}\tilde{\eta})^{2}\rangle_{\rm K}-\frac{1}{2} \langle(\partial_{0}\chi)^{2}\rangle_{\rm V}-\frac{1}{2}\langle(\partial_{1} \chi)^{2}\rangle_{\rm V}, \tag{4.7}\] \[T_{2}(x)= \frac{\lambda}{2}(3\phi_{\rm kink}^{2}-v^{2})\langle\tilde{\eta} ^{2}\rangle_{\rm K}-m^{2}\langle\chi^{2}\rangle_{\rm V}+\frac{1}{2}\delta m^{ 2}(\phi_{\rm kink}^{2}-v^{2}),\] (4.8) \[T_{3}(x)= (\partial_{1}\phi_{\rm kink})\langle\partial_{1}\tilde{\eta} \rangle_{\rm K},\] (4.9) \[T_{4}(x)= \lambda\phi_{\rm kink}(\phi_{\rm kink}^{2}-v^{2})\langle\tilde{ \eta}\rangle_{\rm K}. \tag{4.10}\]
In the following, we calculate Eqs. (4.7)-(4.10) one by one.
### \(T_{1}(x)\)
We start from the calculation of \(T_{1}(x)\). The expectation values in the kink sector \(\langle(\partial_{0}\tilde{\eta})^{2}\rangle_{\rm K}\) and \(\langle(\partial_{1}\tilde{\eta})^{2}\rangle_{\rm K}\) are calculated with the use of the Green function of \(\tilde{H}\) given by
\[G(x,x^{\prime};t-t^{\prime})=\langle\tilde{\eta}(x,t)\tilde{\eta}(x^{\prime},t^{ \prime})\rangle_{\rm K}=\int\frac{d\omega}{2\pi}\sum_{l\neq 0}e^{i\omega(t-t^{ \prime})}\bar{\psi}_{l}(x)\frac{i}{\omega^{2}-\omega_{l}^{2}+i\epsilon}\bar{ \psi}_{l}^{*}(x^{\prime}), \tag{4.11}\]
and
\[\langle(\partial_{0}\tilde{\eta}(x,t)\partial_{0}\tilde{\eta}(x^{ \prime},t^{\prime})\rangle_{\rm K}= \partial_{t}\partial_{t^{\prime}}G(t-t^{\prime};x,x^{\prime})- \delta(t-t^{\prime})\big{[}\delta(x-x^{\prime})-\bar{\psi}_{0}(x)\bar{\psi}_{ 0}(x^{\prime})\big{]} \tag{4.12}\] \[= \int\frac{d\omega}{2\pi}\sum_{l\neq 0}e^{i\omega(t-t^{\prime})} \bar{\psi}_{l}(x)\frac{i\omega_{l}^{2}}{\omega^{2}-\omega_{l}^{2}+i\epsilon} \bar{\psi}_{l}^{*}(x^{\prime}), \tag{4.13}\]
where \(\bar{\psi}_{0}(x)\bar{\psi}_{0}(x^{\prime})\) in Eq. (4.12) comes from the commutation relation Eq. (3.17).
Using Eqs. (4.11) and (4.13), \(\langle(\partial_{0}\tilde{\eta})^{2}\rangle_{\rm K}\) and \(\langle(\partial_{1}\tilde{\eta})^{2}\rangle_{\rm K}\) are calculated to be
\[\langle(\partial_{1}\tilde{\eta})^{2}\rangle_{\rm K}=\lim_{x^{\prime}\to x} \partial_{1}\partial_{1}^{\prime}G(x,x^{\prime};0)=\sum_{l\neq 0}\frac{1}{2 \omega_{l}}|\partial_{1}\bar{\psi}_{l}(x)|^{2},\quad\langle(\partial_{0} \tilde{\eta})^{2}\rangle_{\rm K}=\sum_{l\neq 0}\frac{\omega_{l}}{2}|\bar{ \psi}_{l}(x)|^{2}, \tag{4.14}\]
with \(\partial_{1}^{\prime}=\partial/\partial x^{\prime}\). From
\[|\bar{\psi}_{q}(x)|^{2}= \frac{|\psi_{q}(x)|^{2}}{N_{q}}\] \[= \frac{1}{L}\Big{\{}1-\frac{3m^{2}}{2(q^{2}+2m^{2})}\psi_{0}^{2}- \frac{3m^{2}}{2q^{2}+m^{2}}\psi_{1}^{2}\Big{\}}\Big{(}1-\frac{\delta_{p}^{ \prime}(q)}{L}\Big{)}+\mathcal{O}(L^{-3}), \tag{4.15}\] \[= \frac{1}{L}\Big{\{}1-\frac{3m^{2}}{2(q^{2}+2m^{2})}\big{(}\psi_{ 0}^{2}+\psi_{1}^{2}\big{)}-\frac{9m^{4}}{2(q^{2}+2m^{2})(2q^{2}+m^{2})}\psi_{ 1}^{2}\Big{\}}\Big{(}1-\frac{\delta_{p}^{\prime}(q)}{L}\Big{)}\] \[+\mathcal{O}(L^{-3}),\] (4.16) \[|\partial_{1}\bar{\psi}_{q}(x)|^{2}= \frac{1}{L}\Big{\{}q^{2}+3\big{[}(\partial_{1}\psi_{0})^{2}+( \partial_{1}\psi_{1})^{2}\big{]}-\frac{3m^{2}(\partial_{1}\psi_{0})^{2}}{2(q^{ 2}+2m^{2})}-\frac{3m^{2}(\partial_{1}\psi_{1})^{2}}{2q^{2}+m^{2}}\Big{\}}\] \[\times\Big{(}1-\frac{\delta_{p}^{\prime}(q)}{L}\Big{)}+\mathcal{O }(L^{-3}), \tag{4.17}\]
and \(\psi_{0}^{2}+\psi_{1}^{2}=2m^{2}((\partial_{1}\psi_{0})^{2}+(\partial_{1}\psi_ {1})^{2})\), one obtains
\[\langle(\partial_{0}\tilde{\eta})^{2}\rangle_{\rm K}+\langle( \partial_{1}\tilde{\eta})^{2}\rangle_{\rm K}= \frac{1}{2L}\sum_{n}\Big{(}\sqrt{q_{n}^{2}+2m^{2}}+\frac{q_{n}^{2} }{\sqrt{q_{n}^{2}+2m^{2}}}\Big{)}\Big{(}1-\frac{\delta_{p}^{\prime}(q_{n})}{L} \Big{)}\] \[-\frac{3}{2}D_{1}(\partial_{1}\psi_{0})^{2}+\Big{(}\frac{\sqrt{3} }{4}-\frac{3}{2}D_{2}\Big{)}\Big{(}\frac{3m^{2}}{2}\psi_{1}^{2}+(\partial_{1} \psi_{1})^{2}\Big{)}, \tag{4.18}\]
with
\[D_{1}= \frac{1}{L}\sum_{n}\frac{m^{2}}{2\sqrt{q_{n}^{2}+2m^{2}}(q_{n}^{2}+2m^{2})}\Big{(}1-\frac{\delta_{p}^{\prime}(q_{n})}{L}\Big{)}\] \[\xrightarrow[L\to\infty]{}\int_{-\infty}^{\infty}\frac{dq}{2\pi}\frac{m^{2}}{2\sqrt{q^{2}+2m^{2}}(q^{2}+2m^{2})}=\frac{1}{4\pi}, \tag{4.19}\] \[D_{2}= \frac{1}{L}\sum_{n}\frac{m^{2}}{\sqrt{q_{n}^{2}+2m^{2}}(2q_{n}^{2}+m^{2})}\Big{(}1-\frac{\delta_{p}^{\prime}(q_{n})}{L}\Big{)}\] \[\xrightarrow[L\to\infty]{}\int_{-\infty}^{\infty}\frac{dq}{2\pi}\frac{m^{2}}{\sqrt{q^{2}+2m^{2}}(2q^{2}+m^{2})}=\frac{\sqrt{3}}{9}. \tag{4.20}\]
Here, the sums in Eqs. (4.19) and (4.20) are convergent and they are replaced with the integrals in the \(L\to\infty\) limit.
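The two limiting integrals are elementary and can also be cross-checked numerically; the following small sketch (assuming NumPy and SciPy are available) confirms the values of \(D_{1}\) and \(D_{2}\).

```python
# Numerical cross-check of the limiting integrals in Eqs. (4.19) and (4.20);
# a small sketch assuming NumPy and SciPy are available.
import numpy as np
from scipy.integrate import quad

m = 1.0  # the limits are independent of m

D1, _ = quad(lambda q: m**2 / (2*np.sqrt(q**2 + 2*m**2)*(q**2 + 2*m**2)),
             -np.inf, np.inf)
D2, _ = quad(lambda q: m**2 / (np.sqrt(q**2 + 2*m**2)*(2*q**2 + m**2)),
             -np.inf, np.inf)

print(D1/(2*np.pi), 1/(4*np.pi))     # ~0.07958 vs 0.07958
print(D2/(2*np.pi), np.sqrt(3)/9)    # ~0.19245 vs 0.19245
```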
The first term in Eq. (4.18) is UV divergent. The divergence is removed by the subtraction of the vacuum sector given by
\[\langle(\partial_{0}\chi)^{2}\rangle_{\rm V}+\langle(\partial_{1}\chi)^{2} \rangle_{\rm V}=\frac{1}{2L}\sum_{n}\Big{(}\sqrt{k_{n}^{2}+2m^{2}}+\frac{k_{n} ^{2}}{\sqrt{k_{n}^{2}+2m^{2}}}\Big{)}, \tag{4.21}\]
As shown in App. C, the results of the subtraction in the mode-number cutoff (MNC) scheme are given by
\[\frac{1}{L}\sum_{n}\sqrt{q_{n}^{2}+2m^{2}}\Big{(}1-\frac{\delta_ {p}^{\prime}(q_{n})}{L}\Big{)}-\frac{1}{L}\sum_{n}\sqrt{k_{n}^{2}+2m^{2}} \xrightarrow[L\to\infty]{}-\frac{3\sqrt{2}m}{\pi L}, \tag{4.22}\] \[\frac{1}{L}\sum_{n}\frac{1}{\sqrt{q_{n}^{2}+2m^{2}}}\Big{(}1- \frac{\delta_{p}^{\prime}(q_{n})}{L}\Big{)}-\frac{1}{L}\sum_{n}\frac{1}{\sqrt {k_{n}^{2}+2m^{2}}}\xrightarrow[L\to\infty]{}0. \tag{4.23}\]
Accumulating these results, one obtains
\[T_{1}(x)= -\frac{3\sqrt{2}m}{2\pi L}-\frac{3}{4}D_{1}(\partial_{1}\psi_{0 })^{2}+\frac{1}{2}\Big{(}\frac{\sqrt{3}}{4}-\frac{3}{2}D_{2}\Big{)}\Big{(} \frac{3m^{2}}{2}\psi_{1}^{2}+(\partial_{1}\psi_{1})^{2}\Big{)}\] \[= -\frac{3\sqrt{2}m}{2\pi L}+\frac{A}{2}(\partial_{1}\psi_{0})^{2}+ \frac{B}{2}\Big{(}\frac{3m^{2}}{2}\psi_{1}^{2}+(\partial_{1}\psi_{1})^{2} \Big{)}\] \[= -\frac{3\sqrt{2}m}{2\pi L}+Bm^{2}{\rm sech}^{2}\frac{mx}{\sqrt{2} }+\Big{(}A-\frac{7}{4}B\Big{)}m^{2}{\rm sech}^{4}\frac{mx}{\sqrt{2}}+(-A+B)m^ {2}{\rm sech}^{6}\frac{mx}{\sqrt{2}}, \tag{4.24}\]
with
\[A=-\frac{3}{2}D_{1}=-\frac{3}{8\pi}, \tag{4.25}\] \[B=\frac{\sqrt{3}}{4}-\frac{3}{2}D_{2}=\frac{\sqrt{3}}{12}. \tag{4.26}\]
### \(T_{2}(x)\)
Next, let us calculate \(T_{2}(x)\). \(\langle\tilde{\eta}^{2}(x)\rangle_{\rm K}\) and \(\langle\chi^{2}(x)\rangle_{\rm V}\) are calculated to be
\[\langle\tilde{\eta}(x)^{2}\rangle_{\rm K}= \lim_{x^{\prime}\to x}G(x,x^{\prime};0)=\sum_{l\neq 0}\frac{1}{2 \omega_{l}}|\bar{\psi}_{l}(x)|^{2}\] \[= \frac{1}{2L}\sum_{n}\frac{1}{\sqrt{q_{n}^{2}+2m^{2}}}\Big{(}1- \frac{\delta_{p}^{\prime}(q_{n})}{L}\Big{)}+A\psi_{0}^{2}+B\psi_{1}^{2}, \tag{4.27}\] \[\langle\chi^{2}(x)\rangle_{\rm V}= \frac{1}{2L}\sum_{n}\frac{1}{\sqrt{k_{n}^{2}+2m^{2}}}=-\frac{1}{ 3\lambda}\delta m^{2}. \tag{4.28}\]
Substituting them into \(T_{2}(x)\), we obtain
\[T_{2}(x)= \frac{\lambda}{2}(3\phi_{\rm kink}^{2}-v^{2})(\langle\tilde{\eta }^{2}\rangle_{\rm K}-\langle\chi^{2}\rangle_{\rm V})\] \[= \frac{m^{2}}{2}(3\bar{\phi}_{\rm kink}^{2}-1)(A\psi_{0}^{2}+B \psi_{1}^{2})\] \[= Bm^{2}{\rm sech}^{2}\frac{mx}{\sqrt{2}}+\Big{(}A-\frac{5}{2}B \Big{)}m^{2}{\rm sech}^{4}\frac{mx}{\sqrt{2}}-\frac{3}{2}(A-B)m^{2}{\rm sech}^ {6}\frac{mx}{\sqrt{2}}, \tag{4.29}\]
where \(\bar{\phi}_{\rm kink}(y)=(\sqrt{\lambda}/m)\phi_{\rm kink}(y;0)\) and we used Eq. (4.23) for the vacuum subtraction.
### \(T_{3}(x)\) and \(T_{4}(x)\)
Finally, we calculate \(T_{3}(x)\) and \(T_{4}(x)\), the terms that involve the one-point correlation function. The coefficients of these terms are of order \(\lambda^{-1/2}\). They give an order-\(\lambda^{0}\) contribution through \(\langle\tilde{\eta}(x)\rangle_{\rm K}\), which is obtained from the tadpole diagram in Fig. 2, where the three-point interaction \(\lambda\phi_{\rm kink}\) is of order \(\lambda^{1/2}\). The one-point function in the vacuum sector \(\langle\chi(x)\rangle_{\rm V}\) vanishes owing to the mass renormalization. The expectation value of \(\tilde{\eta}(x)\) is calculated to be
\[\langle\tilde{\eta}(x)\rangle_{\rm K}= -i\int\frac{dy^{2}}{(2\pi)^{2}}\phi_{\rm kink}(y;0)\big{\{}3 \lambda\,\langle\tilde{\eta}(x)\tilde{\eta}(y)\rangle_{\rm K}\,\langle\tilde{ \eta}(y)\tilde{\eta}(y)\rangle_{\rm K}+\delta m^{2}\,\langle\tilde{\eta}(x) \tilde{\eta}(y)\rangle_{\rm K}\,\big{\}}\] \[= -i\int\frac{dy}{2\pi}\phi_{\rm kink}(y;0)\tilde{G}(x,y)\{3 \lambda G(y,y)+\delta m^{2}\}\] \[= -3i\lambda\int\frac{dy}{2\pi}\phi_{\rm kink}(y;0)\tilde{G}(x,y)\{ A\psi_{0}^{2}+B\psi_{1}^{2}\}, \tag{4.30}\]
where \(\tilde{G}(x,y)=\int(dt^{\prime}/2\pi)G(x,y;t-t^{\prime})\). In the last equality, we used Eqs. (4.3), (4.23) and (4.27). Using \(\psi_{0}^{2}(x)=1-2\bar{\phi}_{\rm kink}^{2}(x)+\bar{\phi}_{\rm kink}^{4}(x)\) and \(\psi_{1}^{2}(x)=\bar{\phi}_{\rm kink}^{2}(x)-\bar{\phi}_{\rm kink}^{4}(x)\), Eq. (4.30) is further rewritten as
\[\langle\tilde{\eta}(x)\rangle_{\rm K}=-3im\sqrt{\lambda}\big{\{}A\big{(}H_{1} (x)-2H_{3}(x)+H_{5}(x)\big{)}+B\big{(}H_{3}(x)-H_{5}(x)\big{)}\big{\}}, \tag{4.31}\]
with
\[H_{i}(x)=\int\frac{dy}{2\pi}\bar{\phi}_{\rm kink}^{i}(y)\tilde{G}(x,y). \tag{4.32}\]
Using analytic forms of Eq. (4.32) given in App. D, one obtains
\[\langle\tilde{\eta}(x)\rangle_{\rm K}= -\frac{\sqrt{\lambda}}{m}\Big{\{}(A-B)\bar{\phi}_{\rm kink}(1- \bar{\phi}_{\rm kink}^{2})+\frac{3}{2}Bx\partial_{1}\bar{\phi}_{\rm kink}\Big{\}}. \tag{4.33}\]
This result gives
\[T_{3}(x)= -(A-B)\frac{m^{2}}{2}\psi_{0}^{2}(1-3\bar{\phi}_{\rm kink}^{2})- \frac{3}{2}B(\partial_{1}\bar{\phi}_{\rm kink})\partial_{1}(x\partial_{1}\bar {\phi}_{\rm kink})\] \[= \Big{(}A-\frac{7}{4}B\Big{)}m^{2}{\rm sech}^{4}\frac{mx}{\sqrt{2} }-\frac{3}{2}(A-B)m^{2}{\rm sech}^{6}\frac{mx}{\sqrt{2}}\] \[+\frac{3\sqrt{2}}{4}Bm^{3}x\tanh\frac{mx}{\sqrt{2}}{\rm sech}^{4} \frac{mx}{\sqrt{2}}, \tag{4.34}\] \[T_{4}(x)= \frac{1}{2}(A-B)(\partial_{1}\psi_{0})^{2}+\frac{3}{2}B\bar{\phi }_{\rm kink}(1-\bar{\phi}_{\rm kink}^{2})x\partial_{1}\bar{\phi}_{\rm kink}\] \[= (A-B)m^{2}{\rm sech}^{4}\frac{mx}{\sqrt{2}}-(A-B)m^{2}{\rm sech}^ {6}\frac{mx}{\sqrt{2}}\] \[+\frac{3\sqrt{2}}{4}Bm^{3}x\tanh\frac{mx}{\sqrt{2}}{\rm sech}^{4} \frac{mx}{\sqrt{2}}. \tag{4.35}\]
As discussed in App. D, surface terms from integration by parts appear in the analysis of Eq. (4.32). Equation (4.33) is the result obtained by neglecting these terms. We note that the vanishing of the surface terms is most clearly justified with the APBC, although it is expected to hold for the other boundary conditions as well.
Equation (4.33) also allows us to calculate the quantum correction to the topological charge density (2.10) as discussed in App. A.
## 5 Result
Accumulating these results, the expectation value of EMT to one-loop order is obtained as
\[\langle T^{\mu\nu}(x)\rangle= T^{\mu\nu}_{\rm kink}(x)+\Delta T^{\mu\nu}_{\rm kink}(x), \tag{5.1}\] \[\Delta T^{00}_{\rm kink}(x)= \frac{\sqrt{3}}{6}m^{2}{\rm sech}^{2}\frac{mx}{\sqrt{2}}-\Big{(} \frac{3}{2\pi}+\frac{7\sqrt{3}}{12}\Big{)}m^{2}{\rm sech}^{4}\frac{mx}{\sqrt{ 2}},\] \[+5\Big{(}\frac{3}{8\pi}+\frac{\sqrt{3}}{12}\Big{)}m^{2}{\rm sech} ^{6}\frac{mx}{\sqrt{2}}+\frac{\sqrt{6}}{8}m^{3}x\tanh\frac{mx}{\sqrt{2}}{\rm sech }^{4}\frac{mx}{\sqrt{2}},\] \[-\frac{3\sqrt{2}m}{2\pi L},\] (5.2) \[\Delta T^{11}_{\rm kink}(x)= -\frac{3\sqrt{2}m}{2\pi L}. \tag{5.3}\]
In Fig. 3, we show the behavior of Eqs. (5.1) and (5.2) in the \(L\to\infty\) limit together with the classical value \(T^{00}_{\rm kink}(x)\) at \(\lambda/m^{2}=1\). By taking the spatial integral of \(\langle T^{00}(x)\rangle\), we
obtain the total energy
\[\int_{-L/2}^{L/2}dx\,\langle T^{00}(x)\rangle=E_{\rm kink}+m\Big{(}\frac{\sqrt{6}} {12}-\frac{3\sqrt{2}}{2\pi}\Big{)}, \tag{5.4}\]
which reproduces the result in Ref. [36].
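The total energy shift in Eq. (5.4) can also be checked numerically from Eq. (5.2); the following sketch (assuming NumPy and SciPy) integrates the \(x\)-dependent part of \(\Delta T^{00}_{\rm kink}(x)\) over the whole line and adds the contribution \(-3\sqrt{2}m/(2\pi)\) of the \(1/L\) term integrated over the box of length \(L\).

```python
# Numerical check of the quantum correction to the total energy in Eq. (5.4);
# a sketch assuming NumPy and SciPy, with the 1/L term integrated exactly.
import numpy as np
from scipy.integrate import quad

m = 1.0

def dT00(x):
    s = 1.0/np.cosh(m*x/np.sqrt(2))
    t = np.tanh(m*x/np.sqrt(2))
    return (np.sqrt(3)/6*m**2*s**2
            - (3/(2*np.pi) + 7*np.sqrt(3)/12)*m**2*s**4
            + 5*(3/(8*np.pi) + np.sqrt(3)/12)*m**2*s**6
            + np.sqrt(6)/8*m**3*x*t*s**4)

correction, _ = quad(dT00, -np.inf, np.inf)
correction += -3*np.sqrt(2)*m/(2*np.pi)   # box integral of the -3*sqrt(2)*m/(2*pi*L) term
print(correction, m*(np.sqrt(6)/12 - 3*np.sqrt(2)/(2*np.pi)))  # both ~ -0.4711
```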
A notable feature of Eqs. (5.1), (5.2) and (5.3) is that all \(x\) dependence cancels out in Eq. (5.3) and \(\langle T^{11}(x)\rangle\) becomes a constant. This result is in agreement with the momentum conservation in static systems, \(\partial_{1}T^{11}(x)=0\), which can also be interpreted as the equilibration of the force. While the energy density, i.e. Eq. (5.2), has been investigated in the same model in Refs. [43; 48], \(\langle T^{11}(x)\rangle\) is not analyzed there. Our result (5.2) does not agree with any of them3. The calculation of \(\langle T^{11}(x)\rangle\) and the confirmation of the momentum conservation would thus serve as a check of the validity of each analysis. Although the reproduction of their analyses is difficult, we give some arguments in App. E.
Footnote 3: In Ref. [43; 48], the definition of \(m\) is different from ours. Their results are comparable to ours with a replacement \(m^{2}\to 2m^{2}\).
Although Eq. (5.3) is obtained to leading order in \(1/L\), it is easily confirmed that \(\partial_{1}\,\langle T^{11}(x)\rangle=0\) holds even to higher orders in \(1/L\) as follows. The higher order terms in \(1/L\) come from Eqs. (4.16) and (4.17), and also Eqs. (4.22) and (4.23). Among their effects on the final result, the modifications of \(A\) and \(B\) in Eqs. (4.25) and (4.26) do not affect the cancellation of each term in \(\Delta T^{11}_{\rm kink}(x)\). Also, Eq. (4.23) becomes nonzero at order \(1/L^{2}\), and it modifies \(T_{2}(x)\), \(T_{3}(x)\) and \(T_{4}(x)\). However, one can easily verify that this effect cancels out in \(x\)-dependent terms. Therefore, \(\langle T^{11}(x)\rangle\) is a constant even to higher order in \(1/L\), as it should be from the momentum conservation.
Equation (5.3), however, is nonzero. Equation (5.2) also contains the same term proportional to \(1/L\). The term in Eq. (5.2) contributes to the total energy Eq. (5.4) and is necessary to reproduce the result of Ref. [36]. However, this term vanishes if one defines
Figure 3: Energy density around the kink \(\langle T^{00}(x)\rangle\) at \(\lambda/m^{2}=1\). The classical value \(T^{00}_{\rm kink}(x)\) and the quantum correction \(\Delta T^{00}_{\rm kink}(x)\) in the \(L\to\infty\) limit are also shown by the dashed and dotted lines, respectively.
the energy density of the kink as the \(L\to\infty\) limit of Eq. (102). The total energy defined from this energy density as
\[\int_{-\infty}^{\infty}dx\lim_{L\to\infty}\left\langle T^{00}(x)\right\rangle=E_{ \rm kink}+\frac{\sqrt{6}}{12}m, \tag{103}\]
thus contradicts that of Ref. [36]. This result raises the question of what the correct total energy of the kink is. We note that the quantum correction in Ref. [36] is negative, while Eq. (103) is positive.
There is another issue concerning the \(1/L\) term. Since the stress tensor \(\left\langle T^{11}(x)\right\rangle\) represents the force acting on each space point, this force does work when the system size \(L\) is varied. More specifically, by varying the system size from \(L\) to \(L+\Delta L\), the total energy of the system should be reduced by \(T^{11}\Delta L\) due to this work. However, since the total energy of the system, Eq. (100), does not depend on \(L\), this interpretation is self-contradictory4. There are several possibilities to explain this contradiction. One of them is that \(T^{11}(x)\) may not be interpreted as the force in this system, or that the system investigated here may not physically correspond to one in which kinks and anti-kinks are aligned alternately with the interval \(L\). As for another possibility, we note that our analysis relies on the nonrelativistic approximation since it is justified only when \(P\) is of order \(\mathcal{O}(\lambda^{0})\) as discussed in Sec. 3. The clarification of the problem, however, is beyond our present understanding, and we leave it for future study.
Footnote 4: For the case of the Casimir effect, this correspondence between the stress and the internal energy is fulfilled [27; 34].
## 6 Summary and outlook
In this study, we have explored the EMT distribution around the kink in the \(1+1\) dimensional \(\phi^{4}\) theory to one-loop order. Our final result is given in Eqs. (102) and (103). This result is consistent with the momentum conservation \(\partial_{1}\left\langle\tilde{T}^{11}(x)\right\rangle=0\). The spatial integral of \(\left\langle\tilde{T}^{00}(x)\right\rangle\) reproduces the total energy in Ref. [36], while our result obtained in a finite system of length \(L\) contains a constant term proportional to \(1/L\), whose physical interpretation is problematic.
There are many possible future extensions of the present study. Among them, investigations of kinks and other localized structures in other \(1+1\) dimensional models are straightforward ones. An example is the sine-Gordon model, which has a stable kink solution and a time-dependent solution called the breather mode [36]. While the quantum correction to their total energy has been investigated [37], the EMT distribution has not been analyzed so far. This analysis will be reported in a forthcoming publication [64]. Next, exploring the quantum effects on localized structures in higher-dimensional systems is a further interesting subject. For example, the \(2+1\) dimensional \(\phi^{4}\) theory has a classical solution
\[\phi(x,y)=\phi_{\rm kink}(x;X), \tag{104}\]
having the translational invariance in the \(y\) direction, which is the surface connecting the two vacua. By quantizing this system, the position of the kink is obscured. Although this effect obscures the position of the surface as well, the surface will be well defined when its positions are fixed by hand at two points, say \(y=\pm R/2\). An investigation of the EMT distribution in this system will give us novel insights into the quantum effects on the surface. The problem could also be extended to a \(3+1\) dimensional system, where the classical solutions can have string-like structures, such as the vortex solution in the Abelian-Higgs model [33]. The analysis of quantum effects in this system will provide us with a microscopic basis for the effective string models [65; 66; 29; 67], as well as for the numerical results on flux tubes [67; 26; 32].
The authors thank Shunzo Kumano for valuable comments. They are also grateful to Teiji Kunihiro and Hiroshi Suzuki for their encouragement. This work was supported by JST SPRING, Grant Number JPMJSP2138, and JSPS KAKENHI (Grants No. JP19H05598, No. 20H01903, No. 22K03619).
## Appendix A Topological charge density
In this appendix, we calculate the one-loop correction to the topological charge density (2.10) in the kink sector. From Eq. (4.30), the expectation value of \(j^{0}(x)\) in the kink sector is given by
\[\langle j^{0}(x)\rangle_{\rm K}= j^{0}_{\rm kink}(x)+\Delta j^{0}_{\rm kink}(x),\] (A.1) \[\Delta j^{0}_{\rm kink}(x)= \frac{1}{2v}\langle\partial_{1}\tilde{\eta}\rangle_{\rm K}\] \[= \frac{1}{2}\big{[}(\sqrt{2}A-\frac{7\sqrt{2}}{4}B)m{\rm sech}^{2 }\frac{mx}{\sqrt{2}}+\frac{3\sqrt{2}}{2}(-A+B)m{\rm sech}^{4}\frac{mx}{\sqrt{ 2}}\] \[+\frac{3}{2}Bm^{2}x\tanh\frac{mx}{\sqrt{2}}{\rm sech}^{2}\frac{mx} {\sqrt{2}}\big{]}.\] (A.2)
Figure 4: Topological charge density around a kink at \(\lambda/m^{2}=1\).
The behavior of Eqs. (A.1) and (A.2) at \(\lambda/m^{2}=1\) is shown in Fig. 4. The spatial integral of Eq. (A.2) is given by
\[\int dx\Delta j_{\rm kink}^{0}(x)=0, \tag{A.3}\]
which leads to \(\int dx\left\langle j^{0}(x)\right\rangle_{\rm K}=1\).
## Appendix B Mass renormalization
In this Appendix, we discuss the dependence of our results on the mass renormalization condition and clarify their mutual relation. It is also shown that the bare perturbation theory gives the same result as Eqs. (107)-(109).
To resolve these issues, we first point out that the terms arising from the mass counterterm cancel out in \(\left\langle T^{11}(x)\right\rangle\). This can be checked by formally accumulating terms including \(\delta m^{2}\) in Eq. (111). Such terms exist in \(T_{2}(x)\), \(T_{3}(x)\) and \(T_{4}(x)\). First, \(T_{2}(x)\) contains
\[\delta T_{2}(x)=\frac{1}{2}\delta m^{2}(\phi_{\rm kink}^{2}-v^{2})=-\frac{m^{2 }}{2\lambda}\delta m^{2}{\rm sech}^{2}\frac{mx}{\sqrt{2}}. \tag{111}\]
Next, in \(T_{3}(x)\) and \(T_{4}(x)\), such terms arise from the first or second line of Eq. (102)
\[\left\langle\delta\tilde{\eta}(x)\right\rangle_{\rm K}=-i\delta m^{2}\int \frac{dy}{2\pi}\phi_{\rm kink}(y;0)\tilde{G}(x,y)=-i\frac{m}{\sqrt{\lambda}} \delta m^{2}H_{1}(x), \tag{112}\]
that gives
\[\delta T_{3}(x)= (\partial_{1}\phi_{\rm kink})\langle\partial_{1}\delta\tilde{ \eta}\rangle_{\rm K}=\frac{m^{2}\delta m^{2}}{2\lambda}\Big{(}-{\rm sech}^{4} \frac{mx}{\sqrt{2}}+\frac{mx}{\sqrt{2}}\tanh\frac{mx}{\sqrt{2}}{\rm sech}^{4} \frac{mx}{\sqrt{2}}\Big{)}, \tag{113}\] \[\delta T_{4}(x)= \lambda\phi_{\rm kink}(\phi_{\rm kink}^{2}-v^{2})\langle\delta \tilde{\eta}\rangle_{\rm K}\] \[= \frac{m^{2}\delta m^{2}}{2\lambda}\Big{(}{\rm sech}^{2}\frac{mx} {\sqrt{2}}-{\rm sech}^{4}\frac{mx}{\sqrt{2}}+\frac{mx}{\sqrt{2}}\tanh\frac{ mx}{\sqrt{2}}{\rm sech}^{4}\frac{mx}{\sqrt{2}}\Big{)}. \tag{114}\]
From Eqs. (111), (113) and (114) one finds
\[-\delta T_{2}(x)+\delta T_{3}(x)-\delta T_{4}(x)=0. \tag{115}\]
which means that these terms cancel out in Eq. (111).
From Eq. (115), it is concluded that \(\left\langle T^{11}(x)\right\rangle\) does not depend on \(\delta m^{2}\), and hence the renormalization condition. In particular, the momentum conservation \(\partial_{1}\left\langle T^{11}(x)\right\rangle=0\) is always satisfied. Equation (115) also tells us that Eq. (109) is obtained in the bare perturbation theory (BPT), where the mass counterterm does not exist.
Next, let us focus on \(\left\langle T^{00}(x)\right\rangle\). In this case, the change of the renormalized mass modifies the classical term \(T_{\rm kink}^{00}(x)\), which gives rise to additional terms at order \(\lambda^{0}\). To clarify the discussion, let us consider two renormalization conditions that give different renormalized masses \(m_{1}\) and \(m_{2}\), whose difference \(m_{1}-m_{2}\) is of order \(\lambda\). Then, in
each renormalization, the classical energy density is given by \(m_{1}^{4}/(2\lambda)\text{sech}^{4}(m_{1}x/\sqrt{2})\) and \(m_{2}^{4}/(2\lambda)\text{sech}^{4}(m_{2}x/\sqrt{2})\), respectively, whose difference is
\[(m_{1}-m_{2})\frac{\partial T^{00}_{\text{kink}}(x)}{\partial m}, \tag{100}\]
where the value of the mass in \(\partial T^{00}_{\text{kink}}(x)/\partial m\) is irrelevant at order \(\lambda^{0}\). On the other hand, the values of the mass counterterms \(\delta m_{1}^{2}\) and \(\delta m_{2}^{2}\) in the two renormalizations differ by \(\delta m_{1}^{2}-\delta m_{2}^{2}=m_{1}^{2}-m_{2}^{2}\sim(m_{1}-m_{2})m_{1}\). The corresponding difference of Eq. (4.4) is \(\delta T_{2}(x)+\delta T_{3}(x)+\delta T_{4}(x)\) with \(\delta m^{2}\) replaced by \(\delta m_{1}^{2}-\delta m_{2}^{2}\). This modification exactly cancels against Eq. (100). This can be shown from the relation
\[-\frac{\delta m^{2}}{2m}\frac{\partial T^{00}_{\text{kink}}(x)}{\partial m}= \delta T_{2}(x)+\delta T_{3}(x)+\delta T_{4}(x)+\mathcal{O}(\lambda), \tag{101}\]
that is obtained by an explicit calculation of the left-hand side.
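This relation can also be verified symbolically. The following sketch (assuming SymPy is available) uses the classical energy density \(m^{4}/(2\lambda)\,{\rm sech}^{4}(mx/\sqrt{2})\) quoted above together with \(\delta T_{2}\), \(\delta T_{3}\) and \(\delta T_{4}\), and confirms that the two sides agree.

```python
# Symbolic check of the relation between the mass-counterterm terms and the
# m-derivative of the classical energy density; a sketch assuming SymPy.
import sympy as sp

x, m, lam, dm2 = sp.symbols('x m lambda delta_m2', positive=True)
u = m*x/sp.sqrt(2)

T00_cl = m**4/(2*lam)*sp.sech(u)**4   # classical energy density quoted above

dT2 = -m**2*dm2/(2*lam)*sp.sech(u)**2
dT3 = m**2*dm2/(2*lam)*(-sp.sech(u)**4 + u*sp.tanh(u)*sp.sech(u)**4)
dT4 = m**2*dm2/(2*lam)*(sp.sech(u)**2 - sp.sech(u)**4 + u*sp.tanh(u)*sp.sech(u)**4)

lhs = -dm2/(2*m)*sp.diff(T00_cl, m)
print(sp.simplify(lhs - (dT2 + dT3 + dT4)))  # prints 0
```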
Equation (101) also tells us that Eq. (100) is obtained even in the BPT. In the BPT, we use the bare mass \(m_{0}\) that is related to the renormalized mass \(m\) as \(m^{2}=m_{0}^{2}+\delta m^{2}\), while the mass counterterm is not introduced. In this case, since the mass in \(T^{00}_{\text{kink}}(x)\) is \(m_{0}\), Eq. (101) appears at order \(\lambda^{0}\) when \(T^{00}_{\text{kink}}(x)\) is rewritten by \(m\), which is exactly the term coming from the mass counterterm in the renormalized perturbation theory. Therefore, the results in the bare and renormalized perturbation theories are the same at order \(\lambda^{0}\) as they should be.
## Appendix C Mode-number cutoff
In this appendix we derive Eqs. (4.22) and (4.23). For this, we use the mode-number cutoff (MNC) prescription. We refer to Refs. [35; 60; 36] for a more detailed discussion on the MNC. In particular, see Ref. [60] for the treatment of the phase shift.
Figure 5: Eigenvalue distributions in the kink and vacuum sectors for the APBC. The kink sector has two bound states shown by the orange lines. The continuum spectra shown by the blue lines are doubly degenerate. On the right-hand side of Eq. (110), the subtraction is taken between the modes connected by the arrows.
We start from the form of subtraction
\[\frac{1}{L}\sum_{n=-\infty}^{\infty}f(q_{n})-\frac{1}{L}\sum_{n=-\infty}^{\infty} f(k_{n}), \tag{108}\]
for an even function \(f(x)\), where \(q_{n}\) and \(k_{n}\) are the discretized momenta in the kink and vacuum sectors in Eqs. (30) and (33). Using \(f(x)=f(-x)\), Eq. (108) is rewritten as
\[\frac{2}{L}\sum_{n=0}^{\infty}f(q_{n})-\frac{2}{L}\sum_{n=0}^{\infty}f(k_{n}). \tag{109}\]
Note that \(q_{n}=-q_{-n-1}\) and \(k_{n}=-k_{-n-1}\) from Eqs. (30) and (33).
To perform the subtraction in Eq. (109), one has to introduce the upper limits of the two sums to make them finite, and then take them to infinity keeping the difference finite. In the MNC, this cutoff is introduced in such a way that the mode numbers in the kink and vacuum sectors are equivalent. This prescription is justified, for example, in the lattice regularization. Then, since the kink sector has two bound states that are not included in Eq. (109), and there are two modes for each \(n\) in Eq. (109), the upper bound for the kink sector is one smaller than the vacuum sector. Therefore, in the MNC the sum (109) is defined as
\[\frac{2}{L}\sum_{n=0}^{N-1}f(q_{n})-\frac{2}{L}\sum_{n=0}^{N}f(k_{n})=\frac{2 }{L}\sum_{n=0}^{N-1}\Big{(}f(q_{n})-f(k_{n+1})\Big{)}-\frac{2}{L}f(k_{0}), \tag{110}\]
where \(N\) is half the number of the modes and is taken to infinity at the end of the calculation. On the right-hand side of Eq. (110), the subtraction is taken between the modes off by one as in Fig. 5, and the remaining \(n=0\) mode in the vacuum sector is put outside the sum.
From Eqs. (30) and (33), one sees that
\[q_{n}=k_{n+1}-\frac{2\pi+\delta_{p}(q_{n})}{L}=k_{n+1}-\frac{2\pi+\delta_{p}( k_{n+1})}{L}+\mathcal{O}(L^{-2}). \tag{111}\]
In the limit \(L\to\infty\) we thus have
\[f(q_{n})-f(k_{n+1})=-f^{\prime}(k_{n+1})\frac{2\pi+\delta_{p}(k_{n+1})}{L}. \tag{112}\]
Plugging Eq. (112) into Eq. (110), Eq. (108) is calculated to be
\[\frac{1}{L}\sum_{n=-\infty}^{\infty}f(q_{n})-\frac{1}{L}\sum_{n=- \infty}^{\infty}f(k_{n})\] \[=-\frac{2}{L}\lim_{N\to\infty}\sum_{n=0}^{N-1}f^{\prime}(k_{n+1}) \frac{2\pi+\delta_{p}(k_{n+1})}{L}-\frac{2}{L}f(k_{0})\] \[\xrightarrow[L\to\infty]{}-\frac{2}{L}\int_{0}^{\infty}\frac{dk}{ 2\pi}f^{\prime}(k)(\delta_{p}(k)+2\pi)-\frac{2}{L}f(k_{0})\] \[=-\frac{1}{\pi L}\Big{[}f(k)(\delta_{p}(k)+2\pi)\Big{]}_{0}^{ \infty}+\frac{2}{L}\int_{0}^{\infty}\frac{dk}{2\pi}f(k)\delta^{\prime}(k)- \frac{2}{L}f(k_{0})\] \[=-\frac{3\sqrt{2}}{L\pi}\lim_{k\to\infty}\Big{(}f(k)\frac{m}{k} \Big{)}+\frac{2}{L}\int_{0}^{\infty}\frac{dk}{2\pi}f(k)\delta^{\prime}_{p}(k), \tag{113}\]
where in the last equality we used Eq. (23). The last term in Eq. (102) cancels out with \(\delta^{\prime}_{p}(q_{n})/L\) term in Eqs. (4.22) and (4.23). Substituting \(f(x)=\sqrt{x^{2}+2m^{2}}\) and \(f(x)=1/\sqrt{x^{2}+2m^{2}}\) into this result gives Eqs. (4.22) and (4.23), respectively.
## Appendix D Calculations of \(H_{i}(x)\)
In this Appendix, we calculate \(H_{i}(x)\) in Eq. (4.32) that appear in the analysis of the tadpole diagram in Sec. 4.4.
We start from an identity [57]5,
Footnote 5: See, Eq. (4.7) of Ref. [57].
\[\int dy\partial_{y}^{2}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y)= \frac{i}{2}x\partial_{x}\bar{\phi}_{\rm kink}(x). \tag{104}\]
Substituting the EoM
\[(\partial_{y}^{2}+m^{2})\bar{\phi}_{\rm kink}(y)=m^{2}\bar{\phi} _{\rm kink}^{3}(y), \tag{105}\]
into Eq. (104), we obtain
\[\int dy\partial_{y}^{2}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y) =m^{2}\int dy(\bar{\phi}_{\rm kink}^{3}(y)-\bar{\phi}_{\rm kink}( y))\tilde{G}(x,y)\] \[=m^{2}(H_{3}(x)-H_{1}(x)). \tag{106}\]
Integrating by parts, Eq. (104) is also calculated to be
\[\int dy\partial_{y}^{2}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y)= \big{[}\partial_{y}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y)\big{]}_ {-L/2}^{L/2}-\int dy\partial_{y}\bar{\phi}_{\rm kink}(y)\partial_{y}\tilde{G} (x,y)\] \[= -\big{[}\bar{\phi}_{\rm kink}(y)\partial_{y}\tilde{G}(x,y)\big{]} _{-L/2}^{L/2}+\int dy\bar{\phi}_{\rm kink}(y)\partial_{y}^{2}\tilde{G}(x,y). \tag{107}\]
Provided that the surface terms in the second and third lines vanish, one obtains
\[\int dy\partial_{y}^{2}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y)= \int dy\bar{\phi}_{\rm kink}(y)\partial_{y}^{2}\tilde{G}(x,y). \tag{108}\]
Here, we note that the surface terms in Eq. (107) vanish trivially when the APBC (or Dirichlet) BC is imposed on \(\tilde{\eta}(x)\). We employ the APBC because of this cancellation, although the surface terms would vanish even for other BCs owing to \(\lim_{x\to\pm\infty}\partial_{x}\bar{\phi}(x)=0\) and \(\lim_{y\to\pm\infty}G(x,y)=0\).
Plugging
\[(-\partial_{y}^{2}-m^{2}+3m^{2}\bar{\phi}_{\rm kink}^{2}(y)) \tilde{G}(x,y)=-i\delta(x-y), \tag{109}\]
into Eq. (108) gives
\[\int dy\partial_{y}^{2}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y) =\int dy\bar{\phi}_{\rm kink}(y)\big{\{}(-m^{2}+3m^{2}\bar{\phi} _{\rm kink}^{2}(y))\tilde{G}(x,y)+i\delta(x-y)\big{\}}\] \[=m^{2}(3H_{3}(x)-H_{1}(x))+i\bar{\phi}_{\rm kink}(x). \tag{110}\]
From Eqs. (108) and (109), one finds
\[H_{1}(x) =-\frac{i}{2m^{2}}\partial_{x}\big{(}x\bar{\phi}_{\rm kink}(x)\big{)}, \tag{110}\] \[H_{3}(x) =-\frac{i}{2m^{2}}\bar{\phi}_{\rm kink}(x). \tag{111}\]
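For the reader's convenience, the elimination goes as follows: the two evaluations of \(\int dy\,\partial_{y}^{2}\bar{\phi}_{\rm kink}(y)\tilde{G}(x,y)\) obtained above give

\[\frac{i}{2}x\partial_{x}\bar{\phi}_{\rm kink}(x)=m^{2}\big{(}H_{3}(x)-H_{1}(x)\big{)}\quad\text{and}\quad\frac{i}{2}x\partial_{x}\bar{\phi}_{\rm kink}(x)=m^{2}\big{(}3H_{3}(x)-H_{1}(x)\big{)}+i\bar{\phi}_{\rm kink}(x),\]

and subtracting the two relations yields \(2m^{2}H_{3}(x)=-i\bar{\phi}_{\rm kink}(x)\), from which \(H_{1}(x)=H_{3}(x)-\frac{i}{2m^{2}}x\partial_{x}\bar{\phi}_{\rm kink}(x)\).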
To calculate \(H_{5}(x)\), we use the following relations
\[\int dy\bar{\phi}^{3}_{\rm kink}(y)\partial_{y}^{2}\tilde{G}(x,y) =\int dy\bar{\phi}^{3}_{\rm kink}(y)\big{\{}(-m^{2}+3m^{2}\bar{ \phi}^{2}_{\rm kink}(y))\tilde{G}(x,y)+i\delta(x-y)\big{\}}\] \[=m^{2}(3H_{5}(x)-H_{3}(x))+i\bar{\phi}^{3}_{\rm kink}(x), \tag{112}\] \[\int dy\bar{\phi}^{3}_{\rm kink}(y)\partial_{y}^{2}\tilde{G}(x,y) =\int dy\partial_{y}^{2}\big{(}\bar{\phi}^{3}_{\rm kink}(y)\big{)} \tilde{G}(x,y)\] \[=m^{2}(6H_{5}(x)-9H_{3}(x)+3H_{1}(x)), \tag{113}\]
which lead to
\[H_{5}(x)=\frac{i}{2m^{2}}\partial_{x}\big{(}x\bar{\phi}_{\rm kink}(x)\big{)}- \frac{4i}{3m^{2}}\bar{\phi}_{\rm kink}(x)+\frac{i}{3m^{2}}\bar{\phi}^{3}_{\rm kink }(x). \tag{114}\]
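As a consistency check, equating the two evaluations of \(\int dy\,\bar{\phi}^{3}_{\rm kink}(y)\partial_{y}^{2}\tilde{G}(x,y)\) above gives \(m^{2}\big{(}3H_{5}(x)-8H_{3}(x)+3H_{1}(x)\big{)}=i\bar{\phi}^{3}_{\rm kink}(x)\), and substituting the explicit forms one indeed finds

\[3H_{5}-8H_{3}+3H_{1}=\frac{i}{m^{2}}\Big{[}\frac{3}{2}\partial_{x}\big{(}x\bar{\phi}_{\rm kink}\big{)}-4\bar{\phi}_{\rm kink}+\bar{\phi}^{3}_{\rm kink}+4\bar{\phi}_{\rm kink}-\frac{3}{2}\partial_{x}\big{(}x\bar{\phi}_{\rm kink}\big{)}\Big{]}=\frac{i}{m^{2}}\bar{\phi}^{3}_{\rm kink}.\]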
## Appendix E Other approaches
The energy densities obtained in Refs. [43; 48] and our result for \(\langle T^{00}(x)\rangle\) differ from one another. In this Appendix, in order to gain insights into the origin of the difference, we give a brief review of the regularization employed in Ref. [43], called the local-mode regularization (LMR).
In the LMR, vacuum subtraction is performed in an infinitely-long system. Since the eigenfunctions Eq. (20) form a continuous spectrum in this case, it is convenient to use the orthogonality condition of eigenfunctions
\[\int_{-\infty}^{\infty}dx\tilde{\psi}_{0}(x)\tilde{\psi}_{0}(x)= \int_{-\infty}^{\infty}dx\tilde{\psi}_{1}(x)\tilde{\psi}_{1}(x)=1,\quad\int_{ -\infty}^{\infty}dx\tilde{\psi}_{q}^{*}(x)\tilde{\psi}_{p}(x)=\delta(q-p), \tag{115}\]
in place of Eq. (24).
According to Ref. [43], the LMR introduces the local mode density
\[\rho^{\rm K}_{\Lambda}(x)\equiv\sum_{l=0}^{N}\tilde{\psi}_{l}^{*}(x)\tilde{ \psi}_{l}(x)=\tilde{\psi}_{0}^{2}(x)+\tilde{\psi}_{1}^{2}(x)+2\int_{0}^{ \Lambda}\frac{dq}{2\pi}|\tilde{\psi}_{q}(x)|^{2}, \tag{116}\]
for the kink sector with a cutoff \(\Lambda=2\pi N/L\) and the corresponding one \(\rho^{\rm V}_{\Lambda}(x)\) for the vacuum sector
\[\rho^{\rm V}_{\Lambda}(x)=2\int_{0}^{\Lambda}\frac{dk}{2\pi}. \tag{117}\]
Then, the UV cutoffs in the two sectors, \(\Lambda_{\rm K}\) and \(\Lambda_{\rm V}\), which depend on \(x\), are introduced so that the local mode densities of the two sectors are equal,
\[\rho^{\rm K}_{\Lambda_{\rm K}}(x) =\rho^{\rm V}_{\Lambda_{\rm V}}(x), \tag{118}\] \[\Lambda_{\rm V} =\Lambda_{\rm K}+\Delta\Lambda(x). \tag{119}\]
Using the completeness relation
\[\int_{-\infty}^{\infty}\frac{dq}{2\pi}\{|\bar{\psi}_{q}|^{2}-1\}+\tilde{\psi}_{0}^ {2}+\tilde{\psi}_{1}^{2}=0, \tag{101}\]
Eq. (100) is given by
\[\rho_{\Lambda}^{\rm K}=-2\int_{\Lambda}^{\infty}\frac{dq}{2\pi}|\bar{\psi}_{q} |^{2}+2\int_{0}^{\infty}\frac{dq}{2\pi}. \tag{102}\]
From Eqs. (102) and (100) one obtains
\[\Delta\Lambda(x) =\int_{\Lambda}^{\infty}dq\big{\{}1-|\bar{\psi}_{q}|^{2}\big{\}}\] \[=\int_{\Lambda}^{\infty}dq\Big{\{}\frac{3m^{2}}{2(q^{2}+2m^{2})}\psi_{0}^{2}+\frac{3m^{2}}{2q^{2}+m^{2}}\psi_{1}^{2}\Big{\}}=\frac{3m^{2}}{2\Lambda}(\psi_{0}^{2}+\psi_{1}^{2})+\mathcal{O}(\Lambda^{-2})\] \[=\frac{3m^{2}}{2\Lambda}{\rm sech}^{2}\frac{mx}{\sqrt{2}}+\mathcal{O}(\Lambda^{-2}). \tag{103}\]
Using Eq. (103), the vacuum subtractions in Eqs. (4.22) and (4.23) are calculated to be
\[\frac{1}{2}\int_{-\infty}^{\infty}\frac{dq}{2\pi}\sqrt{q^{2}+2m^ {2}}-\frac{1}{2}\int_{-\infty}^{\infty}\frac{dk}{2\pi}\sqrt{k^{2}+2m^{2}}\] \[=\int_{0}^{\Lambda}\frac{dq}{2\pi}\sqrt{q^{2}+2m^{2}}-\int_{0}^{ \Lambda+\Delta\Lambda}\frac{dk}{2\pi}\sqrt{k^{2}+2m^{2}}\] \[=-\int_{\Lambda}^{\Lambda+\Delta\Lambda}\frac{dk}{2\pi}\sqrt{k^{2 }+2m^{2}}=-\frac{\Lambda\Delta\Lambda}{2\pi}+\mathcal{O}(\Lambda^{-1})\] \[\xrightarrow[\Lambda\to\infty]{}-\frac{3m^{2}}{4\pi}{\rm sech}^{ 2}\frac{mx}{\sqrt{2}}, \tag{104}\] \[\frac{1}{2}\int_{-\infty}^{\infty}\frac{dq}{2\pi}\frac{1}{\sqrt{q ^{2}+2m^{2}}}-\frac{1}{2}\int_{-\infty}^{\infty}\frac{dk}{2\pi}\frac{1}{\sqrt{ k^{2}+2m^{2}}}\] \[=\frac{\Delta\Lambda}{2\pi\Lambda}+\mathcal{O}(\Lambda^{-3}) \xrightarrow[\Lambda\to\infty]{}0. \tag{105}\]
Since the LMR is needed only for the subtraction between divergent sums, the replacement of Eqs. (4.22) and (4.23) with Eqs. (104) and (105), respectively, is the only change in the LMR compared with the MNC. Thus, the final result for the EMT distribution in the LMR is obtained by simply replacing
\[-\frac{3\sqrt{2}m}{2\pi L}\longrightarrow-\frac{3m^{2}}{4\pi}{\rm sech}^{2} \frac{mx}{\sqrt{2}}, \tag{106}\]
in Eqs. (5.2) and (5.3). This result gives \(\Delta T_{\rm kink}^{11}(x)=-3m^{2}/(4\pi){\rm sech}^{2}(mx/\sqrt{2})\), which is not consistent with the momentum conservation \(\partial_{1}T^{11}=0\), while the spatial integral of \(\langle\tilde{T}^{00}(x)\rangle\) reproduces the result in Ref. [36] even after the replacement.
We, however, note that the result for \(\Delta T_{\rm kink}^{00}(x)\) obtained with the replacement (106) does not agree with the energy density in Ref. [43]. This suggests that there is a difference in the manipulations other than the vacuum subtraction scheme. On the other hand, we found that this result agrees with the energy density in Ref. [48], although the point-splitting regularization is employed there. The agreement implies the similarity of the two regularization schemes. We, however, do not pursue the details further in the present study.
|
2302.13424 | Weighted contractivity for derivatives of functions in the Bergman space
on the unit disk | In a recent paper, Ramos and Tilli proved certain sharp inequality for
analytic functions in subdomains of the unit disk. We will generalize their
main inequality for derivatives of functions from Bergman space with respect to
two different measures. Some connections with an analog for the Fock spaces,
earlier investigated in Kalaj, will also be discussed. | David Kalaj, Petar Melentijević | 2023-02-26T22:27:27Z | http://arxiv.org/abs/2302.13424v1 | # Weighted contractivity for derivatives of functions in the Bergman space on the unit disk
###### Abstract.
In a recent paper [14], Ramos and Tilli proved a certain sharp inequality for analytic functions in subdomains of the unit disk. We will generalize their main inequality for derivatives of functions from the Bergman space with respect to two different measures. Some connections with an analog for the Fock spaces, earlier investigated in [6], will also be discussed.
Key words and phrases: Hyperbolic metric, Bergman spaces, isoperimetric inequality, sharp estimates. The second author is partially supported by MPNTR grant 174017, Serbia.
## 1. Introduction
**Theorem 1.1**.: _Let \(\alpha>-1,\) and \(s>0\) be fixed. Among all functions \(f\in\mathcal{A}_{\alpha}\) and among all measurable sets \(\Omega\subset\mathbb{D}\) such that \(\mu(\Omega)=s\), the quotient \(R(f,\Omega)\) as defined in (1.2) satisfies the inequality_
\[R(f,\Omega)\leq R(1,\mathbb{D}_{s}), \tag{1.3}\]
_where \(\mathbb{D}_{s}\) is a disc centered at the origin with \(\mu(\mathbb{D}_{s})=s.\) Moreover, there is equality in (1.3) if and only if \(f\) is a multiple of some reproducing kernel \(K_{w}\) and \(\Omega\) is a ball centered at \(w\), such that \(\mu(\Omega)=s\)._
Note that, in the Poincare disc model in two dimensions, balls in the pseudohyperbolic metric coincide with Euclidean balls, but the Euclidean and hyperbolic centers differ in general, as well as the respective radii.
The proof of Theorem 1.1 is based on the isoperimetric inequality for hyperbolic metric on the unit disk and the point-wise estimate ([16])
\[|f(z)|^{2}\leq(1-|z|^{2})^{-\alpha}\|f\|_{\alpha}^{2},\ \ z\in\mathbb{D}. \tag{1.4}\]
The aim of this paper is to generalize this result for the higher order derivatives of holomorphic functions. In order to do so, define the Gaussian hypergeometric function
\[F(a,b;c;z)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}n!}z^{n},\]
where \((d)_{n}=d(d+1)\cdots(d+n-1)\) is the Pochhammer symbol.
We begin by the following lemma (an analog of (1.4) for derivatives):
**Lemma 1.2**.: _Let_
\[g(|z|)=(\alpha)_{n}\cdot n!\cdot(1-|z|^{2})^{-\alpha-2n}F\left(1-\alpha-n,-n;1;|z|^{2}\right).\]
_If \(f\in A_{\alpha}\), then_
\[|f^{(n)}(z)|^{2}\leq g(|z|)\|f\|_{\alpha}^{2}.\]
Proof.: From the reproducing formula for functions in the Bergman space
\[f(z)=\int_{\mathbb{D}}\frac{f(w)}{(1-z\overline{w})^{\alpha}}dA_{\alpha}(w)\]
we find
\[f^{(n)}(z)=\int_{\mathbb{D}}\frac{f(w)\alpha(1+\alpha)\cdots(n+\alpha-1) \overline{w}^{n}}{(1-z\overline{w})^{n+\alpha}}dA_{\alpha}(w)\]
and consequently, by the Cauchy-Schwarz inequality:
\[|f^{(n)}(z)|\leq\alpha(1+\alpha)\cdots(n+\alpha-1)\|f\|_{2,\alpha}\sqrt{\int _{\mathbb{D}}|1-z\overline{w}|^{-2\alpha-2n}|w|^{2n}dA_{\alpha}(w)}.\]
The integral can be calculated using polar coordinates and Parseval's formula, thus giving:
\[(\alpha-1) \int_{\mathbb{D}}|1-z\overline{w}|^{-2\alpha-2n}|w|^{2n}(1-|w|^{2} )^{\alpha}dA(w)\] \[=2(\alpha-1)\int_{0}^{1}\rho^{2n+1}(1-\rho^{2})^{\alpha}\bigg{(} \frac{1}{2\pi}\int_{0}^{2\pi}|1-z\rho e^{i\theta}|^{-2\alpha-2n}d\theta\bigg{)}d\rho\] \[=2(\alpha-1)\sum_{k=0}^{+\infty}\binom{\alpha+n+k-1}{k}^{2}|z|^{2 k}\int_{0}^{1}\rho^{2n+2k+1}(1-\rho^{2})^{\alpha}d\rho\] \[=\frac{n!}{(\alpha)_{n}}F(n+1,\alpha+n;1;|z|^{2}).\]
Hence,
\[|f^{(n)}(z)|^{2}\leq n!(\alpha)_{n}F(n+1,\alpha+n;1;|z|^{2})\|f\|_{2,\alpha}^{ 2},\]
and, by Euler's transformation, the last hypergeometric function is equal to \((1-|z|^{2})^{-2n-\alpha}F(-n,-\alpha-n+1;1;|z|^{2}).\)
Similar estimates for the first and higher order derivatives for functions in weighted Bergman and Fock spaces can be found in [3, 12].
Now define two weighted measures
\[d\mu_{n,\alpha}(z)=\frac{(\alpha+2n-1)(1-|z|^{2})^{\alpha+2n}dxdy}{\pi(\alpha )_{n}\cdot n!\cdot F\left(1-\alpha-n,-n;1;|z|^{2}\right)},z=x+iy,\]
and
\[d\nu_{n,\alpha}(z)=\frac{\Gamma(\alpha)(1-|z|^{2})^{\alpha+2n}dxdy}{\pi\Gamma( \alpha+2n-1)},z=x+iy,\]
and consider the space of holomorphic functions \(f\) defined on the unit disk, so that
\[\|f\|_{n,\alpha}^{2}:=\int_{\mathbb{D}}|f(z)|^{2}d\mu_{n,\alpha}(z)<\infty.\]
Denote this space by \(\mathcal{A}_{n,\alpha}.\) Observe that, for \(n=0,\)\(\mathcal{A}_{n,\alpha}=\mathcal{A}_{\alpha}.\)
The motivation for considering the first of them comes from the pointwise estimate (1.4). Since every measure of the form \(C(1-|z|^{2})^{\alpha+2n}\) on the unit disk \(\mathbb{D}\) is equivalent to \(d\mu_{n,\alpha},\) it also makes sense to consider \(d\nu_{n,\alpha}.\) At the end of the paper we will see that, after a certain limiting process, the results for the measure \(d\mu_{n,\alpha}\) can be transferred to the corresponding results for Fock spaces, proved earlier in [6].
Now we formulate the following extension of Theorem 1.1 for \(n\geq 1.\)
**Theorem 1.3**.: _Assume that \(f\in\mathcal{A}_{\alpha}\) and define the \(n-\)quotient_
\[R_{n,\mu}(f,\Omega)=\frac{\int_{\Omega}|f^{(n)}(z)|^{2}d\mu_{n,\alpha}}{\|f\|_ {\mathcal{A}_{\alpha}}^{2}}\]
_and the analogous \(R_{n,\nu}(f,\Omega)\) with the measure \(d\nu_{n,\alpha}.\) Then we have the following sharp inequalities_
\[R_{n,\mu}(f,\Omega)\leq\left(1-(1+s/\pi)^{1-X}\right),\quad\text{for}\quad 1 \leq n\leq 3, \tag{1.5}\]
\[R_{n,\nu}(f,\Omega)\leq\left(1-(1+s/\pi)^{1-2n-\alpha}\right),\quad\text{for} \quad n\geq 1, \tag{1.6}\]
_where \(\mu(\Omega)=\nu(\Omega)=s\) and \(X=(n+1)(n+\alpha).\) The equality cannot be attained in either of these two estimates._
We believe that (1.5) holds for each \(n\geq 1,\) but we were not able to prove this.
## 2. Proof of Theorem 1.3
In this section we will prove Theorem 1.3. In order to do this, we need the following lemmas.
**Lemma 2.1**.: _For \(\alpha>1\) and \(n\in\mathbb{N},\) we have_
\[\int_{\mathbb{D}}|f^{(n)}(z)|^{2}(1-|z|^{2})^{-2}d\mu_{n,\alpha}(z)\leq(\alpha -1)\int_{\mathbb{D}}|f(z)|^{2}(1-|z|^{2})^{\alpha-2}\frac{dxdy}{\pi}\]
_and the analogous with \(d\nu_{n,\alpha}\) in the place of \(d\mu_{n,\alpha}.\)_
We postpone the proof of this lemma to the next section.
**Lemma 2.2**.: _For \(n\geq 0\) and \(\alpha>1,\) we have_
\[\Delta\log g(|z|)\leq 4(n+1)(n+\alpha)(1-|z|^{2})^{-2}.\]
Proof.: Let us denote \(F(t)=F(n+1,n+\alpha;1;t)\). From the formula for laplacian in polar coordinates we find
\[\Delta\log g(\rho) =\Delta\log F(\rho^{2})=\frac{1}{\rho}\frac{\partial}{\partial \rho}\log F(\rho^{2})+\frac{\partial^{2}}{\partial\rho^{2}}\log F(\rho^{2})\] \[=\frac{4F^{\prime}(\rho^{2})F(\rho^{2})+4\rho^{2}F(\rho^{2})F^{ \prime\prime}(\rho^{2})-4\rho^{2}(F^{\prime}(\rho^{2}))^{2}}{F^{2}(\rho^{2})}.\]
Hence the inequality we intend to prove is equivalent to
\[F(t)F^{\prime}(t)+tF(t)F^{\prime\prime}(t)-t(F^{\prime}(t))^{2}\leq(n+1)(n+ \alpha)(1-t)^{-2}F^{2}(t).\]
This inequality follows from the next two inequalities:
\[(n+1)(n+\alpha)(1-t)^{-1}(F(t))^{2}\geq F^{\prime}(t)F(t)\]
and
\[(n+1)(n+\alpha)((1-t)^{-2}-(1-t)^{-1})(F(t))^{2}\geq t(F(t)F^{\prime\prime}(t )-(F^{\prime}(t))^{2}).\]
The first of them, after using \(F(a,b;c;t)=(1-t)^{c-a-b}F(c-a,c-b;c;t)\), is easily seen to be equivalent to \(F(-n,1-\alpha-n;1;t)\geq F(-n,1-\alpha-n;2;t)\). The second inequality reduces to
\[(n+1)(n+\alpha)(1-t)^{-2}\geq\bigg{(}\frac{F^{\prime}}{F}\bigg{)}^{\prime}(t),\]
or, by using the same identity:
\[(1-t)^{-2}\geq\bigg{(}(1-t)^{-1}\frac{F(-n,1-\alpha-n;2;t)}{F(-n,1-\alpha-n;1;t)}\bigg{)}^{\prime}\] \[=(1-t)^{-2}\frac{F(-n,1-\alpha-n;2;t)}{F(-n,1-\alpha-n;1;t)}+(1-t )^{-1}\bigg{(}\frac{F(-n,1-\alpha-n;2;t)}{F(-n,1-\alpha-n;1;t)}\bigg{)}^{\prime}.\]
However, the main results of the papers [2, 5] give that \(\frac{F(-n,1-\alpha-n;2;t)}{F(-n,1-\alpha-n;1;t)}\) is monotonically decreasing and \(\leq 1\), which concludes the proof.
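The inequality of Lemma 2.2 can also be probed numerically. The following sketch (assuming the mpmath library) checks the equivalent formulation \(FF^{\prime}+tFF^{\prime\prime}-t(F^{\prime})^{2}\leq(n+1)(n+\alpha)(1-t)^{-2}F^{2}\) on a grid of points in \((0,1)\) for a few sample values of \(n\) and \(\alpha\).

```python
# Numerical spot-check of Lemma 2.2 (a sketch assuming mpmath):
# F(t) = 2F1(n+1, n+alpha; 1; t); we verify that
# F*F' + t*F*F'' - t*(F')**2 - (n+1)*(n+alpha)*F**2/(1-t)**2 <= 0 on (0,1).
import mpmath as mp

def worst_violation(n, alpha, points=100):
    F = lambda t: mp.hyp2f1(n + 1, n + alpha, 1, t)
    worst = mp.mpf('-inf')
    for j in range(1, points):
        t = mp.mpf(j) / points
        lhs = F(t)*mp.diff(F, t) + t*F(t)*mp.diff(F, t, 2) - t*mp.diff(F, t)**2
        rhs = (n + 1)*(n + alpha)*F(t)**2/(1 - t)**2
        worst = max(worst, lhs - rhs)
    return worst  # should be <= 0 (up to rounding near t = 0)

for n, alpha in [(1, 2), (2, 1.5), (3, 4)]:
    print(n, alpha, worst_violation(n, alpha))
```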
Now we turn to the proof of the main theorem. First we define
\[u_{n}(z)=\frac{|f^{(n)}(z)|^{2}}{g(|z|)}\]
with \(g(|z|)=\pi n!(\alpha)_{n}F(n+1,n+\alpha;1;|z|^{2})\) and also
\[I_{n}(s)=\int_{\{z:u_{n}(z)>u_{n}^{*}(s)\}}u_{n}(z)d\mu(z),\]
where \(u_{n}^{*}(s)\) is the unique positive real number \(t\) so that \(\mu(\{z:u_{n}(z)>t\})=s\).
Observe that
\[I_{n}(0)=\int_{\{z:u_{n}(z)>u_{n}^{*}(0)\}}u_{n}(z)d\mu(z)=0, \tag{2.1}\]
and
\[\begin{split} I_{n}(+\infty)&=\int_{\{z:u_{n}(z)>u_ {n}^{*}(+\infty)\}}u_{n}(z)d\mu(z)\\ &=\int_{\{z:u_{n}(z)>0\}}u_{n}(z)d\mu(z)=\int_{\mathbb{D}}u_{n}(z )d\mu(z)\leq 1\end{split}. \tag{2.2}\]
In a similar way as in [14] we can prove
**Lemma 2.3**.: _The function \(\varrho(t):=\mu(\{u_{n}>t\})\) is absolutely continuous on \((0,\max u_{n}],\) and_
\[-\varrho^{\prime}(t)=\int_{\{u_{n}=t\}}|\nabla u_{n}|^{-1}(1-|z|^{2})^{-2}\,d\mathcal{H}^{1}.\]
_In particular, the function \(u_{n}^{*}\) is, as the inverse of \(\varrho,\) locally absolutely continuous on \([0,+\infty),\) with_
\[-(u_{n}^{*})^{\prime}(s)=\left(\int_{\{u_{n}=u_{n}^{*}(s)\}}|\nabla u_{n}|^{-1}( 1-|z|^{2})^{-2}\,d\mathcal{H}^{1}\right)^{-1}.\]
Let us then denote the boundary of the superlevel set where \(u_{n}>u_{n}^{*}(s)\) as
\[A_{s}=\partial\{z:u_{n}(z)>u_{n}^{*}(s)\}.\]
We then imitate the corresponding proof in [14]. By Lemma 2.3,
\[I_{n}^{\prime}(s)=u_{n}^{*}(s),\quad I_{n}^{\prime\prime}(s)=-\left(\int_{A_{s }}|\nabla u_{n}|^{-1}(1-|z|^{2})^{-2}\,d\mathcal{H}^{1}\right)^{-1}.\]
Now we apply Cauchy-Schwarz inequality to get
\[\left(\int_{A_{s}}|\nabla u_{n}|^{-1}(1-|z|^{2})^{-2}\,d\mathcal{H}^{1}\right) \left(\int_{A_{s}}|\nabla u_{n}|\,d\mathcal{H}^{1}\right)\geq\left(\int_{A_{s }}(1-|z|^{2})^{-1}\,d\mathcal{H}^{1}\right)^{2},\]
letting
\[L(A_{s}):=\int_{A_{s}}(1-|z|^{2})^{-1}\,d\mathcal{H}^{1}\]
denote the length of \(A_{s}\) in the hyperbolic metric, we obtain the lower bound
\[I_{n}^{\prime\prime}(s)\geq-\left(\int_{A_{s}}|\nabla u_{n}|\,d\mathcal{H}^{1 }\right)L(A_{s})^{-2}. \tag{2.3}\]
To determine the first term in the product on the right-hand side of (2.3), we observe that
\[\Delta\log u_{n}(z) =\Delta\log\frac{|f^{(n)}(z)|^{2}}{g(|z|)}\] \[=\Delta\log|f^{(n)}(z)|^{2}-\Delta\log g(|z|)\] \[\geq-4(n+1)(n+\alpha)(1-|z|^{2})^{-2},\]
which then implies that, letting \(w(z)=\log u_{n}(z),\)
\[\frac{-1}{u_{n}^{*}(s)}\int_{A_{s}}|\nabla u_{n}|\,d\mathcal{H}^{1} =\int_{A_{s}}\nabla w\cdot\nu\,d\mathcal{H}^{1}\] \[=\int_{u_{n}>u_{n}^{*}(s)}\Delta w\,dz\] \[\geq-4(n+1)(n+\alpha)\int_{u_{n}>u_{n}^{*}(s)}(1-|z|^{2})^{-2}\,dz\] \[=-4(n+1)(n+\alpha)\mu(\{u_{n}>u_{n}^{*}(s)\})\] \[=-4(n+1)(n+\alpha)s.\]
Therefore,
\[I^{\prime\prime}(s)\geq-4(n+1)(n+\alpha)su_{n}^{*}(s)L(A_{s})^{-2}=-4(n+1)(n+ \alpha)sI^{\prime}(s)L(A_{s})^{-2}. \tag{2.4}\]
On the other hand, the isoperimetric inequality for the hyperbolic metric gives
\[L(A_{s})^{2}\geq 4\pi s+4s^{2},\]
so that, plugging into (2.4), we obtain
\[I^{\prime\prime}(s) \geq-4(n+1)(n+\alpha)sI^{\prime}(s)(4\pi s+4s^{2})^{-1}\] \[=-(n+1)(n+\alpha)I^{\prime}(s)(\pi+s)^{-1}. \tag{2.5}\]
Then we obtain
\[I_{n}^{\prime\prime}(s)+(n+1)(n+\alpha)I_{n}^{\prime}(s)(\pi+s)^{-1}\geq 0.\]
Thus
\[J(t)=I_{n}(T(t)),\]
is convex, where
\[T(t)=-\pi+\pi\left((1-t)\right)^{\frac{1}{1-(n+1)(n+\alpha)}}.\]
Since \(T(0)=0\) and \(T(1)=+\infty\), it follows that \(J(0)=0\) and \(J(1)\leq 1\). Thus \(J(s)\leq s\). In other words,
\[I_{n}(s)\leq\theta_{n}(s)\]
where
\[\theta_{n}(s)=1-(1+s/\pi)^{1-X},\]
and
\[X=(n+1)(n+\alpha).\]
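For completeness, let us verify the convexity of \(J\) used above. With this choice of \(T\) one has \(\pi+T(t)=\pi(1-t)^{\frac{1}{1-X}}\), \(T^{\prime}(t)=\frac{\pi}{X-1}(1-t)^{\frac{X}{1-X}}\) and \(T^{\prime\prime}(t)=\frac{\pi X}{(X-1)^{2}}(1-t)^{\frac{2X-1}{1-X}}\), so that \(T^{\prime\prime}(t)=\frac{X\,T^{\prime}(t)^{2}}{\pi+T(t)}\). Hence, using \(I_{n}^{\prime\prime}(s)\geq-\frac{X I_{n}^{\prime}(s)}{\pi+s}\) at \(s=T(t)\),

\[J^{\prime\prime}(t)=I_{n}^{\prime\prime}(T(t))T^{\prime}(t)^{2}+I_{n}^{\prime}(T(t))T^{\prime\prime}(t)\geq I_{n}^{\prime}(T(t))\Big{(}T^{\prime\prime}(t)-\frac{X\,T^{\prime}(t)^{2}}{\pi+T(t)}\Big{)}=0,\]

since \(I_{n}^{\prime}=u_{n}^{*}\geq 0\).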
In the case of the measure \(d\nu_{n,\alpha}\), we define \(u_{n}\) as above, but with \(g(|z|)=\frac{\pi\Gamma(\alpha+2n-1)}{\Gamma(\alpha)}(1-|z|^{2})^{-\alpha-2n}.\) Proceeding as earlier, we have
\[\Delta\log u_{n}(z)\geq-4(2n+\alpha)(1-|z|^{2})^{-2},\]
thus getting
\[\frac{-1}{u_{n}^{*}(s)}\int_{A_{s}}|\nabla u_{n}|\,d\mathcal{H}^{1}\geq-4(2n+ \alpha)s\]
and
\[I^{\prime\prime}(s)\geq-4(2n+\alpha)sI^{\prime}(s)L(A_{s})^{-2},\]
while the isoperimetric inequality for the hyperbolic metric implies
\[I^{\prime\prime}(s)\geq-(2n+\alpha)I^{\prime}(s)(\pi+s)^{-1}.\]
Now it is evident that our estimate holds with \(X=2n+\alpha\).
## 3. A proof of Lemma 2.1
Let us first prove Lemma 2.1 in the case of the measure \(d\nu_{n,\alpha}\), since this case is significantly easier.
Using Taylor expansion and Parseval's identity we get the following equivalent form:
\[\sum_{k=n}^{+\infty}\frac{k^{2}(k-1)^{2}\cdots(k-n+1)^{2}|a_{k}|^{ 2}\Gamma(\alpha)}{\Gamma(\alpha+2n-1)}\int_{0}^{1}2\rho^{2k-2n+1}(1-\rho^{2})^ {\alpha+2n-2}d\rho\] \[\leq\sum_{k=0}^{+\infty}\frac{k!\Gamma(\alpha)}{\Gamma(k+\alpha) }|a_{k}|^{2}.\]
This is easily seen to be equivalent with
\[c_{k,n}=\frac{k!}{(k-n)!}\frac{\Gamma(k+\alpha)}{\Gamma(k+n+\alpha)}\leq 1,\]
which follows from the observations that \(\frac{c_{k+1,n}}{c_{k,n}}=\frac{(k+1)(k+\alpha)}{(k-n+1)(k+n+\alpha)}\geq 1\) and \(\lim_{k\to+\infty}c_{k,n}=1\).
The case of the measure \(d\mu_{n,\alpha}\) is much harder and we will prove it only for \(n\leq 3\).
Similarly as above, we reduce it to the inequality:
\[\sum_{k=n}^{+\infty}\frac{k^{2}(k-1)^{2}\cdots(k-n+1)^{2}|a_{k}|^ {2}}{n!(\alpha)_{n}}\int_{0}^{1}\frac{2\rho^{2k-2n+1}(1-\rho^{2})^{\alpha+2n-2 }}{F(1-\alpha-n,-n;1;\rho^{2})}d\rho\] \[=\sum_{k=n}^{+\infty}\frac{k^{2}(k-1)^{2}\cdots(k-n+1)^{2}|a_{k}| ^{2}}{n!(\alpha)_{n}}\int_{0}^{1}\frac{t^{k-n}(1-t)^{\alpha+2n-2}}{F(1-\alpha -n,-n;1;t)}dt\] \[\leq\frac{1}{\alpha+2n-1}\sum_{k=0}^{+\infty}\frac{k!\Gamma( \alpha)}{\Gamma(k+\alpha)}|a_{k}|^{2}.\]
Hence, it is sufficient (and necessary) to prove that
\[\frac{k^{2}(k-1)^{2}\cdots(k-n+1)^{2}}{n!(\alpha)_{n}}\int_{0}^{1 }\frac{t^{k-n}(1-t)^{\alpha+2n-2}}{F(1-\alpha-n,-n;1;t)}dt\] \[\leq\frac{1}{\alpha+2n-1}\frac{k!\Gamma(\alpha)}{\Gamma(k+\alpha )}\quad\text{for}\quad k\geq n.\]
Denoting \(P(t)=F(1-\alpha-n,-n;1;t)\) and reformulating our inequality, we arrive at
\[\frac{1}{B(k-n+1,\alpha+2n-1)}\int_{0}^{1}t^{k-n}(1-t)^{\alpha+2 n-2}P(t)^{-1}dt\] \[\leq\frac{n!\Gamma(k+\alpha+n)\Gamma(\alpha+n)}{\Gamma(\alpha+2n) \Gamma(k+\alpha)k(k-1)\cdots(k-n+1)}\]
We will rewrite \(\frac{1}{P(t)}\) using partial fractions as
\[\frac{1}{P(t)}=\frac{1}{(1+\beta_{1}t)(1+\beta_{2}t)\cdots(1+\beta_{n}t)}=\frac{ A_{1}}{1+\beta_{1}t}+\frac{A_{2}}{1+\beta_{2}t}+\cdots+\frac{A_{n}}{1+\beta_{n}t},\]
for some \(A_{j}\in\mathbb{R},j=\overline{1,n},\) and \(\beta_{j}\in\mathbb{R}^{+},j=\overline{1,n}.\) This is possible since, by the formula from [1], \(P(t)=(1-t)^{n}P_{n}^{(0,\alpha-1)}(\frac{1+t}{1-t}),\) where \(P_{n}^{(0,\alpha-1)}(x)\) are the corresponding Jacobi polynomials, and each \(P_{n}^{(0,\alpha-1)}(x)\) has exactly \(n\) simple real zeros (this is a general theorem on orthogonal polynomials). Also, \(P(t)\) has only negative zeros because all its coefficients are positive. From
\[1=P(t)\sum_{j=1}^{n}\frac{A_{j}}{1+\beta_{j}t}=\sum_{j=1}^{n}A_{j}(1+\beta_{1} t)\cdots(1+\beta_{j-1}t)(1+\beta_{j+1}t)\cdots(1+\beta_{n}t)\]
taking the limit when \(t\to-\frac{1}{\beta_{j}},\) we find that
\[A_{j}=\beta_{j}^{n-1}\prod_{i\neq j}(\beta_{j}-\beta_{i})^{-1}.\]
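As a sanity check, the partial-fraction decomposition and the formula for \(A_{j}\) can be verified numerically for small \(n\). The sketch below is illustrative only; the values of \(n\), \(\alpha\) and the test point are arbitrary. It builds \(P(t)\) from its (terminating) hypergeometric series, recovers the \(\beta_{j}\) from its roots, and compares \(1/P(t)\) with \(\sum_{j}A_{j}/(1+\beta_{j}t)\).

```python
import numpy as np
from math import factorial

def P_coeffs(n, alpha):
    """Ascending coefficients of P(t) = F(1-alpha-n, -n; 1; t), a degree-n polynomial."""
    c = []
    for k in range(n + 1):
        poch_a = np.prod([1.0 - alpha - n + i for i in range(k)])  # (1-alpha-n)_k
        poch_b = np.prod([-n + i for i in range(k)])               # (-n)_k
        c.append(poch_a * poch_b / factorial(k) ** 2)
    return np.array(c)

n, alpha = 3, 1.7
c = P_coeffs(n, alpha)
roots = np.roots(c[::-1])            # zeros of P; expected to be real and negative
beta = -1.0 / roots                  # P(t) = prod_j (1 + beta_j t), since P(0) = 1
A = np.array([beta[j] ** (n - 1) / np.prod([beta[j] - beta[i] for i in range(n) if i != j])
              for j in range(n)])
t0 = 0.37                            # arbitrary test point in (0, 1)
lhs = 1.0 / np.polyval(c[::-1], t0)
rhs = np.sum(A / (1.0 + beta * t0))
print(abs(lhs - rhs))                # should be of the order of machine precision
```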
The expansion into partial fractions reduces the problem to the calculation of the integral
\[\int_{0}^{1}t^{k-n}(1-t)^{\alpha+2n-2}(1+\gamma t)^{-1}dt\]
which after the substitution \(z=\frac{(\gamma+1)t}{1+\gamma t}\) becomes
\[(\gamma+1)^{-k+n-1}\int_{0}^{1}z^{k-n}(1-z)^{\alpha+2n-2}\big{(}1- \frac{\gamma}{\gamma+1}z\big{)}^{-k-\alpha-n+1}dz\] \[=\frac{B(k-n+1,\alpha+2n-1)}{(\gamma+1)^{k-n+1}}F\big{(}k-n+1,n+ k+\alpha+1;k+\alpha+n;\frac{\gamma}{\gamma+1}\big{)},\]
by the Euler integral representation of the hypergeometric function. Hence, we have:
\[\frac{\Gamma(k-n+1)\Gamma(\alpha+2n-1)}{\Gamma(k+n+\alpha)(\gamma +1)^{k-n+1}}F\big{(}k-n+1,n+k+\alpha+1;k+\alpha+n;\frac{\gamma}{\gamma+1}\big{)}\] \[=\frac{(n+k+\alpha+1)\Gamma(k-n+1)\Gamma(\alpha+2n-1)}{\Gamma(k+ n+\alpha)(\gamma+1)^{2-2n-\alpha}}\int_{0}^{1}y^{k+\alpha+n-2}\big{(}1+\gamma y \big{)}^{-\alpha-2n+1}dy,\]
where the last equality follows from a simple change of variable.
Now, the main inequality can be rewritten as
\[\sum_{j=1}^{n}A_{j}(1+\beta_{j})^{\alpha+2n-2}\int_{0}^{1}\frac{y ^{k+\alpha+n-2}}{\big{(}1+\beta_{j}y\big{)}^{\alpha+2n-1}}dy\] \[\leq\frac{n!\Gamma(n+\alpha)}{\Gamma(2n+\alpha)}\frac{(k+\alpha) _{n-1}}{(k-n+1)_{n}}.\]
Denote
\[\varphi(y)=\sum_{j=1}^{n}B_{j}y^{k-j}\]
and
\[\psi(y)=\frac{y^{k+\alpha+n-2}}{\varphi(y)}\sum_{j=1}^{n}\frac{A_{j}(1+\beta_{j}) ^{\alpha+2n-2}}{\big{(}1+\beta_{j}y\big{)}^{\alpha+2n-1}},\]
where \(B_{j}\) is chosen so that
\[\int_{0}^{1}\varphi(y)dy=\sum_{j=1}^{n}\frac{B_{j}}{k-j+1}=\frac{(k+\alpha)_{n -1}}{(k-n+1)_{n}}.\]
From
\[(x+\alpha)_{n-1}=\sum_{j=1}^{n}\frac{B_{j}(x-n+1)_{n}}{x-j+1},\]
by letting \(x\to j-1\), we get
\[(j+\alpha-1)_{n-1} =B_{j}(j-1)(j-2)\cdots 1\cdots(-1)(-2)\cdots(j-n)\] \[=(-1)^{n-j}(j-1)!(n-j)!B_{j},\]
i.e.
\[B_{j}=(-1)^{n-j}\frac{(j+\alpha-1)_{n-1}}{(j-1)!(n-j)!}.\]
Our inequality reads as
\[\int_{0}^{1}\varphi(y)\psi(y)dy\leq\frac{n!\Gamma(n+\alpha)}{\Gamma(2n+\alpha )}\frac{(k+\alpha)_{n-1}}{(k-n+1)_{n}},\]
therefore, it will be enough to prove that
\[\psi(y)\leq\psi(1)=\frac{n!\Gamma(n+\alpha)}{\Gamma(2n+\alpha)}.\]
After substitution \(y=1-x\), this becomes
\[\sum_{j=1}^{n}\frac{A_{j}}{1+\beta_{j}}\bigg{(}1-\frac{\beta_{j}}{\beta_{j}+1} x\bigg{)}^{-\alpha-2n+1}\leq\quad\frac{n!\Gamma(n+\alpha)}{\Gamma(2n+\alpha)} \sum_{j=1}^{n}B_{j}(1-x)^{-\alpha-n-j+2}.\]
Series expansion in \(x\) reduces the problem to proving
\[\binom{\alpha+2n+l-2}{l}\sum_{j=1}^{n}\frac{A_{j}}{1+\beta_{j}}\bigg{(}\frac{ \beta_{j}}{1+\beta_{j}}\bigg{)}^{l}\leq\frac{n!\Gamma(n+\alpha)}{\Gamma(2n+ \alpha)}\sum_{j=1}^{n}B_{j}\binom{\alpha+n+j+l-3}{l}.\]
The sum on the right-hand side can be calculated:
\[\sum_{j=1}^{n}B_{j}\binom{\alpha+n+j+l-3}{l}\] \[=\sum_{j=0}^{n-1}(-1)^{n-j-1}\frac{(j+\alpha)_{n-1}}{j!(n-j-1)!} \binom{\alpha+n+j+l-2}{l}\] \[=\frac{\Gamma(\alpha+l+n-1)}{(n-1)!l!\Gamma(\alpha)}\sum_{j=0}^{n- 1}(-1)^{n-j-1}\binom{n-1}{j}\frac{(\alpha+n+l-1)_{j}}{(\alpha)_{j}}\] \[=\frac{(n+l-1)!\Gamma(\alpha+l+n-1)}{(l!)^{2}(n-1)!\Gamma(\alpha+ n-1)},\]
using the following lemma:
**Lemma 3.1**.: \[\sum_{s=0}^{m}\frac{(-m)_{s}(\beta+l+m)_{s}}{(\beta)_{s}s!}=(-1)^{m}\frac{(l+1 )_{m}}{(\beta)_{m}}.\]
Proof.: Note that both sides of the identity are polynomials of degree \(m\) in \(l\). Hence, it is enough to prove it for \(m+1\) different values of \(l\). Taking \(l=-t\) for \(t>m\), we get that the LHS, by the Gauss summation theorem, equals \(\frac{\Gamma(t)}{\Gamma(t-m)(\beta)_{m}},\) while the RHS is \((-1)^{m}\frac{(1-t)_{m}}{(\beta)_{m}}.\) Since these are evidently equal, we conclude the proof.
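For reference, Lemma 3.1 is also easy to confirm numerically on a small grid of parameters; the following sketch is an illustration only (the grid values are arbitrary) and is not part of the proof.

```python
from math import gamma

def poch(x, s):
    """Pochhammer symbol (x)_s = x(x+1)...(x+s-1)."""
    out = 1.0
    for i in range(s):
        out *= x + i
    return out

def lhs(m, beta, l):
    return sum(poch(-m, s) * poch(beta + l + m, s) / (poch(beta, s) * gamma(s + 1))
               for s in range(m + 1))

def rhs(m, beta, l):
    return (-1) ** m * poch(l + 1, m) / poch(beta, m)

for m in range(6):
    for beta in (0.5, 2.0, 3.3):
        for l in (-0.7, 0.0, 1.5, 4.0):
            assert abs(lhs(m, beta, l) - rhs(m, beta, l)) < 1e-7
print("Lemma 3.1 holds on the sampled grid")
```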
Therefore, we need to prove
\[\sum_{j=1}^{n}A_{j}(1-t_{j})(1-t_{j}t)^{-\alpha-2n+1}\leq F(n,n+\alpha+n-1;1;t), \tag{3.1}\]
where \(t_{j}=\frac{\beta_{j}}{\beta_{j}+1}.\) One possible approach would be to use properties of Hadamard products of series.
However, we will compare the appropriate coefficients on both sides of the previous inequality:
\[\sum_{j=1}^{n}A_{j}(1-t_{j})t_{j}^{l}\leq\frac{n(n+\alpha-1)(n+l-1)!\Gamma( \alpha+l+n-1)}{l!(\alpha+2n-1)\Gamma(\alpha+2n+l-1)}.\]
We will now express the left-hand side in another way. In fact, we calculate the \(l\)-th derivative of \(\frac{1}{P(t)}\) using two approaches. First, from
\[\frac{1}{P(t)}=\sum_{j=1}^{n}\frac{A_{j}}{1+\beta_{j}t}\]
we find
\[\bigg{(}\frac{1}{P(t)}\bigg{)}^{(l)}=(-1)^{l}l!\sum_{j=1}^{n}\frac{A_{j}}{1+\beta_ {j}t}\bigg{(}\frac{\beta_{j}}{1+\beta_{j}t}\bigg{)}^{l}\]
and
\[\bigg{(}\frac{1}{P(t)}\bigg{)}^{(l)}(1)=(-1)^{l}l!\sum_{j=1}^{n}\frac{A_{j}}{1+ \beta_{j}}\bigg{(}\frac{\beta_{j}}{1+\beta_{j}}\bigg{)}^{l}.\]
From this formula, we see the following equivalent form of our inequality:
\[(-1)^{l}\bigg{(}\frac{1}{P(t)}\bigg{)}^{(l)}(1)\leq\frac{n(n+\alpha-1)(n+l-1)! \Gamma(\alpha+l+n-1)}{(\alpha+2n-1)\Gamma(\alpha+2n+l-1)}.\]
From Leibnitz's formula we get:
\[\bigg{(}\frac{1}{P(t)}\bigg{)}^{(l)}=\bigg{(}\frac{1}{(1+\beta_{1}t)(1+\beta_{2}t)\cdots(1+\beta_{n}t)}\bigg{)}^{(l)}\] \[=\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n}=l;\\ j_{1},j_{2},\ldots,j_{n}\geq 0\end{subarray}}\binom{l}{j_{1};j_{2};\ldots;j_{n}}\bigg{(}\frac{1}{1+\beta_{1}t}\bigg{)}^{(j_{1})}\bigg{(}\frac{1}{1+\beta_{2}t}\bigg{)}^{(j_{2})}\cdots\bigg{(}\frac{1}{1+\beta_{n}t}\bigg{)}^{(j_{n})}\] \[=\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n}=l;\\ j_{1},j_{2},\ldots,j_{n}\geq 0\end{subarray}}\binom{l}{j_{1};j_{2};\ldots;j_{n}}\frac{(-1)^{j_{1}}j_{1}!\beta_{1}^{j_{1}}}{(1+\beta_{1}t)^{j_{1}+1}}\frac{(-1)^{j_{2}}j_{2}!\beta_{2}^{j_{2}}}{(1+\beta_{2}t)^{j_{2}+1}}\cdots\frac{(-1)^{j_{n}}j_{n}!\beta_{n}^{j_{n}}}{(1+\beta_{n}t)^{j_{n}+1}}\] \[=\frac{(-1)^{l}l!}{P(t)}\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n}=l;\\ j_{1},j_{2},\ldots,j_{n}\geq 0\end{subarray}}\frac{\beta_{1}^{j_{1}}\beta_{2}^{j_{2}}\cdots\beta_{n}^{j_{n}}}{(1+\beta_{1}t)^{j_{1}}(1+\beta_{2}t)^{j_{2}}\cdots(1+\beta_{n}t)^{j_{n}}},\]
which gives
\[\sum_{j=1}^{n}A_{j}(1-t_{j})t_{j}^{l}=\frac{1}{P(1)}\sum_{ \begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n}=l;\\ j_{1},j_{2},\ldots,j_{n}\geq 0\end{subarray}}t_{1}^{j_{1}}t_{2}^{j_{2}} \cdots t_{n}^{j_{n}}.\]
Finally, using \(P(1)=\frac{n!\Gamma(n+\alpha)}{\Gamma(2n+\alpha)}\), we reduce the inequality to showing
\[\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n}=l;\\ j_{1},j_{2},\ldots,j_{n}\geq 0\end{subarray}}t_{1}^{j_{1}}t_{2}^{j_{2}} \cdots t_{n}^{j_{n}}\leq\frac{(n+l-1)!(\alpha+n-1)_{l}}{l!(n-1)!(\alpha+2n-1) _{l}}. \tag{3.2}\]
If we denote the LHS of the last inequality by \(S_{l}\), then it takes the form
\[S_{l}\leq D_{l}=\frac{(n+l-1)!(\alpha+n-1)_{l}}{l!(n-1)!(\alpha+2n-1)_{l}}.\]
Using the Pfaff transformation
\[P(t)=F(1-\alpha-n,-n;1;t)=(1-t)^{n}F(-n,n+\alpha;1;\frac{t}{t-1}),\]
we deduce that \(t_{j}=\frac{\beta_{j}}{\beta_{j}+1},1\leq j\leq n,\) are exactly the zeros of the polynomial \(F(-n,n+\alpha;1;1-t).\) We now express this function as a polynomial in \(t\) and find a closed formula for the elementary symmetric polynomials of its zeros using Vieta's formulas. First, we write it out by definition and change the order of summation:
\[\sum_{k=0}^{n}\frac{(-n)_{k}(n+\alpha)_{k}}{(k!)^{2}}(1-t)^{k}\] \[=\sum_{k=0}^{n}\frac{(-n)_{k}(n+\alpha)_{k}}{(k!)^{2}}\sum_{j=0}^ {k}\binom{k}{j}(-1)^{j}t^{j}\] \[=\sum_{j=0}^{n}(-1)^{j}t^{j}\sum_{k=j}^{n}\frac{(-n)_{k}(n+\alpha )_{k}}{(k!)^{2}}\binom{k}{j}.\]
The inner sum can be further rewritten as
\[\sum_{k=j}^{n}\frac{(-n)_{k}(n+\alpha)_{k}}{(k!)^{2}}\binom{k}{j}\] \[=\frac{(-n)_{j}(n+\alpha)_{j}}{(j!)^{3}}\sum_{k=j}^{n}\frac{(-n+j )_{k-j}(n+\alpha+j)_{k-j}k!}{(j+1)_{k-j}^{2}(k-j)!}\] \[=\frac{(-n)_{j}(n+\alpha)_{j}}{(j!)^{2}}\sum_{s=0}^{n-j}\frac{(- n+j)_{s}(n+\alpha+j)_{s}}{(j+1)_{s}s!}.\]
Denoting \(m=n-j,\) we have to find the sum \(\sum_{s=0}^{m}\frac{(-m)_{s}(2j+m+\alpha)_{s}}{(j+1)_{s}s!}\). But, by Lemma 3.1 with \(\beta=j+1\) and \(l=j+\alpha-1,\) we see that it is equal to \((-1)^{m}\frac{(j+\alpha)_{m}}{(1+j)_{m}}.\) From this we conclude that
\[\sum_{k=0}^{n}\frac{(-n)_{k}(n+\alpha)_{k}}{(k!)^{2}}(1-t)^{k}=(-1)^{n}\frac{ (\alpha)_{n}}{n!}\sum_{j=0}^{n}\frac{(\alpha+n)_{j}}{(\alpha)_{j}}(-1)^{j} \binom{n}{j}t^{j}.\]
Hence, \(t_{j}=\frac{\beta_{j}}{\beta_{j}+1}\) are the zeros of the polynomial \(\sum_{j=0}^{n}\frac{(\alpha+n)_{j}}{(\alpha)_{j}}(-1)^{j}\binom{n}{j}t^{j}\), and by Vieta's formulas:
\[\sum t_{i_{1}}t_{i_{2}}\cdots t_{i_{k}}=\frac{(\alpha+n-k)_{k}}{(\alpha+2n-k )_{k}}\binom{n}{k}. \tag{3.3}\]
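Formulas (3.2)-(3.3) are easy to probe numerically before attempting a general proof. The sketch below is illustrative only; the choice \(n=3\), \(\alpha=1.3\) is arbitrary. It computes the \(t_{j}\) as roots of the polynomial above, checks the Vieta closed form (3.3), and verifies \(S_{l}\leq D_{l}\) for the first few \(l\).

```python
import numpy as np
from math import factorial
from itertools import combinations, product

def poch(x, s):
    """Pochhammer symbol (x)_s."""
    out = 1.0
    for i in range(s):
        out *= x + i
    return out

n, alpha = 3, 1.3
# ascending coefficients of sum_j (alpha+n)_j/(alpha)_j (-1)^j C(n,j) t^j
coeff = [poch(alpha + n, j) / poch(alpha, j) * (-1) ** j * factorial(n)
         / (factorial(j) * factorial(n - j)) for j in range(n + 1)]
t = np.sort(np.roots(coeff[::-1]).real)   # the zeros t_1,...,t_n

# elementary symmetric polynomials of the t_j versus the closed form (3.3)
for k in range(1, n + 1):
    e_k = sum(float(np.prod(sel)) for sel in combinations(t, k))
    closed = poch(alpha + n - k, k) / poch(alpha + 2 * n - k, k) \
             * factorial(n) / (factorial(k) * factorial(n - k))
    assert abs(e_k - closed) < 1e-10

# the reduced inequality S_l <= D_l for the first few l
for l in range(12):
    S_l = sum(float(np.prod(t ** np.array(j)))
              for j in product(range(l + 1), repeat=n) if sum(j) == l)
    D_l = factorial(n + l - 1) / (factorial(l) * factorial(n - 1)) \
          * poch(alpha + n - 1, l) / poch(alpha + 2 * n - 1, l)
    assert S_l <= D_l + 1e-12
print("(3.3) and S_l <= D_l verified for n = 3, alpha = 1.3")
```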
For \(n=1,\) this reduces to the Bernoulli inequality \(\left(\frac{\alpha}{\alpha+1}\right)^{l}\leq\frac{\alpha}{\alpha+l}.\) For \(n=2,\) after introducing \(t_{1}=\frac{\beta_{1}}{\beta_{1}+1}=\frac{\alpha+1-\sqrt{\frac{2(\alpha+1)}{\alpha+2}}}{\alpha+3}\) and \(t_{2}=\frac{\beta_{2}}{\beta_{2}+1}=\frac{\alpha+1+\sqrt{\frac{2(\alpha+1)}{\alpha+2}}}{\alpha+3},\) we arrive at:
\[\frac{t_{2}^{l+1}-t_{1}^{l+1}}{t_{2}-t_{1}}\leq\frac{(l+1)(\alpha+1)(\alpha+2 )}{(\alpha+l+1)(\alpha+l+2)}.\]
For \(l=0\) and \(l=1\) we have equality. We proceed by induction. Supposing that the inequality holds for \(l\), we have, for \(l+1\):
\[\frac{t_{2}^{l+2}-t_{1}^{l+2}}{t_{2}-t_{1}}=\frac{t_{2}(t_{2}^{l+1} -t_{1}^{l+1})}{t_{2}-t_{1}}+t_{1}^{l+1}\] \[\leq\frac{(l+1)(\alpha+1)(\alpha+2)}{(\alpha+l+1)(\alpha+l+2)}t_{ 2}+t_{1}^{l+1}\] \[\leq\frac{(l+2)(\alpha+1)(\alpha+2)}{(\alpha+l+2)(\alpha+l+3)},\]
where we prove the last inequality again by induction. It is an equality for \(l=0\), while
\[\frac{t_{1}^{l+1}}{(\alpha+1)(\alpha+2)}\leq\frac{1}{\alpha+l+2}\bigg{(}\frac {l+2}{\alpha+l+3}-\frac{t_{2}(l+1)}{\alpha+l+1}\bigg{)}\]
gives
\[\frac{t_{1}^{l+2}}{(\alpha+1)(\alpha+2)} \leq\frac{t_{1}}{\alpha+l+2}\bigg{(}\frac{l+2}{\alpha+l+3}-\frac{ t_{2}(l+1)}{\alpha+l+1}\bigg{)}\] \[\leq\frac{1}{\alpha+l+3}\bigg{(}\frac{l+3}{\alpha+l+4}-\frac{t_{2 }(l+2)}{\alpha+l+2}\bigg{)},\]
since the last inequality is equivalent to
\[(t_{1}+t_{2})(l+2)(\alpha+l+1)(\alpha+l+4)-t_{1}t_{2}(l+1)(\alpha+l+3)_{2}-(l +3)(\alpha+l+1)_{2}\leq 0.\]
This can be easily checked using (3.3).
Similarly, for \(n=3\) we use the identity
\[S_{l+1}(t_{1},t_{2},t_{3})=t_{1}S_{l}(t_{1},t_{2},t_{3})+t_{2}S_{l}(t_{2},t_{3 })+t_{3}^{l+1}.\]
The inequality holds for \(l=0\). If it is true for \(l\), then for \(l+1\) we have to prove:
\[t_{1}S_{l}(t_{1},t_{2},t_{3})+t_{2}S_{l}(t_{2},t_{3})+t_{3}^{l+1}\leq t_{1}D_{ l}+t_{2}S_{l}(t_{2},t_{3})+t_{3}^{l+1}\leq D_{l+1},\]
which is easily seen to be true for \(l=0\), since it reads as \(t_{1}D_{0}+t_{2}+t_{3}=t_{1}+t_{2}+t_{3}=S_{1}=D_{1}.\) Now, to prove the last inequality, \(t_{2}S_{l}(t_{2},t_{3})+t_{3}^{l+1}\leq D_{l+1}-t_{1}D_{l}\), we again use induction:
\[t_{2}S_{l+1}(t_{2},t_{3})+t_{3}^{l+2}=t_{2}(t_{2}S_{l}(t_{2},t_{3})+t_{3}^{l+ 1})+t_{3}^{l+2}\]
\[\leq t_{2}(D_{l+1}-t_{1}D_{l})+t_{3}^{l+2}\leq D_{l+2}-t_{1}D_{l+1},\]
or, equivalently:
\[D_{l+2}-(t_{1}+t_{2})D_{l+1}+t_{1}t_{2}D_{l}\geq t_{3}^{l+2}.\]
For \(l=0\) this reads as \(D_{2}-(t_{1}+t_{2})D_{1}+t_{1}t_{2}D_{0}\geq t_{3}^{2}\), which, by using \(S_{0}=D_{0}\) and \(S_{1}=D_{1}\), reduces to \(D_{2}\geq(t_{1}+t_{2}+t_{3})^{2}-(t_{1}t_{2}+t_{2}t_{3}+t_{3}t_{1}).\) This can be easily checked using (3.3). Finally, using induction again we see that our inequality follows from:
\[D_{l+3}-(t_{1}+t_{2}+t_{3})D_{l+2}+(t_{1}t_{2}+t_{2}t_{3}+t_{3}t_{1})D_{l+1}-t_ {1}t_{2}t_{3}D_{l}\geq 0.\]
Our approach becomes much more complicated and requires many modifications for \(n\geq 4\); hence we have not been able to prove the theorem in full generality in this case.
## 4. Consequences for the Fock space case
Here we will show that an easy limiting argument with the measure \(d\mu_{n,\alpha}\) gives the appropriate Fock space inequality from [6] for each \(n\geq 1\). Indeed, from
\[\frac{k^{2}(k-1)^{2}\cdots(k-n+1)^{2}}{n!(\alpha)_{n}}\int_{0}^{1 }\frac{t^{k-n}(1-t)^{\alpha+2n-2}}{F(1-\alpha-n,-n;1;t)}dt\] \[\leq\frac{1}{\alpha+2n-1}\frac{k!\Gamma(\alpha)}{\Gamma(k+\alpha )}\quad\text{for}\quad k\geq n,\]
after the substitution \(y=Rt\) with \(\alpha=R\), we get:
\[\int_{0}^{R}\frac{y^{k-n}(1-y/R)^{R+2n-2}}{F(1-R-n,-n;1;y/R)}dy\] \[\leq\frac{n!((k-n)!)^{2}}{k!}\frac{R^{k+1-n}(R)_{n}\Gamma(R)}{(R+ 2n-1)\Gamma(k+R)}\quad\text{for}\quad k\geq n.\]
Now,
\[F(1-n-R,-n;1;y/R)=\sum_{k=0}^{n}\frac{(-n)_{k}(1-n-R)_{k}}{(k!)^{2}}\frac{y^{k }}{R^{k}},\]
with \(R\to+\infty\) tends to
\[P_{n}(y)=\sum_{k=0}^{n}\frac{(-n)_{k}(-1)^{k}}{(k!)^{2}}y^{k}=L_{n}(-y),\]
where \(L_{n}\) is the Laguerre polynomial of degree \(n\). Hence, taking the limit as \(R\to+\infty\) in the last inequality, we get:
\[\int_{0}^{+\infty}\frac{y^{k-n}e^{-y}}{L_{n}(-y)}dy\] \[\leq\frac{n!((k-n)!)^{2}}{k!}\quad\text{for}\quad k\geq n.\]
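The limiting inequality is straightforward to test numerically. The sketch below is a spot check only, for small \(n\) and \(k\) (it is not a proof); it evaluates the integral with `scipy` and compares it with the right-hand side.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from numpy.polynomial import laguerre

def L_at_minus_y(n, y):
    """L_n(-y): the degree-n Laguerre polynomial at -y (positive for y >= 0)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return laguerre.lagval(-y, c)

for n in (1, 2, 3, 4):
    for k in range(n, n + 8):
        integral = quad(lambda y: y ** (k - n) * np.exp(-y) / L_at_minus_y(n, y),
                        0.0, np.inf)[0]
        bound = factorial(n) * factorial(k - n) ** 2 / factorial(k)
        assert integral <= bound * (1.0 + 1e-8)
print("Fock-space inequality verified numerically for the sampled n and k")
```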
|
2301.05716 | In-situ or accreted? Using deep learning to infer the origin of
extragalactic globular clusters from observables | Globular clusters (GCs) are powerful tracers of the galaxy assembly process,
and have already been used to obtain a detailed picture of the progenitors of
the Milky Way. Using the E-MOSAICS cosmological simulation of a (34.4 Mpc)$^3$
volume that follows the formation and co-evolution of galaxies and their star
cluster populations, we develop a method to link the origin of GCs to their
observable properties. We capture this complex link using a supervised deep
learning algorithm trained on the simulations, and predict the origin of
individual GCs (whether they formed in the main progenitor or were accreted
from satellites) based solely on extragalactic observables. An artificial
neural network classifier trained on $\sim50,000$ GCs hosted by $\sim 700$
simulated galaxies successfully predicts the origin of GCs in the test set with
a mean accuracy of $89$ per cent for the objects with [Fe/H]<-0.5 that have
unambiguous classifications. The network relies mostly on the alpha-element
abundances, metallicities, projected positions, and projected angular momenta
of the clusters to predict their origin. A real-world test using the known
progenitor associations of the Milky Way GCs achieves up to $90$ per cent
accuracy, and successfully identifies as accreted most of the GCs in the inner
Galaxy associated to the Kraken progenitor, as well as all the Gaia-Enceladus
GCs. We demonstrate that the model is robust to observational uncertainties,
and develop a method to predict the classification accuracy across observed
galaxies. The classifier can be optimized for available observables (e.g. to
improve the accuracy by including GC ages), making it a valuable tool to
reconstruct the assembly histories of galaxies in upcoming wide-field surveys. | Sebastian Trujillo-Gomez, J. M. Diederik Kruijssen, Joel Pfeffer, Marta Reina-Campos, Robert A. Crain, Nate Bastian, Ivan Cabrera-Ziri | 2023-01-13T19:00:01Z | http://arxiv.org/abs/2301.05716v1 | In-situ or accreted? Using deep learning to infer the origin of extragalactic globular clusters from observables
###### Abstract
Globular clusters (GCs) are powerful tracers of the galaxy assembly process, and have already been used to obtain a detailed picture of the progenitors of the Milky Way. Using the E-MOSAICS cosmological simulation of a (34.4 Mpc)\({}^{3}\) volume that follows the formation and co-evolution of galaxies and their star cluster populations, we develop a method to link the origin of GCs to their observable properties. We capture this complex link using a supervised deep learning algorithm trained on the simulations, and predict the origin of individual GCs (whether they formed in the main progenitor or were accreted from satellites) based solely on _extragalactic_ observables. An artificial neural network classifier trained on \(\sim 50,000\) GCs hosted by \(\sim 700\) simulated galaxies successfully predicts the origin of GCs in the test set with a mean accuracy of 89 per cent for the objects with \(\rm[Fe/H]<-0.5\) that have unambiguous classifications. The network relies mostly on the alpha-element abundances, metallicities, projected positions, and projected angular momenta of the clusters to predict their origin. A real-world test using the known progenitor associations of the Milky Way GCs achieves up to 90 per cent accuracy, and successfully identifies as accreted most of the GCs in the inner Galaxy associated to the _Kraken_ progenitor, as well as all the _Gaia-Enceladus_ GCs. We demonstrate that the model is robust to observational uncertainties, and develop a method to predict the classification accuracy across observed galaxies. The classifier can be optimized for available observables (e.g. to improve the accuracy by including GC ages), making it a valuable tool to reconstruct the assembly histories of galaxies in upcoming wide-field surveys.
keywords: Galaxies - galaxies: evolution - galaxies: formation - galaxies: structure - galaxies: haloes - galaxies: star clusters: general
## 1 Introduction
One of the major goals of astrophysics is understanding the physical processes that gave rise to galaxies out of the tiny density perturbations that emerged from the epoch of recombination. The advent of modern cosmology has provided precise knowledge of these initial conditions in the context of the highly successful \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) cosmological paradigm. The \(\Lambda\)CDM model makes detailed predictions for the formation and evolution of dark matter (DM) haloes, which are the sites for baryonic material to condense into the galaxies we observe today (Blumenthal et al., 1984; Navarro et al., 1995; Springel et al., 2005). Hydrodynamical cosmological simulations in the \(\Lambda\)CDM framework predict that galaxies assemble through a combination of in-situ star formation in cold gas that is continuously accreted from cosmic filaments, and continuous infall of smaller satellite galaxies along with their DM, gas, and stars (Naab & Ostriker, 2017). Sophisticated dynamical models of external galaxies using integral field spectroscopic data are now able to recover the properties of kinematically and chemically distinct 'cold' and 'hot' populations that trace the in-situ and accreted stellar components, respectively (e.g. Zhu et al., 2020; Poci et al., 2021).
However, reconstructing the detailed merger history of a galaxy from observations remains an extremely challenging task.
The first chemo-dynamical studies of galaxies date back to the 1960s, when observations of the stars and globular cluster (GC) populations of the Milky Way (MW) showed that the kinematics of stars contained important clues to the origin of the various components. This pioneering work by Eggen et al. (1962) found that while young stars follow nearly circular orbits, older stars have eccentric radial orbits with lower angular momentum and higher vertical velocity dispersion, all indicative of their accreted origin. Later studies of the ages and metal abundances of GCs in the Milky Way showed that while inner GCs follow a tight age-metallicity relation, the outer GCs have a broad range of ages at fixed metallicity (Searle & Zinn, 1978). This simple observation confirmed the scenario where the MW disc stars and GCs formed early, while the halo formed slowly from material that continued to accrete long after the disc was in place. Several decades later, the Sloan Digital Sky Survey (SDSS; York et al., 2000) found the first evidence of a past accretion event in the Milky Way, the Sagittarius stream, a remnant of the accretion of the _Sagittarius_ dwarf galaxy (Ibata et al., 1994).
The _Gaia_ survey (Gaia Collaboration et al., 2018) revolutionized the field of Galactic archaeology by precisely measuring the 3D positions and motions of millions of stars in the inner halo, enabling the search for the progenitor galaxies of the Milky Way (MW) using the phase-space clustering of halo stars (for a review, see Helmi, 2020). In the last five years these data, combined with other spectroscopic surveys, led to the identification of the stellar debris from one of the most massive galaxy ever accreted by the Milky Way, _Gaia-Enceladus_ (also known as the _Gaia Sausage_) (Helmi et al., 2018; Haywood et al., 2018; Belokurov et al., 2018), and of at least six additional progenitors (e.g., Deason et al., 2019; Myeong et al., 2018; McSween et al., 2019; Iorio & Belokurov, 2019; Koppelman et al., 2019, 2019; Mackereth et al., 2019; Necib et al., 2020, 2020; Vassiliev, 2019; Gallart et al., 2019; Horta et al., 2021; Malhan et al., 2022). Their location in 6D phase-space together with their metallicities and alpha-element abundances, identifies these substructures as having formed in satellites with different masses and star formation histories (see Helmi, 2020). The new data therefore allowed the global properties (such as mass and accretion redshift) to be determined for the most massive progenitors of the Galaxy. More recently, the H3 survey (Conroy et al., 2019) of high latitude stars in the MW found evidence of six chemo-dynamical substructures in the outer halo, beyond the reach of _Gaia_(Naidu et al., 2020). This brought the census of Galactic progenitors up to \(\sim 10\), accounting for \(\sim 95\) per cent of the mass of the stellar halo. Achieving a similarly detailed assembly reconstruction for large samples of galaxies would undoubtedly open an entirely new window into galaxy formation and cosmology.
_Gaia_ also provided the precise orbits of nearly all of the Galactic globular clusters (Gaia Collaboration et al., 2018; Vassiliev, 2019; Baumgardt et al., 2019). These data, along with the GC chemical abundances and ages, offered a novel and complementary way of reconstructing galaxy assembly. GCs are particularly powerful tracers of galaxy assembly because they can be studied at much larger distances than individual stars, up to \(\sim 100\) Mpc, have long phase-mixing timescales, and their abundance relative to field stars increases in low-mass galaxies (Peng et al., 2008; Georgiev et al., 2010; Forbes et al., 2018). Using hydrodynamical cosmological simulations from the E-MOSAICS project, which include the formation and evolution of star clusters, Kruijssen et al. (2019) demonstrated that GCs are excellent tracers of the properties of their progenitor galaxies. Kruijssen et al. (2019) then used the age-metallicity relation of the MW GCs to obtain the most detailed reconstruction to date of the merger tree of the Galaxy. Trujillo-Gomez et al. (2021) found a surprising amount of galaxy assembly information encoded in the 3D GC system kinematics of simulated MW-mass galaxies, and applied a statistical method to the _Gaia_ data to produce an independent and consistent reconstruction of the MW merger tree. Massari et al. (2019) used phase-space and age-metallicity information to associate most of the accreted GCs to each of the five most massive (likely) progenitors, _Gaia-Enceladus_, _Kraken_, _Sagittarius_, _Sequia_, and the progenitor of the _Helmi streams_. Pfeffer et al. (2020) studied the relationship between the current phase-space distribution of GCs and the properties of their progenitors in cosmological simulations. Using machine learning to exploit this relation, along with the GC ages and metallicities in the simulations, Kruijssen et al. (2020) trained an artificial neural network to recover the masses and accretion redshifts of the five dominant MW progenitors. New studies continue to uncover further details of the MW assembly. For instance, Malhan et al. (2022) analyzed the statistical 6D distribution of a large population of tracers in the Galactic halo (including GCs and stellar streams) to robustly search for phase-space substructures, and discovered a potential additional progenitor named _Pontus_.
In this study, we aim to provide the initial steps to extend the powerful methods that have been applied to the Milky Way to recover the assembly histories of external galaxies based on their observed GC populations. First, we study the relation between the fraction of GCs accreted from satellites and fundamental galaxy properties in the simulations. We then investigate the link between GC origin (whether a GC was formed in-situ within the galaxy or was accreted), and its individual properties as determined by standard photometric and spectroscopic observations. The main result we highlight is that extragalactic GC observables contain a record of their progenitor properties, and that this information can be used to recover the origin of individual GCs using only a few key observables (their positions, radial velocities, and metallicities, and the stellar mass and effective radius of their host galaxy). With the goal of applying our classifier algorithm to upcoming deep, wide-field spectroscopic galaxy surveys, we provide the classification model code in a public repository.
The paper is organized as follows. Section 2 describes the simulations and the galaxy and GC sample selection. Section 3 shows how the accreted fraction depends on galaxy properties. Section 4 describes the deep learning model used to predict GC origin in external galaxies, and Section 5 provides a detailed analysis of its predictions for simulated galaxies and the results of a real-world test using the MW data. The results are discussed in Section 6 and summarized in Section 7.
## 2 Simulated Galaxy and GC Sample
In this work we use the simulated galaxies and star cluster populations from the E-MOSAICS simulations. Below we describe the simulations and sample selection criteria.
### The E-MOSAICS simulations
E-MOSAICS (MOdelling Star cluster population Assembly In Cosmological Simulations within EAGLE) is a suite of hydrodynamical cosmological simulations that follow the formation and co-evolution of galaxies and their star cluster populations (Pfeffer et al., 2018;
Kruijssen et al., 2019). The physics of galaxy formation is implemented using the EAGLE model (Schaye et al., 2015; Crain et al., 2015), which uses a feedback prescription calibrated to reproduce the stellar mass function and disc-galaxy sizes at \(z=0\). The EAGLE model also reproduces many additional key properties of the observed galaxy population, including their present-day luminosities and colours (Trayford et al., 2015), the evolution of the stellar mass function, star formation rates (Furlong et al., 2015), and galaxy sizes (Furlong et al., 2016), and the chemical abundances of stars in the Milky Way (Mackereth et al., 2018).
To model the formation and evolution of star clusters, the simulations use an improved version of the MOSAICS subgrid model (Kruijssen et al., 2011; Pfeffer et al., 2018). Star clusters are treated as a subgrid population within each star particle, and form according to an environmentally-dependent prescription based on models for the fraction of stars formed in bound clusters (Kruijssen, 2012), and for the upper truncation mass of the Schechter initial cluster mass function (ICMF; Reina-Campos and Kruijssen, 2017). Both of these quantities are calculated using the local gas conditions, and increase with the gas pressure. Clusters lose mass via stellar evolution, two-body relaxation, and tidal shocks, and may be completely disrupted by infall into the centres of galaxies via dynamical friction. Mass loss due to tidal shocks and two-body relaxation is calculated self-consistently at each time step from the local tidal field.
The E-MOSAICS simulations have been shown to reproduce several key properties of GC populations. These include the massive end of the GC mass function (Pfeffer et al., 2018; Hughes et al., 2022), GC specific frequencies (Kruijssen et al., 2019; Bastian et al., 2020), the color-luminosity relation of metal-poor GCs (the 'blue tilt', Usher et al., 2018), the GC radial distribution (Reina-Campos et al., 2021) and kinematics (Trujillo-Gomez et al., 2021), and the GC system mass-halo mass relation (Bastian et al., 2020). They also reproduce the age distribution of GCs in satellite streams (Hughes et al., 2019), and the fraction of stars in the bulge of the Galaxy that were born in GCs (Hughes et al., 2019). These simulations demonstrated that the properties of GC populations reflect the environment and assembly of their host galaxies (Kruijssen et al., 2019), and this allowed the most detailed reconstruction so far of the merger tree of the Milky Way (Kruijssen et al., 2019), including the prediction of the masses and accretion times of its five most massive progenitors using the properties of its GCs (Kruijssen et al., 2020). We refer the reader to Pfeffer et al. (2018) for a complete description of the physical models in the simulations.
The E-MOSAICS simulations are unique in their ability to model star cluster populations in a cosmological volume to \(z=0\), and their success in reproducing galaxy and GC observables makes them an ideal tool to investigate how the intrinsic properties of GCs relate to their natal galaxies. In this work we use the galaxies and GCs from the E-MOSAICS simulation of a (34.4 Mpc)\({}^{3}\) volume.
## 3 Globular cluster origin across the galaxy population
We begin by examining how GC origin is related to GC and host galaxy properties. Figure 1 shows the stellar-to-halo mass relation of the simulated galaxies coloured by the number of GCs they host. The size of the GC population increases steeply with halo mass, reproducing the observed qualitative trend (e.g. Blakeslee et al., 1997; Burkert and Forbes, 2020). In a more detailed analysis, we found that the relation between halo mass and total mass in GCs in the simulations also matches observations (Bastian et al., 2020). At fixed galaxy stellar mass there is a weak secondary trend of increasing number of GCs with increasing halo mass.
Figure 2 shows the fraction of accreted GCs and stars in each galaxy as a function of galaxy stellar mass. The fraction of accreted GCs in E-MOSAICS increases with galaxy stellar mass following the qualitative trend found for stars in semi-empirical and semi-analytical models, as well as in cosmological simulations including EAGLE (e.g. Rodriguez-Gomez et al., 2016; Qu et al., 2017; Clauwens et al., 2018; Tacchella et al., 2019; Davison et al., 2020; Moster et al., 2020). This is not surprising, and is a direct consequence of the hierarchical nature of galaxy assembly combined with the shape of the fundamental stellar-to-halo mass relation (Fig. 1). The stellar-to-halo mass relation is very steep at low masses and becomes shallower for galaxies more massive than the MW. Massive galaxies are therefore partially assembled by hierarchical accretion of satellites with relatively high stellar masses, while dwarfs accrete only satellites with relatively low stellar masses. While the accreted fraction increases with galaxy mass for both stars and GCs, Fig. 2 shows that the fraction of accreted GCs is always larger. This is a result of the higher mean specific frequencies of satellites relative to centrals.
There is significant scatter in the GC accreted fraction at fixed galaxy stellar mass. To understand the physical drivers of the scatter, we search for secondary trends in the GC accreted fraction. Figure 2 also shows the median GC accreted fraction of galaxies hosted by the least/most massive DM haloes in each stellar mass bin (in the lower/upper quartile of the distribution of \(M_{\rm halo}\) in each bin). At fixed stellar mass, galaxies hosted by more massive DM haloes have larger GC accreted fractions, as expected from their larger fraction of accreted material from DM-rich satellites. The left panel of Figure 3 shows how the accreted fraction of stars varies with galaxy metallicity, with metal-poor galaxies typically hosting a larger fraction of accreted stars. This trend is driven by the decrease in the mean metallicity due to the accretion of a larger fraction of stars from satellites1. We also find that the mean metallicity of accreted GCs is higher in galaxies with higher GC accreted fractions due to the dominant contribution of the most massive satellite (which also contains the most metal-rich GCs).
Footnote 1: There is an additional weak trend where the in-situ GCs in massive galaxies with high GC accreted fractions tend to be less enriched than in those with low accreted fractions. This originates from the anticorrelation between halo mass (or formation time) and galaxy metallicity at fixed stellar mass discussed above.
In the right panel of Figure 3 we show an even stronger trend found in the accreted fraction of metal-poor and metal-rich GC systems (i.e. considering the mean GC metallicity of each galaxy). At fixed stellar mass, galaxies with metal-poor GC systems have systematically higher accreted GC fractions compared to those with metal-rich systems. As in the case of the stellar component, this distinct metallicity dependence of the accreted fraction results from a combination of the overall effect of larger fractions of (metal-poor) GCs accreted from satellites on the mean GC metallicity, and the effect of DM halo formation times on the in-situ GCs.
Figure 4 shows the accreted fraction of metal-poor and metal-rich GC subpopulations as a function of galaxy stellar mass. The subpopulations are defined using the stellar mass-dependent split shown in Table 1. Metallicity alone is not a direct proxy for GC origin. While metal-rich GCs tend to form in-situ in low-mass galaxies, the metal-poor population is typically a mix of in-situ and accreted objects. Figure 5 shows the distribution of accreted and in-situ GCs as a function of GC metallicity and host galaxy mass. It confirms that while accreted GCs tend to be more metal-poor than in-situ GCs, there is significant overlap and galaxy-to-galaxy variation in the populations, and metallicity alone is generally not enough to determine GC origin.
Figure 1: Number of GCs hosted by each simulated galaxy as a function of the galaxy stellar and halo mass. The number of hosted GCs increases steeply with both stellar and halo mass. At fixed halo mass, galaxies with larger stellar mass tend to host more GCs. See Section 2.2 for the sample selection criteria.
Figure 2: Fraction of stars and GCs accreted from satellites as a function of host galaxy stellar mass (points), coloured by halo mass. The grey line and shading show the median, [5,95], and [25,75] percentile range of the accreted fraction in bins of stellar mass (with error bars corresponding to the uncertainty in the median). The blue and red lines show the median accreted fraction in the bottom and top quartiles of halo mass in each bin, respectively. The dotted line shows the fraction of accreted stars. The median GC accreted fraction increases with stellar mass, and is always larger than for stars. At fixed stellar mass, galaxies hosted by more massive DM haloes have larger fractions of accreted GCs.
## 4 Predicting GC origin using machine learning
We now turn to the question of whether the origin of a particular GC can be predicted using its observable properties, and which observables are best suited for this purpose. We take advantage of the flexibility and predictive power of deep learning algorithms when applied to problems with highly nonlinear relations between the input and output variables. In addition, we explore other supervised learning techniques to find possible alternatives with higher predictive power.
After exploring several classifier algorithms including k-nearest neighbors (Fix & Hodges, 1989), Logistic Regression (Pearl & Reed, 1920), Support Vector Machines (Boser et al., 1992), Decision Trees (Hunt et al., 1966), and Random Forests (Breiman, 2001), we find that their predictive accuracy is generally lower compared to deep learning, while most do not provide probabilistic outputs. The probabilistic output of neural networks will be key for tuning and predicting the uncertainties in the model (see Sections 5.1 and 5.4).
Figure 4: Accreted GC fraction as a function of galaxy stellar mass for metal-poor (top panel) and metal-rich (bottom panel) GC populations. The GC subpopulations are selected based on the stellar mass-dependent metallicity split in Table 1. Metal-poor GCs typically have a mixed origin. Metal-rich GCs in low-mass galaxies are almost exclusively formed in-situ, while in galaxies more massive than the MW they have a mixed origin.
Figure 5: Origin of individual GCs in the simulation as a function of GC metallicity and host galaxy mass. The upper mass-dependent metallicity limit reflects the selection applied to reduce contamination by artificially under-disrupted GCs (see Section 2.2). Accreted GCs tend to have lower metallicities than in-situ GCs, but there is a significant overlap between the two populations. There is also significant variation in the metallicity distribution of the accreted and in-situ GCs across galaxies of similar mass due to differences in their assembly histories.
Figure 3: Accreted GC fraction as a function of galaxy stellar mass. Left: coloured by mean galaxy metallicity [Fe/H]. The blue and red lines show the median accreted GC fractions for the galaxies with the 25 per cent lowest and highest metallicities, respectively. Right: coloured by mean GC metallicity. The blue and red lines show the median accreted GC fractions for the galaxies in the bottom and top GC system metallicity quartiles, respectively. At fixed stellar mass, galaxies with higher metallicity stars and GCs have systematically lower accreted fractions than those at lower metallicities. This is driven by the fact that accreted stars and GCs from low-mass satellites are metal-poor.
### Algorithm description
For the fiducial model we employ a Multilayer Perceptron artificial neural network2 architecture (MLP; Rumelhart et al., 1986) with dense, sequential layers. MLPs are powerful classifiers that are ideally suited for complex problems where the classes are not linearly separable, as we expect here. They are also advantageous compared to more traditional models because they automatically create useful new features from the provided inputs. We use the deep learning library Keras (Chollet et al., 2015) implemented within the TensorFlow framework (Abadi et al., 2015).
Footnote 2: also known as feed-forward neural network.
The MLP architecture consists of several layers of artificial neurons that are connected in sequence, such that each neuron takes as input the combined outputs from all the neurons in the previous layer. To adapt the model to our specific classification task, we set the input layer to contain as many nodes (i.e. neurons) as the dimensions of the input data (i.e. the number of GC observables, \(N_{\text{input}}\)), and the output layer to have 2 dimensions corresponding to the two possible classification labels: _in-situ_ and _accreted_. The number of hidden layers \(N_{\text{layers}}\), and the number of nodes per layer (\(N_{\text{nodes}}\)) are left as free parameters to be optimized using the validation data. The input and hidden layers use the standard 'Rectified Linear Unit' (ReLU) activation function, \(h(x)=\max(0,x)\), and the output layer uses the sigmoid activation function to convert the output into a binary probability in the range \([0,1]\), \(P_{\text{in-situ}}=1-P_{\text{accreted}}\). The model is compiled using the 'Adam' optimizer (Kingma & Ba, 2014), with the standard binary cross-entropy loss function used in binary classification tasks,
\[\mathcal{L}=-\frac{1}{N_{\text{GC}}}\sum_{i=1}^{N_{\text{GC}}}\left[y_{i}\log(P_{i})+(1-y_{i})\log(1-P_{i})\right], \tag{2}\]
where \(y_{i}\) is the true label and \(P_{i}\) is the output probability for GC \(i\) (1 for in-situ, 0 for accreted). Below we describe the input features and training procedure.
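To make the architecture concrete, the following is a minimal Keras sketch of such a classifier. It is an illustration rather than the exact model used here: for simplicity it uses a single sigmoid output node giving \(P_{\rm in-situ}\) (equivalent to the two-node binary output described above), and the layer and node counts are passed as arguments so they can be tuned as in Section 4.2.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(n_input=17, n_layers=4, n_nodes=20):
    """Dense feed-forward classifier: n_layers hidden ReLU layers with n_nodes
    neurons each, and a sigmoid output interpreted as P_in-situ."""
    model = keras.Sequential(
        [keras.Input(shape=(n_input,))]
        + [layers.Dense(n_nodes, activation="relu") for _ in range(n_layers)]
        + [layers.Dense(1, activation="sigmoid")]
    )
    model.compile(optimizer="adam",              # 'Adam' optimizer
                  loss="binary_crossentropy",    # Eq. (2)
                  metrics=["accuracy"])
    return model

model = build_mlp()   # the fiducial [N_layers, N_nodes] = [4, 20] architecture
```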
### Training a neural network classifier on simulated galaxies and their GCs
To select the set of input observables (i.e. the features) used by the model to predict GC origin, we first explore a large set of physically-motivated GC and host galaxy observables. We iteratively remove features that do not affect the accuracy of the predictions to reduce as much as possible the complexity of the model. This procedure yields a fiducial set of \(N_{\text{input}}=17\) observables that we use in the final step to optimize the artificial network (ANN) architecture and to train the fiducial model. Table 2 summarizes the features. These are all derived using physically-motivated combinations of GC observables (metallicity, alpha-element abundance, projected position on the sky, and line-of-sight velocity), and global galaxy properties (stellar mass, mean metallicity, effective radius, and stellar velocity dispersion). Since GC ages are notoriously difficult to measure precisely beyond the Milky Way, we ignore them here and evaluate their contribution to the predictions in Section 5.7.
We select a single random orientation for each galaxy corresponding to a projection onto the \(x-y\) plane of the simulation box, and calculate the positions and velocities in the reference frame of the centre of the galaxy obtained using SUBFIND. We define the GC ‘rotation velocity’ as the dot product of the GC LOS velocity \(\mathbf{V_{p}}\) and the unit vector pointing in the direction of net rotation of the galaxy at the projected GC position, \(V_{\text{rot}}\equiv\mathbf{V_{p}}\cdot\mathbf{V_{rot}^{gal}}/|\mathbf{V_{rot}^{gal}}|\). We further test an augmented feature set by including an additional set of six features that describe the distribution of the projected distance and LOS velocity of the GC system (using the median, inter-quartile range, skewness, and kurtosis), and quantify the projected GC distance and velocity relative to the four nearest GC neighbours. We find that these additional features do not increase the model performance, and therefore keep only the original set of 17 input observables. Having chosen the final feature set, we follow the common practice of standardizing each feature by subtracting the mean and dividing by the standard deviation to obtain distributions with a mean of 0 and standard deviation of 1.
To train the model we use the sample of 69,136 simulated GCs with unambiguous origin hosted by 921 central galaxies (see Section 3). Normally, the model would be trained on a random subsample containing the majority of the simulated GCs (typically \(\sim\) 70-80 per cent), and the remaining fraction would be used in model validation and testing. However, to avoid leakage of the information on host galaxy properties from the GCs in the training set to the GCs in the test set, we adopt a different approach. Instead, we split the host galaxies randomly into a training set containing all the GCs hosted by a subset comprised of 80 per cent of the galaxies (50,612 GCs from 736 galaxies), and a test set containing all the GCs in the remaining 20 per cent (18,524 GCs from 185 galaxies). This ensures that the model is not exposed to any of the test data during training, and increases its capacity to generalize to other datasets, including GCs in the real Universe.
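A convenient way to implement this galaxy-level split is to group the GCs by their host before splitting, so that no host galaxy contributes to both sets. The sketch below assumes arrays `X` (GC features), `y` (origin labels) and `galaxy_id` (host identifier per GC); these names are placeholders, not quantities defined in the paper.

```python
from sklearn.model_selection import GroupShuffleSplit

def split_by_galaxy(X, y, galaxy_id, test_frac=0.2, seed=0):
    """Hold out entire galaxies so that information about a host cannot leak
    from the training GCs to the test GCs."""
    gss = GroupShuffleSplit(n_splits=1, test_size=test_frac, random_state=seed)
    train_idx, test_idx = next(gss.split(X, y, groups=galaxy_id))
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])
```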
After selecting the training and test sets, we perform a grid search to optimize the main hyperparameters of the network: the number of layers, and the number of nodes (neurons) per layer. To maximize the use of the simulation data, we choose to use the same data for both validation and testing. We have verified that using separate validation and test data has no effect on the ability of the model to generalize to new data.3 We evaluate the validation accuracy of predictions using the test set for a model with parameters in the two-dimensional grid defined by the values \(N_{\text{layers}}\in[2,3,4,5,6,7,8,9,10]\), and \(N_{\text{nodes}}\in[10,20,50,100,200]\). In each iteration, the training is stopped after 30 epochs, or when the accuracy does not increase over 5 epochs. The architecture with \([N_{\text{layers}},\,N_{\text{nodes}}]=[4,20]\) results in the highest validation accuracy \(\approx 80\) per cent, and we select it for the fiducial model. Using these parameters we retrain the final model for 100 epochs, stopping early when the accuracy does not increase over the last 20 epochs. This model is saved and used to evaluate the predictions and performance in Section 5. We refer to it throughout the paper as the 'fiducial' model.
Footnote 3: We tested this by running an experiment where the model performance was tested using data that had not been used in the hyperparameter tuning. The accuracy was unaffected.
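The grid search itself can be written as a short loop. The sketch below reuses the `build_mlp` helper from the previous sketch and uses Keras early stopping to halt training when the validation accuracy plateaus; the grids and patience value mirror those quoted above, but the helper is an assumption rather than the exact training script.

```python
import itertools
from tensorflow import keras

def grid_search(X_tr, y_tr, X_val, y_val,
                layer_grid=(2, 3, 4, 5, 6, 7, 8, 9, 10),
                node_grid=(10, 20, 50, 100, 200)):
    """Return the (N_layers, N_nodes) pair with the highest validation accuracy."""
    best_cfg, best_acc = None, -1.0
    for n_layers, n_nodes in itertools.product(layer_grid, node_grid):
        model = build_mlp(n_input=X_tr.shape[1], n_layers=n_layers, n_nodes=n_nodes)
        stop = keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=5,
                                             restore_best_weights=True)
        model.fit(X_tr, y_tr, validation_data=(X_val, y_val),
                  epochs=30, callbacks=[stop], verbose=0)
        acc = model.evaluate(X_val, y_val, verbose=0)[1]
        if acc > best_acc:
            best_cfg, best_acc = (n_layers, n_nodes), acc
    return best_cfg, best_acc
```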
## 5 Results
In this section we evaluate and tune the performance of the fiducial model on the simulated test data. We then analyze the detailed predictions and the relative importance of each observable, and evaluate the model confidence. We also perform the first real-world test of the algorithm by predicting the origin of the MW GCs. Lastly, we test the impact of observational uncertainties and evaluate the performance improvement when GC ages are included.
### Model performance
We now evaluate the performance of the classifier on the test sample containing 185 galaxies (i.e. 20 per cent of the sample) drawn at random from the simulation, and the 18,524 GCs they host (with unambiguous origin). Figure 6 shows the distribution of predicted probabilities \(P_{\rm in-situ}\) for the simulated test sample (top panel) along with the number of correct predictions. To calculate the accuracy (i.e. the fraction of correct predictions), we must map the predicted probabilities output by the classifier to binary class labels assuming a simple probability threshold \(P_{\rm thresh}=0.5\), such that a GC is labelled 'in-situ' when \(P_{\rm in-situ}>P_{\rm thresh}\), and 'accreted' otherwise. The overall accuracy of the model (measured across the entire test GC sample) is 80 per cent. We find that the classifier produces two distinct peaks in the distribution of probabilities, with each peak near the maximum probability for each class (i.e. \(P_{\rm in-situ}\sim 0\) or \(P_{\rm in-situ}\sim 1\)). This shows that the neural network reaches a high confidence when predicting the origin of the majority of the GCs in the test sample. The fraction of correct predictions in each bin is shown in the bottom panel of Figure 6. The accuracy increases monotonically towards the most confident predictions (at \(P_{\rm in-situ}\sim 0\), and \(P_{\rm in-situ}\sim 1\)). This is evidence that the classifier successfully predicts the correct labels with high confidence (i.e. high probabilities). A minority of the predictions lie in the ambiguous region with \(P_{\rm in-situ}\sim 0.3-0.7\).
To exploit the probabilistic nature of the model to improve the accuracy of the classifications, we introduce a new label, and define 'ambiguous' predictions as those with \(P_{\rm in-situ}<P_{\rm thresh}\) and \(P_{\rm accreted}=1-P_{\rm in-situ}<P_{\rm thresh}\), where \(P_{\rm thresh}\) is the decision threshold. Figure 7 shows the effect of increasing the decision threshold on the fraction of unambiguous predictions, and on their accuracy. As expected, the accuracy increases with \(P_{\rm thresh}\), while the completeness (i.e. the unambiguous fraction of predictions) decreases: \(3/4\) of the sample reaches an accuracy of 85 per cent, while only half of the sample reaches 90 per cent accuracy. To optimize both the accuracy and the sample completeness, we define the unambiguous predictions using a fiducial value of \(P_{\rm thresh}=0.79\). This results in an accuracy of \(\sim 89\) per cent (for a 60 per cent completeness). The ambiguous region is shown using grey shading in Figure 6.
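The accuracy/completeness trade-off in Figure 7 can be reproduced from the predicted probabilities alone. The sketch below uses placeholder arrays `p_insitu` (the network outputs) and `y_true` (the true labels); it scans the decision threshold and reports the fraction of unambiguous predictions and their accuracy.

```python
import numpy as np

def threshold_scan(p_insitu, y_true, thresholds=np.linspace(0.5, 0.99, 50)):
    """Accuracy and completeness when GCs with max(P) < P_thresh are labelled ambiguous."""
    p = np.asarray(p_insitu, dtype=float)
    y = np.asarray(y_true, dtype=int)        # 1 = in-situ, 0 = accreted
    pred = (p > 0.5).astype(int)
    rows = []
    for t in thresholds:
        confident = np.maximum(p, 1.0 - p) >= t
        if not confident.any():
            continue
        accuracy = (pred[confident] == y[confident]).mean()
        completeness = confident.mean()
        rows.append((t, accuracy, completeness))
    return rows
```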
The results of the classification of the test set are shown in Figure 8 using the standard confusion matrix. The columns represent the true labels, and the rows show the number of GCs in each column that are predicted to be in-situ, accreted, or ambiguous. After removing the ambiguous predictions, the model erroneously
\begin{table}
\begin{tabular}{l l l} \hline \hline Feature & Object & Definition \\ \hline
\(\log M_{\star}^{\rm gal}\) & galaxy & stellar mass \\
\(\log R_{\rm e}^{\rm gal}\) & galaxy & projected effective radius \\
\([{\rm Fe/H}]_{\rm gal}\) & galaxy & mean metallicity \\
\([{\rm\alpha/Fe}]_{\rm gal}\) & galaxy & mean oxygen abundance relative to iron [O/Fe] \\
\(\sigma_{\rm gal}\) & galaxy & stellar velocity dispersion \\
\(\log N_{\rm GC}\) & galaxy & total number of GCs \\
\(\sigma_{\rm GC}\) & galaxy & GC system velocity dispersion \\
\([{\rm Fe/H}]\) & GC & metallicity \\
\([{\rm\alpha/Fe}]\) & GC & oxygen abundance relative to iron [O/Fe] \\
\(\Delta[{\rm Fe/H}]\) & GC/galaxy & metallicity relative to the galaxy, \([{\rm Fe/H}]-[{\rm Fe/H}]_{\rm gal}\) \\
\(\Delta[{\rm\alpha/Fe}]\) & GC/galaxy & alpha-abundance relative to the galaxy, \([{\rm\alpha/Fe}]-[{\rm\alpha/Fe}]_{\rm gal}\) \\
\(\log R_{\rm p}/R_{\rm e}^{\rm gal}\) & GC/galaxy & projected distance from galaxy centre in units of the galaxy effective radius \\
\(\sqrt{|V_{\rm P}|/\sigma_{\rm gal}}\) & GC/galaxy & LOS velocity in units of the galaxy velocity dispersion \\
\(\sqrt{|V_{\rm P}|/\sigma_{\rm GC}}\) & GC/galaxy & LOS velocity in units of the GC system velocity dispersion \\
\(V_{\rm rot}\) & GC/galaxy & ‘projected rotation velocity’: dot product of LOS velocity and the unit vector pointing along the galaxy rotation velocity at the GC projected position (see Section 4.2) \\
\(\log R_{\rm p}|V_{\rm P}|\) & GC & ‘projected angular momentum’: product of the projected galactocentric distance and magnitude of LOS velocity \\
\((R_{\rm p}V_{\rm rot})^{1/3}\) & GC/galaxy & ‘projected angular momentum vector’: product of projected galactocentric distance and \(V_{\rm rot}\) (see Section 4.2) \\
\hline \hline \end{tabular}
\end{table}
Table 2: GC and host galaxy observables used as features in the fiducial neural network classifier. Projected positions and LOS velocities are calculated with respect to the position and velocity of the centre of the galaxy, assuming a single random orientation for each galaxy.
Figure 6: Accuracy distribution of the classifier on the simulated test sample as a function of the predicted probabilities. Top: distribution of the predicted in-situ probability \(P_{\rm in-situ}\) across the entire GC test sample compared to the distribution of only the correct predictions (assuming a decision threshold \(P_{\rm thresh}=0.5\)). Bottom: accuracy of the predictions in each probability bin. The vertical line marks the initial value of \(P_{\rm thresh}=0.5\) we adopt for labelling the predictions of the classifier, where \(P_{\rm in-situ}>P_{\rm thresh}\) corresponds to in-situ, and \(P_{\rm in-situ}\leq P_{\rm thresh}\) corresponds to accreted. The grey shaded region indicates ambiguous predictions as defined in Sect. 5.1. Both the probability distribution and the accurate predictions are peaked near the two extremes of \(P_{\rm in-situ}\), showing that the classifier makes accurate predictions with high confidence.
classifies 6 per cent (\(394/5834\)) of the accreted GCs, and a much larger fraction of in-situ GCs (\(891/4070\) = 18 per cent).
The fraction of predicted labels for each class is shown in Figure 9. The ambiguous class represents a nearly constant fraction \(\sim 25\)\(-\)\(45\) per cent of the predicted labels across the entire range of host galaxy stellar masses. The figure also shows that the model correctly predicts the dominant GC origin as a function of host galaxy stellar mass. Indeed, we find that the predictive accuracy remains nearly constant as a function of galaxy mass. However, the fraction of the dominant class is slightly overpredicted in dwarfs and massive ellipticals (but still lies within the uncertainty defined by the grey band).
To evaluate the impact of global galaxy properties on the model performance, Figure 10 shows the accuracy obtained across each galaxy as a function of galaxy stellar mass, metallicity, and GC accreted fraction. To properly account for the large class imbalance in some galaxies (i.e. where accreted or in-situ GCs dominate), we also show the balanced accuracy (defined as the average of the accuracies calculated separately for each class). The accuracy reaches \(>80\) per cent for the majority of galaxies, while it drops below 60 per cent in only a few galaxies. The low values of balanced accuracy in the most massive galaxies and several dwarfs are due to poor performance in identifying GCs in the minority class (which corresponds to in-situ for massive galaxies, and accreted in some dwarfs). This is more common among the most massive galaxies due to the small number of these objects in the training set. In addition, accreted GCs in massive galaxies have properties that are very similar to in-situ GCs due to the high masses of their satellite progenitors (see Sec. 5.3). There is a weak trend of decreasing accuracy in metal-poor galaxies, which reflects the weak correlation between accreted fraction and metallicity (see Fig. 3).
### Detailed predictions for simulated GC systems
To evaluate the performance of the classifier across individual GCs, we now look at a few specific examples of galaxies in the test set. Figure 11 shows the projected distributions of GCs labeled by their predicted and true origin in four example galaxies selected randomly in each stellar mass bin, including a massive elliptical, a MW-mass galaxy, a massive dwarf, and a low-mass dwarf. In the massive elliptical galaxy, the model has difficulty identifying any of the in-situ GCs (as indicated by the low balanced accuracy), despite the galaxy containing 11 per cent of GCs in this class. Similarly, in the MW-mass galaxy that is currently undergoing a massive merger, the model achieves high accuracy overall but again has difficulty identifying the small fraction of in-situ GCs. Across the dwarf galaxies the model shows excellent performance on both classes (i.e. a high overall and balanced accuracy), despite the relatively low accreted fractions. The better performance in low-mass galaxies is consistent with their relative dominance across the training set. In general, the model seems to produce lower confidence predictions at intermedi
Figure 8: Confusion matrix showing the distribution of the predicted versus true labels of GCs in the test set. The ‘ambiguous’ label corresponds to predictions with low confidence, \(P<P_{\rm thresh}=0.79\). For each category the matrix shows the number of GCs, and the fraction relative to the total sample in parentheses. The background shading is darker for larger fractions. The model is excellent at classifying accreted GCs (with only 6 per cent falsely identified as in-situ), but misclassifies in-situ GCs in 18 per cent of the cases.
Figure 7: Effect of increasing the decision threshold of the neural network classifier on the fraction of unambiguous predictions in the test sample and their accuracy. The colour bar shows the decision threshold \(P_{\rm thresh}\) for each class as well as for the combined sample. A GC is labelled ‘in-situ’ when \(P_{\rm in-situ}>P_{\rm thresh}\), and ‘accreted’ when \(P_{\rm accreted}=1-P_{\rm in-situ}>P_{\rm thresh}\). The star symbol indicates the fiducial decision threshold adopted in this work. It corresponds to an accuracy of \(\sim 89\) per cent on 60 per cent of the GC sample.
Figure 9: Fraction of test GCs in the predicted classes as a function of host galaxy stellar mass. The ‘ambiguous’ class corresponds to predictions below the confidence threshold, \(P<P_{\rm thresh}=0.79\). The grey line and grey shaded area show the predicted in-situ fraction and the uncertainty range (due to the ambiguous predictions). The dashed line indicates the true in-situ fraction. Even though \(\sim 25\)\(-\)\(45\) per cent of the sample is classified as ambiguous at a given mass, the model correctly predicts the majority class as a function of stellar mass.
To understand the role of the GC phase-space distribution in the model predictions we show the same galaxies in projected position-velocity space in Figure 12. The importance of the 'projected angular momentum' \(R_{\rm p}|V_{\rm p}|\) is evident in the massive elliptical, with the decision boundary of the algorithm describing a near circle in position-velocity space, equivalent to a nearly constant projected angular momentum. This separation is also clear in lower mass galaxies, but in those cases the model predicts a more complicated boundary based on additional GC properties including their chemical abundances (see Fig. 13). We investigate which GC properties are most important for predicting GC origin in the next section.
### Importance of each GC and galaxy observable
To assess how important each GC and galaxy observable is for the predictions of the model, we calculate the 'permutation feature importances' (Breiman, 2001). For a given feature, its importance is defined as the mean decrease in accuracy when the feature information in the test set is removed from the model input. For this, the feature vector of the desired feature is randomly shuffled while leaving the other features unchanged. The accuracy is then computed using the predictions over several random realizations of the shuffled data \(N_{\rm iter}\). The importance is then the difference in accuracy between the shuffled data and the fiducial model averaged over all realizations. The left panel of Figure 13 shows the result using \(N_{\rm iter}=30\).
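The permutation-importance procedure described above can be sketched in a few lines; this is a minimal illustration assuming a fitted classifier with a scikit-learn-style `predict` method and arrays `X_test`, `y_test` (our own names), not the exact implementation used in the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(model, X_test, y_test, n_iter=30, seed=0):
    """Mean decrease in accuracy when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y_test, model.predict(X_test))
    importances = np.zeros(X_test.shape[1])
    for j in range(X_test.shape[1]):
        drops = []
        for _ in range(n_iter):
            X_shuffled = X_test.copy()
            # destroy the information carried by feature j only
            X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
            drops.append(baseline - accuracy_score(y_test, model.predict(X_shuffled)))
        importances[j] = np.mean(drops)
    return importances
```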
Surprisingly, the most important features are host galaxy properties: the 2D effective radius, velocity dispersion (which is very similar for GCs and stars), and alpha-element abundance. These are followed by the GC projected galactocentric radius, the projected angular momentum, galaxy metallicity, and GC alpha-abundance offset relative to the galaxy. The importance of the galaxy properties might seem counterintuitive at first glance. However, it can be explained in two ways. First, most of the galaxy properties we use here correlate strongly with stellar mass, such that the classifier can obtain galaxy mass or size information indirectly from any of them. As we show in Fig. 2, galaxy mass is the strongest predictor of GC accreted fraction, so it is natural for the algorithm to use it to estimate to first order the likelihood of a GC having formed in-situ. Second, highly covariant features can skew the results of the permutation technique, artificially reducing the importance of all features in a covariant cluster (Wei et al., 2015). This occurs because the model can always obtain the information on a permuted feature from one of its covariates.
To remove this possible bias, we perform a clustering analysis of all the features and split them into covariant groups based on a correlation threshold, and select only one feature from each group (see Appendix A for details). We train a new model with the fiducial architecture but using only the selected subset of features4, and obtain the new permutation importances (right panel of Fig. 13). The three remaining galaxy properties still have the highest importance, followed by the GC metallicity and alpha-abundance relative to the galaxy, and the projected angular momentum and projected galactocentric radius in units of \(R_{\rm e}\).
Footnote 4: Since removing covariant features may lead to a slight loss of predictive power, we use this model variant only for the purpose of evaluating feature importance.
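One possible realization of the feature-grouping step (the details of Appendix A may differ) is hierarchical clustering on the feature correlation matrix, as in the hedged sketch below; `X` and `feature_names` are placeholders we introduce for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def covariant_groups(X, feature_names, corr_threshold=0.8):
    """Group features whose absolute Pearson correlation exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    dist = 1.0 - corr                      # turn correlation into a distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=1.0 - corr_threshold, criterion="distance")
    groups = {}
    for name, lab in zip(feature_names, labels):
        groups.setdefault(lab, []).append(name)
    # keep one representative feature per group when retraining
    return list(groups.values())
```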
For galaxies in surveys with limited data (i.e. no alpha abundances), Figure 13 also provides an estimate of the performance of the model when a specific observable is not included. However, the optimal solution in this case would be to retrain a new model with the reduced feature set (see Section 5.5 for a discussion of the performance of such a reduced model).
In Appendix B we show that the observables with the highest importances have distributions across the GC sample that lead to the most distinct separation of in-situ and accreted objects.
Figure 10: Predictive accuracy of the classifier on the simulated GC test set as a function of host galaxy properties. Left: accuracy across each galaxy versus galaxy mass and accreted GC fraction (top) and galaxy [Fe/H] (bottom). Right: same coloured by balanced accuracy. The accuracy tends to be lower in low-mass galaxies with high accreted fractions (the outliers in that mass range). The balanced accuracy in massive ellipticals is low because of the low performance of the model when predicting the origin of in-situ GCs (the minority class) as a result of the small number of training galaxies. The accuracy depends only weakly on galaxy metallicity.
To understand why the classifier performs poorly in massive elliptical galaxies, Figure 14 shows the distribution of GC observables for galaxies with \(M_{*}>10^{11}\) M\({}_{\odot}\). The buildup of elliptical galaxies is dominated by massive satellites that contribute GCs with similar chemical abundances to the main progenitor GCs, and violent relaxation further mixes the two populations in phase space. The GC observables of in-situ and accreted populations entirely overlap, and this partly explains why the model cannot discriminate between the two classes from these data.
### Estimating uncertainty in the model predictions
Ideally we would like to predict not only the origin of each GC in an external galaxy, but also to have an idea of the uncertainty in the prediction. To estimate this predictive uncertainty we formulate a new problem: can we predict the accuracy of the model across a galaxy using only the observed properties of the galaxy? This would provide an estimate of how much the predictions for a given observed galaxy can be trusted. We explored a variety of regression algorithms including a Multilayer Perceptron with a linear activation function for the output layer (Rumelhart et al., 1986), a Random Forest (Breiman, 2001), and a Ridge Regressor (Hoerl & Kennard, 1970). Each model was trained on all the galaxy features listed in Table 2, in addition to the features describing the distribution of GC galactocentric radii and LOS velocities in each galaxy (their mean, inter-quartile range, skewness, and kurtosis).
Figure 11: Projected distributions of GCs in the test sample and their predicted origin compared to their true origin. Each row shows the GCs hosted by selected galaxies (dots) in the random test set in each of four representative stellar mass bins, ranging from massive ellipticals (top row) to low-mass dwarfs (bottom row). The left column shows the true origin, while the middle and right columns show the predicted labels and probabilities of in-situ origin \(P_{\rm in-situ}\), respectively. GCs with ambiguous classifications are shown in white in the middle column. The stellar mass, GC in-situ fraction, accuracy, balanced accuracy, and fraction of ambiguous predictions are indicated in each row. The stellar surface density is shown in grey-scale. The model produces high accuracy predictions for low-mass galaxies but has difficulty identifying in-situ GCs in massive galaxies with high accreted GC fractions due to their rarity in the training set.
Perhaps unsurprisingly, we find that none of these algorithms can predict the accuracy of the fiducial classifier. To predict the galaxy-wide accuracy, the models would need to know the true GC origin labels, and this is precisely the information we lack for real galaxies. Deep learning offers a possible solution: the output of MLP classifiers is a set of class membership probabilities. We may therefore exploit the correlation that was found between the label probabilities \(P_{\rm in-situ}\) and the full sample accuracy in Fig. 6 to predict the model uncertainty. We define the 'confidence' of the model predictions for a galaxy by how close on average the predicted probabilities get to complete certainty,
\[{\rm mean\ confidence}=\frac{1}{N_{\rm GC}}\sum_{i=1}^{N_{\rm GC}}{\rm max} \left(P^{i}_{\rm in-situ},P^{i}_{\rm accreted}\right). \tag{3}\]
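Equation (3) and the decision rule of Fig. 7 translate directly into code; the sketch below assumes a two-column array of predicted class probabilities (with the in-situ probability in the first column, a convention we adopt here for illustration) as returned by, e.g., `predict_proba` in scikit-learn or `model.predict` in Keras.

```python
import numpy as np

def mean_confidence(proba):
    """Mean of max(P_in-situ, P_accreted) over the GCs of a galaxy (Eq. 3)."""
    return float(np.mean(np.max(proba, axis=1)))

def label_with_threshold(proba, p_thresh=0.79):
    """Assign 'in-situ', 'accreted' or 'ambiguous' to each GC."""
    p_insitu = proba[:, 0]
    return np.where(p_insitu > p_thresh, "in-situ",
                    np.where(1.0 - p_insitu > p_thresh, "accreted", "ambiguous"))
```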
We examine the relation between the galaxy-wide accuracy and mean prediction confidence using the simulation test set in Figure 15. To calculate the mean confidence we use all the GC predictions, including those with \(P<P_{\rm thresh}\). Despite the large scatter, there is a highly significant correlation (\(p=3\times 10^{-9}\)) between mean prediction confidence and accuracy. The median accuracy increases from \(\sim 0.8\) to \(\sim 1.0\) as the mean confidence increases from \(\sim 0.70\) to \(\sim 0.95\). This shows that the neural network successfully learned which regions of the high-dimensional feature space contain both in-situ and accreted GCs, and therefore lead to ambiguous predictions. We can then use the distribution of galaxy-wide accuracy in Fig. 15 to estimate the probability that the classifier will reach a given desired accuracy in a real galaxy. For instance, we expect that the classifier will be more than 90 per cent accurate in 3 out of 4 galaxies that reach a mean confidence \(\sim 0.85\). The dashed line in Fig. 15 shows a linear fit to the data with the parameters provided in the legend.
Figure 12: Projected position-velocity distributions of GCs in the test sample and their predicted origin compared to their true origin. The rows show the LOS velocity versus projected galactocentric radius for the randomly selected simulated galaxies in Fig. 11. The left column shows the true origin, while the middle and right columns show the predicted labels and probabilities of in-situ origin \(P_{\rm in-situ}\), respectively. GCs with ambiguous classifications are shown in grey in the middle column. The stellar mass, GC in-situ fraction, accuracy, balanced accuracy, and fraction of ambiguous predictions are indicated in each row. The decision boundary is clear in the right panels for massive galaxies, and highlights the predictive power of the ‘projected angular momentum’ \(R_{\rm p}|V_{\rm p}|\).
### Testing the model on the Milky Way GCs
Simulations are rough simplifications of the real Universe. As such, they may or may not capture the physical processes linking GC formation to their observable properties. With any supervised deep learning model trained on simulation data the question therefore arises: does the complex relationship between the features and target variable learned by the model resemble the actual relation in the real Universe? In other words, does the performance of the model using real data match the performance on the simulated test data? To answer this question we now perform a first, real-world test of the ANN classifier using data for the Milky Way GCs.
For this test we use the detailed data on the MW GC system that has been compiled over several decades, together with the progenitor associations determined recently using _Gaia_ orbital information, chemical abundances, and ages (Massari et al., 2019; Kruijssen et al., 2020). We use the compilation of GC metallicity data from Harris (1996, 2010 edition), and the 3D positions and velocities compiled by Baumgardt et al. (2019) from a combination of _HST_ and _Gaia_ data. To extend the applicability of the model to surveys that do not include the most difficult-to-obtain GC observables, we build a new 'minimal' ANN classifier using a reduced feature set (by removing the alpha-element abundances and velocity dispersions), and train it using the fiducial simulation training set.
Figure 14: Joint and marginal distributions of GC origin across the observables with the most predictive power for GCs hosted by massive ellipticals. The panels show the distribution of in-situ and accreted GCs across simulated galaxies with \(M_{\star}>10^{11}\) M\({}_{\odot}\). The overlap of the two classes across all the observables partly explains the underperformance of the classifier in the most massive galaxies.
Figure 13: Permutation importance of each of the input features (i.e. observables) of the classifier. The value for each feature corresponds to the decrease in accuracy when the feature data in the test set is randomly shuffled before making predictions. Left: using all features. Right: After removing highly covariant features and retraining the model with only independent ones (see Sec. 5.3 for details). The black lines show the standard deviation in the result over 30 random iterations. The projected galaxy effective radius, stellar mass, and alpha-element abundance are the most predictive host galaxy properties. The most predictive GC observables are GC metallicity and alpha-abundance relative to the host galaxy, and projected angular momentum \(R_{\rm p}|V_{\rm p}|\) and relative projected radius \(R_{\rm p}/R_{\rm e}\).
Figure 15: Accuracy of the ANN classifier as a function of the mean confidence in the predictions across each galaxy in the test set. The black line shows the binned median, and the dark and light shading contain the top 75 and 95 per cent of the accuracy distribution in each bin. The dashed line shows a linear fit, with parameters given in the legend. Here we define confidence as the maximum of the predicted class probabilities for each GC, max (\(P_{\rm in-situ}\), \(P_{\rm accreted}\)). There is a highly significant correlation between mean prediction confidence and accuracy. About 75 per cent of simulated galaxies with a mean prediction confidence \(\sim 0.85\) reach at least 90 per cent accuracy.
As we show below, this retraining procedure achieves a better performance than simply neglecting these features in the fiducial model (where the loss of accuracy would be \(>5\) per cent, see Fig. 13). Table 3 summarizes the features of the minimal classifier.
For the global properties of the Galaxy we assume \(M_{*}^{\rm gal}=5\times 10^{10}\) M\({}_{\odot}\), \(R_{\rm e}^{\rm gal}=3.8\) kpc (Cautun et al., 2020), and \(\left[{\rm Fe}/{\rm H}\right]_{\rm gal}=0.0\)(Bland-Hawthorn & Gerhard, 2016). We apply the same metallicity selection used for the simulation to the MW GCs (see Table 1), without imposing a GC mass cut (since this was only used to remove artefacts in the simulation). This results in a sample of 129 GCs with \(-2.5<\left[{\rm Fe}/{\rm H}\right]<-0.5\). For the true origin labels we use the classification by Massari et al. (2019) (as revised by Kruijssen et al. 2020 for Pal 1 and NGC 6441) based on the GC ages, metallicities, and orbits. To obtain the projected positions and LOS velocities we artificially incline the plane of the Galaxy by an angle \(i\) deg (around the x-axis) towards the observer.
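The artificial inclination of the Galactic plane can be mimicked with a simple rotation of the 3D positions and velocities about the x-axis; the sketch below is our own construction (the choice of the rotated z-axis as the line of sight is an assumed convention) and only illustrates how \(R_{\rm p}\) and the LOS velocity can be obtained for a given inclination.

```python
import numpy as np

def project_gcs(xyz, vxyz, inclination_deg):
    """Incline the Galactic plane about the x-axis and project on the sky.

    xyz, vxyz : (N, 3) galactocentric positions [kpc] and velocities [km/s]
    Returns the projected radius R_p [kpc] and line-of-sight velocity V_p [km/s].
    """
    i = np.radians(inclination_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(i), -np.sin(i)],
                    [0.0, np.sin(i),  np.cos(i)]])
    pos = xyz @ rot.T
    vel = vxyz @ rot.T
    r_p = np.hypot(pos[:, 0], pos[:, 1])   # projected galactocentric radius
    v_p = vel[:, 2]                        # component along the line of sight
    return r_p, v_p
```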
As in the case of the fiducial model, we optimize the architecture using a grid search for the combination \(\left[N_{\rm layers},N_{\rm nodes}\right]\) that yields the highest accuracy on the simulation test set. For a decision threshold \(P_{\rm thresh}=0.5\), the resulting network achieves an accuracy of \(\sim 78\) per cent on the test data (using \(N_{\rm layers}=2\) and \(N_{\rm nodes}=50\)). This corresponds to a decrease of \(\sim 2\) per cent compared to the fiducial model. During testing of the minimal model we found that the accuracy of the MW predictions varies significantly across identically trained models (with a dispersion of \(\approx 3\) per cent) as a result of the inherent stochasticity in the ANN training process5. This stochasticity is averaged out when considering the large simulated GC test sample, but becomes more important when evaluating the predictions for the small set of GCs in the MW system (see Appendix C).
Footnote 5: This is a well known trade-off of the computational efficiency necessary for estimating the gradient of the loss function in a high-dimensional feature space.
To reduce the variance in the MW predictions we create an ensemble of 5000 models trained on identical simulation data, and vary the network complexity by sampling uniformly from the grid of \(\left[N_{\rm nodes},N_{\rm layers}\right]\) described in Sec. 4.2. We then test each model on three different samples: the full simulation test set, the subset of 24 \(L^{*}\) galaxies (i.e. \(10^{10}\leq M_{*}/\) M\({}_{\odot}\leq 10^{11}\)) in the simulation test set, and the projected MW GC system (at random inclinations sampled uniformly from the range \(0\leq\cos i\leq 1\)). The results are shown in Figure 16 as a function of the performance on the \(L^{*}\) galaxy test set. We find a statistically significant correlation between the performance on the \(L^{*}\) simulations and on the real Milky Way, and a weaker correlation with the full test set. This indicates that models with above-average performance on the simulations will in general also produce more accurate predictions on real galaxies. In other words, the best models tend to have the best generalization capacity, and this would only be true if the simulation captures the physical processes responsible for the formation and evolution of the Galaxy.
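The train-many-and-select strategy can be sketched as follows; for brevity this illustration uses scikit-learn's `MLPClassifier` instead of the Keras models of the paper, a much smaller ensemble, and placeholder data arrays, architecture grid, and \(L^{\star}\) mask.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def train_and_select(X_train, y_train, X_test, y_test, lstar_mask,
                     n_models=50, seed=0):
    """Train classifiers with random architectures and rank them by their
    accuracy on the L* subset of the test set."""
    rng = np.random.default_rng(seed)
    grid_nodes, grid_layers = [16, 32, 50, 64], [1, 2, 3]
    results = []
    for _ in range(n_models):
        layers = tuple([int(rng.choice(grid_nodes))] * int(rng.choice(grid_layers)))
        clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500,
                            random_state=int(rng.integers(1_000_000)))
        clf.fit(X_train, y_train)
        acc_lstar = accuracy_score(y_test[lstar_mask], clf.predict(X_test[lstar_mask]))
        results.append((acc_lstar, clf))
    results.sort(key=lambda pair: pair[0], reverse=True)
    return results[0][1], results   # best model plus the ranked ensemble
```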
The performance of the model ensemble on each sample as a function of the accuracy threshold is shown in the right panel of Figure 16. In addition to the MW, we show the accuracy on the two main progenitors, _Kraken_ and _Gaia-Enceladus_. The predictive accuracy of the ensemble increases with the threshold for the \(L^{*}\) test sample, the MW, and its two main progenitors (and increases slightly for the full test set).
To visualize the predictions, the first column of Figure 17 shows the position-velocity diagram of the projected MW GCs coloured by their true origin, where each row corresponds to a different viewing angle. The other two columns show the origin labels (middle) and probabilities (right) predicted by the minimal classifier. To obtain the predictions we selected the model with the highest performance on the \(L^{*}\) test set, and further optimized its performance by tuning \(P_{\rm thresh}\) to achieve a high accuracy and low ambiguous fraction (see Appendix C for details). Using only a single model from the ensemble may increase stochasticity (i.e. noise) in the results, but we checked explicitly that this is not the case when comparing to a voting ensemble of the 100 best models. Figure 16 shows that selecting the model with the best performance on the \(L^{*}\) simulation test set guarantees a high accuracy on the MW system (blue line), without sacrificing the performance (i.e. due to overfitting) across the broad galaxy population (gray line).
The best-performing model predicts the origin of up to \(\sim 9/10\) of the MW GCs unambiguously with an accuracy of \(85-87\) per cent overall, and \(\geq 80\) and 100 per cent for the _Kraken_ and _Gaia-Enceladus_ GCs respectively (adopting \(P_{\rm thresh}=0.52\)). Increasing the decision threshold to \(P_{\rm thresh}=0.6\) improves the MW accuracy to 90 per cent at the cost of raising the ambiguous fraction to 0.4 (see Appendix C). For the baseline value \(P_{\rm thresh}=0.5\) the performance is comparable to the accuracy obtained on the full test set drawn from the simulations (grey line in the right panel of Fig. 16), and on the 24 simulated galaxies with masses \(10^{10}<M_{*}<10^{11}\) M\({}_{\odot}\) (x-axis of right panel of Fig. 16). The excellent performance on the MW GCs implies that the simulation training data accurately follows the physical processes that shape the observed properties of in-situ and accreted GC populations and their host galaxies in the real Universe, and that the ANN effectively learned this relation.
The first column of Figure 17 also indicates the known galactic progenitors associated with each GC (from Kruijssen et al. 2020) using different symbols. Out of the five known progenitors that contributed accreted GCs, only the debris of _Kraken_ is located in the inner Galaxy, at galactocentric distances \(r\lesssim 10\) kpc. This could potentially make the classification more challenging for the model, since it relies partly on the projected galactocentric distance (see Section 5.3). Despite this, we find that the model correctly identifies as accreted 8\(-\)10 out of the 13 known _Kraken_ GCs (shown as squares), in addition to all the _Gaia-Enceladus_ GCs, and at least a few GCs in each of the other three progenitors.
The success of the deep learning classifier in identifying debris from all the known MW progenitors has important implications for the observational reconstruction of the assembly histories of other galaxies, where only limited GC phase-space information is available. The accurate identification of accreted GCs by the model in this test shows that there is enough archaeological information in extragalactic GC observables to partially reconstruct the merger trees of galaxies in large surveys. We will investigate this intriguing possibility in future work.
### Impact of uncertainties in observational data
The simulated observables used in training and evaluating the model so far assume measurements with perfect precision. Some GC and galaxy observables can include large uncertainties that arise either from the quality of the data, or from the methods used to infer the physical property from either the photometry or the spectra. Here we perform an analysis of the effect of uncertainties on the predictions to understand the sensitivity of the model, and to provide
benchmarks for the expected behaviour of the model for given values of the uncertainties.
For this, we perform a Monte Carlo experiment. We first inject random noise following a normal distribution with the width given by the relative uncertainty in each of the GC observables in the test set, \(\left[{\rm Fe/H}\right]\), \(\left[\alpha/{\rm Fe}\right]\), \(R_{\rm p}\), and \(V_{\rm p}\), in addition to the host galaxy properties \(\log\,M_{\rm s}^{\rm gal}\) and \(\log\,R_{\rm e}^{\rm gal}\). The position and velocity errors are calculated relative to the effective radius and velocity dispersion of the galaxy, respectively. We then use the fiducial ANN classifier (trained on the unperturbed data) to obtain predictions for uncertainties in the range 0.0\(-\)0.5, equivalent to relative errors of up to a factor of 3 in stellar mass and effective radius, and absolute errors of up to 50 per cent of the effective radius and velocity dispersion for \(R_{\rm p}\) and \(V_{\rm p}\), respectively.
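A compact version of this Monte Carlo experiment is sketched below; it assumes the features are already in logarithmic or normalized units (so that `sigma` plays the role of the relative uncertainty), and the model and array names are placeholders rather than the paper's code.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_vs_uncertainty(model, X_test, y_test, feature_idx,
                            sigmas=(0.0, 0.05, 0.1, 0.2, 0.5), n_mc=20, seed=0):
    """Accuracy after injecting Gaussian noise of width sigma into one feature."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in sigmas:
        accs = []
        for _ in range(n_mc):
            X_noisy = X_test.copy()
            X_noisy[:, feature_idx] += rng.normal(0.0, sigma, size=len(X_noisy))
            accs.append(accuracy_score(y_test, model.predict(X_noisy)))
        results[sigma] = float(np.mean(accs))
    return results
```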
The resulting accuracy as a function of the relative observational uncertainty in each feature is shown in Figure 18 for each of the GC observables used in the fiducial feature set (see Table 2). The uncertainty in the alpha-element abundances dominates the prediction errors, but still only produces a slight \(\la\)1 per cent decrease in accuracy for errors as large as 0.2 dex. The model is robust to large individual uncertainties in the GC projected galactocentric radius, LOS velocity, and metallicity. Including uncertainties in all the observables results in a \(\sim 1.5\) per cent drop in accuracy for relative errors \(\sim 0.15\). The observational uncertainties in distances and LOS velocities of extragalactic GCs are typically smaller. Distances of galaxies within \(\sim 40\) Mpc can be determined to \(\sim 10\) per cent precision (e.g. Tonry et al., 2001; Blakeslee et al., 2009), and velocities to a precision \(\la 15\) km s\({}^{-1}\)(e.g. Forbes et al., 2017), or about \(\sim 12\) per cent of the MW velocity dispersion. Uncertainties in metallicity determinations are larger, \(\sim 0.15\) dex (e.g. Caldwell & Romanowsky, 2016), but still in the range where they would have a minimal effect on the accuracy of the model predictions. This analysis indicates that the uncertainties in the effective radius and the GC alpha abundances will be the dominant observational sources of error in the model's predictions.
### Including GC ages to improve performance
Due to limitations in the modelling of integrated spectra, GC ages are notoriously difficult to constrain beyond the Local Group (Worthey, 1994). However, recent studies suggest that the precision of extragalactic GC ages can be improved significantly, reaching \(\la 0.1\) dex relative uncertainties (Usher et al., 2019; Cabrera-Ziri & Conroy, 2022). High precision GC ages in the local Universe could therefore be within reach for wide spectroscopic surveys over the next decade. In this section we test the effect of including the ages of the simulated GCs in training the ANN classifier.
Figure 16: Performance of an ensemble of minimal classifiers on the simulated test set and on the MW GCs. Left: correlation between the accuracy of each model (points) on the MW GCs and on the 24 simulated \(L^{\star}\) galaxies in the test set (and its Spearman coefficient and \(p\)-value), with the colour indicating the accuracy on the full test set, and the contours showing a kernel density estimate of the underlying distribution. Right: accuracy of a voting ensemble as a function of the minimum \(L^{\star}\) galaxy test set accuracy used in the selection. The ensemble uses 5000 models trained on identical simulation data (and model architectures sampled from a grid of \(\left[N_{\rm nodes},N_{\rm layers}\right]\)). To obtain the MW accuracy the models are tested on randomly inclined MW GC system observables. We exploit the strong correlation between test set accuracy and MW accuracy to select a model with the highest performance on both simulated and observed galaxies.
| Feature | Object | Definition |
| --- | --- | --- |
| \(\log M_{*}^{\rm gal}\) | galaxy | stellar mass |
| \(\log R_{\rm e}^{\rm gal}\) | galaxy | projected effective radius |
| \(\left[{\rm Fe/H}\right]_{\rm gal}\) | galaxy | mean metallicity |
| \(\left[{\rm Fe/H}\right]\) | GC | metallicity |
| \(\Delta\left[{\rm Fe/H}\right]\) | GC/galaxy | metallicity relative to the galaxy, \(\left[{\rm Fe/H}\right]-\left[{\rm Fe/H}\right]_{\rm gal}\) |
| \(\log R_{\rm p}/R_{\rm e}^{\rm gal}\) | GC/galaxy | projected distance from galaxy centre in units of the galaxy effective radius |
| \(\log R_{\rm p}\lvert V_{\rm p}\rvert\) | GC | ‘projected angular momentum’: product of projected galactocentric distance and LOS velocity |

Table 3: GC and host galaxy observables used as features in the ‘minimal’ classifier. Projected positions and LOS velocities are calculated with respect to the position and velocity of the centre of the galaxy, assuming a single random orientation for each galaxy.
For this we add the precise GC age as an additional feature, and then evaluate the performance of the model on the test data from the simulation. We then run a Monte Carlo experiment to add random log-normal noise to the ages in the test data, and calculate the accuracy as a function of the uncertainty in the ages as well as in each of the other observables. As in the fiducial model, to remove ambiguous results we assume a decision threshold that predicts GC origin for \(\sim 60\) per cent of the test sample, \(P_{\rm thresh}=0.83\).
The impact of including GC ages on the predictions is shown in Figure 19. Including ages increases the accuracy of the model with no uncertainties from \(\sim 89\) to \(\sim 93\) per cent. Relative uncertainties up to \(\sim 0.2\) in the GC metallicities, alpha-abundances, positions and velocities have almost no effect on the accuracy in this model. However, the classifier performance drops significantly when the precision of the ages is worse than \(\sim 0.1\) dex. This demonstrates the importance of the GC ages compared to all the other observables in shaping the model predictions. For reference, recent advances in stellar population modelling make it possible to achieve this level of precision in the age determination of clusters (see Cabrera-Ziri & Conroy 2022). For ages with a precision of \(\la 0.1\) dex (or about 25 per cent), the classifier reaches an accuracy of \(>92\) per cent, suggesting that the current limiting precision of GC ages is already high enough to significantly improve the performance of the ANN model.
## 6 Discussion
The GC observables we select in this work have been found to be good indicators of GC origin in previous studies. Hughes et al. (2019) found that the alpha-element abundance of recently accreted GCs is systematically lower at fixed [Fe/H] relative to in-situ GCs. Kruijssen et al. (2019) showed that at a fixed age, the metallicity of GCs traces the metallicity of their galactic progenitor, and therefore GC age-metallicity relations can be used to reconstruct the assembly history of the Galaxy.
Figure 17: Predictions for the origin of the Milky Way GCs as a function of line-of-sight velocity and projected galactocentric distance. Each row shows the results assuming that the MW is observed at a different inclination, as indicated in the legend. Left: the true origin is indicated using different symbols for each progenitor galaxy, with colours indicating in-situ and accreted GCs. Middle: predicted in-situ and accreted labels are indicated using colour, with ambiguous classifications shown in grey (and symbols corresponding to each progenitor). Right: predicted probability of in-situ origin \(P_{\rm in-situ}\). The accuracy for the entire GC system and for each of the two major progenitors is indicated in the middle panels. The performance of the minimal ANN classifier is robust to the assumed inclination of the Galaxy, and the model successfully identifies GCs in each of the five known progenitors, including at least 80 per cent of the GCs associated with _Kraken_ (squares), the progenitor debris located closest to the centre of the Milky Way, and all of the _Gaia-Enceladus_ GCs.
Pfeffer et al. (2020) and Kruijssen et al. (2020) showed that adding 3D orbital information to the ages and metallicities allows the recovery of the masses and accretion redshifts of each progenitor.
We have extensively explored the space of GC and galaxy properties to use as features for reconstructing GC origin. In addition to our manual exploration, our choice of classifier model architecture (a densely layered neural network) is meant to take advantage of the ability of these networks to capture highly non-linear relationships between the features and the output. It is therefore unlikely that we excluded a feature in the simulations that would dramatically improve the performance of the classifier. More sophisticated simulations that track the individual abundances of many isotopes (e.g. Reina-Campos et al., 2022) may capture additional information that could improve the predictions.
Another more subtle issue that arises in this type of machine learning problem is the completeness of the training set. Due to the steepness of the galaxy stellar mass function, our volume-limited simulated galaxy sample is dominated by low-mass galaxies and contains only a handful of massive elliptical galaxies. While this provides an unbiased representation of the galaxy population, it is not ideal for supervised learning. As shown in Section 5.1, the classifier has difficulties capturing the relation between GC observables and their origin in the most massive galaxies partly due to the small size of the galaxy training sample, which only includes 4 galaxies with \(M_{*}>10^{11}\) M\({}_{\odot}\) compared to 86 MW-mass (\(10^{10}<M_{*}/\) M\({}_{\odot}<10^{11}\)) galaxies and 273 dwarfs in the mass range of the Magellanic Clouds (\(4\times 10^{8}<M_{*}/\) M\({}_{\odot}<3\times 10^{9}\)). Similarly, our results indicate that the loss of information when the phase-space distribution of the GC systems is observed in projection is one of the dominant limiting factors in the performance of our model, compared to one that takes as input the full 6D information. We have explicitly tested this hypothesis using the 'data augmentation' technique. This was done by retraining the model using an extended training set that includes three orthogonal projections of the simulation box, instead of the single projection used for the fiducial model. This procedure effectively yields a three times larger training set. There was no significant improvement in the predictive accuracy, which suggests that the method is limited only by the lack of depth information, and not by the number of galaxies (or projections) in the training set. A further limitation of the model presented here is that the selection of the training sample implies that the results may apply only to central galaxies. Achieving a similarly good performance on satellites will likely require training specifically with satellite galaxies because their evolution is more sensitive to environmental processes.
Lastly, there is the possibility that the simulation used for training the algorithm does not capture certain aspects of the formation and evolution of galaxies and GCs, and this is the most difficult aspect of the uncertainties to quantify. As described in Sec. 2.1, E-MOSAICS reproduces many properties of observed galaxies and GCs. However, the EAGLE model produces \(L^{*}\) galaxies with stellar masses that are \(\sim 0.1\)-\(0.2\) dex below observations (Schaye et al., 2015). Furthermore, the lack of a cold interstellar medium in EAGLE results in the artificial survival of too many young, metal-rich clusters that should have otherwise disrupted (for a detailed discussion, see Pfeffer et al., 2018 and Kruijssen et al., 2019). While the first problem is difficult to correct for in the training of the ANN, Fig. 18 suggests that the predictions are robust to large errors in the stellar mass. We have also attempted to remove the underdisrupted GCs from the training and test samples.
Figure 19: Accuracy of an ANN classifier that includes GC ages in addition to all the features of the fiducial model. Each line shows the accuracy as a function of the relative error in the GC observables: metallicity, alpha abundances, projected position, and line-of-sight velocity. As in Fig. 18, the uncertainties in GC velocities are expressed as a fraction of the galaxy velocity dispersion, and for all other observables (including ages) in logarithmic units. The bottom line shows the combined effect of uncertainties in all the observables. The values are obtained by adding normally distributed Monte Carlo errors to the test set drawn from the simulations. Including GC ages significantly improves the accuracy of the predictions for uncertainties \(\lesssim\) 0.1 dex, but makes the model very sensitive to the precision of the age measurements.
Figure 18: Impact of observational uncertainties on the accuracy of the GC origin predictions. Each line shows the accuracy as a function of the relative error in each of the GC observables: metallicity, alpha abundance, projected position, and line-of-sight velocity, in addition to the galaxy stellar mass and effective radius. The bottom line shows the effect of uncertainties in all the observables combined. For \(V_{\rm p}\) the \(x\)-axis represents fractional uncertainty with respect to the velocity dispersion of the host galaxy, while for all other quantities it represents order of magnitude uncertainties (i.e. in dex). The values are obtained by adding normally-distributed Monte Carlo errors to the test set drawn from the simulations. The accuracy is robust to relative uncertainties as large as \(\sim 0.1\) in the GC observables (see Section 5.6 for the interpretation for each observable). The performance of the classifier is most sensitive to the precision of the alpha abundances and galaxy effective radius.
A new generation of simulations with better modelling of \(L^{*}\) galaxies and improved ISM physics will be needed to extend the origin predictions to metal-rich GCs with \(\left[{\rm Fe/H}\right]>-0.5\) (see Reina-Campos et al., 2022), and will likely improve the identification of in-situ objects (see Sec. 5.1).
The reconstruction of the Milky Way assembly history using _Gaia_ and other spectroscopic surveys demonstrates that chemodynamical observations are a powerful tool. Thanks to these studies, the origin of the stellar halo of the MW has now been determined as a function of galactocentric radius (Naidu et al., 2020). This and other detailed observations like the radial profile of galactic components of different origin could become excellent tools to constrain cosmological hydrodynamical simulations. Simulations have already reached enough sophistication to reproduce many global galaxy observables, but still suffer from highly degenerate input physics, which limits their predictive power (for a review, see Naab & Ostriker, 2017). The deep learning approach we demonstrate in this paper could in principle be extended to constrain the spatial distribution of in-situ and accreted stars and GCs in galaxy samples of up to millions of objects in the local Universe. Classifiers trained using observables that are independent of specific highly uncertain physical processes (i.e. stellar feedback) could determine the spatial distribution of in-situ and accreted material across the galaxy population. By comparing these constraints to the output of state-of-the-art cosmological simulations, their built-in hypotheses regarding the physics of star formation and feedback could be tested. Similar methods could be employed to constrain the physics of the DM particle using galaxy surveys.
## 7 Conclusions
In this work we use nearly a thousand simulated galaxies and their GC systems in the E-MOSAICS (34.4 Mpc)\({}^{3}\) periodic volume to understand how the present day GC observables (e.g. metallicity, alpha abundances, projected distance and velocity) can be used to infer the origin of specific GCs (i.e. in-situ vs. accreted). We first investigate how galaxy properties including halo mass and metallicity influence the fraction of GCs that are accreted from satellites across the galaxy mass spectrum, from dwarfs to giant ellipticals. In the second part we use supervised deep learning algorithms to model and understand the relation between GC observables in external galaxies and their in-situ or accreted origin. For this we exploit the success of the E-MOSAICS cluster formation and evolution physics in reproducing the observed properties of GCs in the local Universe. We train a Multilayer Perceptron artificial neural network on the mapping between 17 GC and host galaxy observable features (see Table 2), and their true origin labels (i.e. in-situ versus accreted). We test the performance of the classifier on an independent random subset comprised of \(\sim 20\) per cent of the simulated galaxies, and use the known origin of the Milky Way GCs to benchmark the model for application on extragalactic GC systems. We investigate the importance of each observable for determining the predictions of the classifier, and the effect that uncertainties in the observations have on the accuracy of the predictions. Finally, we explore the benefits of including GC ages.
Our conclusions are summarised as follows:
1. The balance of in-situ formation and accretion of GCs is strongly shaped by galaxy mass, in a similar way as for the field stars. The median accreted fraction of GCs increases with mass, such that dwarf galaxies are typically dominated by in-situ GCs, and massive ellipticals contain mostly accreted GCs (Fig. 2). Despite the large scatter in accreted GC fractions across the simulated galaxies, we find a weak trend with halo mass: at fixed stellar mass, galaxies in more massive haloes host larger fractions of accreted GCs (Fig. 2). Metal-poor galaxies also tend to have larger accreted GC fractions due to a larger contribution of relatively metal-poor satellites to their assembly, and the late formation of their DM haloes (Fig. 3).
2. There is a strong dependence of GC origin on GC metallicity. Metal-poor GCs are typically a mix of in-situ and accreted objects, whereas the origin of metal-rich GCs depends on stellar mass: in low-mass galaxies (with \(M_{*}<10^{10}\) M\({}_{\odot}\)) they are almost entirely formed in-situ, and in galaxies more massive than the Milky Way they are mostly accreted (Figs. 4 and 5).
3. A Multilayer Perceptron artificial neural network classifier trained on the observable properties of more than 50,000 GCs hosted by 736 simulated galaxies predicts the in-situ/accreted origin of GCs in a test sample drawn from the same simulation with an overall accuracy of \(\sim 89\) per cent for objects with unambiguous labels (with a completeness of 60 per cent; Sec. 4.2 and 5.1). The classifier is excellent at identifying accreted GCs (6 per cent false-positive rate), and less accurate for in-situ GCs (18 per cent false-positive rate) (Fig. 8). The model performs generally well in low-mass galaxies (below the mass of the MW), but has more difficulty identifying in-situ GCs in the most massive galaxies (Figs. 10 and 11). This is likely due to the similarity of the observables of in-situ and accreted populations in massive galaxies (Fig. 14), their low fraction of in-situ GCs, the small number of these galaxies in the simulated volume (\(\sim 6\)), and the exclusion of GCs with \(\left[{\rm Fe/H}\right]>-0.5\) from the sample.
4. The classifier uses only a few dominant observables to predict GC origin. These include the effective radius, stellar mass, and alpha-element abundance of the host galaxy, together with the GC metallicity and alpha-abundance relative to the galaxy, and its projected angular momentum and galactocentric radius (Fig. 13). The high predictive importance of the galaxy effective radius seems to originate from its correlation with the assembly timescale of the galaxy and its effect on the GC accreted fraction (see Sec. 3). Simulated galaxies with larger effective radii formed later and in more massive DM haloes with larger accreted fractions.
5. Using the simulated test data, we find a significant correlation between the mean prediction confidence (an output of the ANN classifier) and the accuracy for each galaxy. This allows us to estimate the likelihood that predictions for GC origin in a real galaxy will reach a minimum desired accuracy (Fig. 15).
6. After removing observables that are either unimportant or difficult to obtain, we test a minimal version of the classifier on the Milky Way GCs with known origin. Assuming that the Galaxy is observed in projection, the optimized model achieves excellent performance, with an accuracy of \(\sim 85-90\) per cent that is nearly independent of the inclination. The model identifies GCs associated to each of the five known GC-rich progenitor galaxies, including most of the GCs accreted from _Kraken_, and all of the _Gaia-Enceladus_ GCs.
7. The classifier is robust to relatively large uncertainties in the observational data (i.e. larger than in currently available extragalactic data). Relative uncertainties in the GC metallicity, alpha-abundances, projected distance, and line-of-sight velocity of up to \(\sim 0.1\) decrease the accuracy on the test data by less than 1 per cent. The dominant effect is from the uncertainty in the alpha-abundances and galaxy effective radii (Fig. 18).
8. Including GC ages as an additional feature in the model significantly increases the performance, with an accuracy on the simulation test data of \(>92\) per cent. However, a precision of \(<0.1\) dex (or \(\sim 25\) per cent) is required in the age measurements. Ages with uncertainties larger than 0.1 dex produce a steep decrease in performance compared to the fiducial model (Fig. 19).
The ANN classifier developed in this work can be readily used to make predictions for the origin of GCs in nearby galaxies for which metallicity, alpha-abundance, positions, and radial velocities have been measured. Over the next decade, wide-field space-based surveys will allow these data to be collected for very large samples of galaxies. The model developed in this work is the initial step in piecing together the assembly histories of galaxies beyond the Milky Way as a function of mass and environment, leading to a detailed understanding of the process of galaxy formation. In future work we will explore efficient methods to constrain galaxy merger histories using GC observables. In a follow-up paper we apply the model to predict the origin of the GCs in M31 (Trujillo-Gomez et al. in prep.).
The python implementation of the fiducial and minimal classifiers in Keras, along with an example of their use in an interactive Jupyter notebook is available at [https://github.com/sebastian-tg/GC-origin-ANNclassifier](https://github.com/sebastian-tg/GC-origin-ANNclassifier).
## Acknowledgements
STG gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 ("The Milky Way System", subproject A08). STG and JMDK gratefully acknowledge funding from the European Research Council (ERC-StG-714907, MUSTANG). JMDK gratefully acknowledges funding from the German Research Foundation (DFG - Emmy Noether Research Group KR4801/1-1). COOL Research DAO is a Decentralised Autonomous Organisation supporting research in astrophysics aimed at uncovering our cosmic origins. MRC gratefully acknowledges the Canadian Institute for Theoretical Astrophysics (CITA) National Fellowship for partial support. JP is supported by the Australian government through the Australian Research Council's Discovery Projects funding scheme (DP200102574). RAC is supported by the Royal Society. NB gratefully acknowledges financial support from the European Research Council (ERC-CoG-646928, Multi-Pop) as well as from the Royal Society (University Research Fellowship). This study was supported by the Klaus Tschira Foundation. This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. The work also made use of high performance computing facilities at Liverpool John Moores University, partly funded by the Royal Society and LJMU's Faculty of Engineering and Technology.
This work made use of the software packages: Numpy (Oliphant, 2006), Scipy (Virtanen et al., 2019), Matplotlib (Hunter, 2007), Pandas (McKinney, 2010), Seaborn (Waskom, 2021), Jupyter (Kluyver et al., 2016), Pynbody (Pontzen et al., 2013), Scikit-learn (Pedregosa et al., 2011), Tensorflow (Abadi et al., 2015), and Keras (Chollet et al., 2015).
## Data Availability
The data underlying this article will be made available upon reasonable request to the corresponding author.
|
2308.11268 | Orthogonal Constant-Amplitude Sequence Families for System Parameter
Identification in Spectrally Compact OFDM | In rectangularly-pulsed orthogonal frequency division multiplexing (OFDM)
systems, constant-amplitude (CA) sequences are desirable to construct
preamble/pilot waveforms to facilitate system parameter identification (SPI).
Orthogonal CA sequences are generally preferred in various SPI applications
like random-access channel identification. However, the number of conventional
orthogonal CA sequences (e.g., Zadoff-Chu sequences) that can be adopted in
cellular communication without causing sequence identification ambiguity is
insufficient. Such insufficiency causes heavy performance degradation for SPI
requiring a large number of identification sequences. Moreover,
rectangularly-pulsed OFDM preamble/pilot waveforms carrying conventional CA
sequences suffer from large power spectral sidelobes and thus exhibit low
spectral compactness. This paper is thus motivated to develop several order-I
CA sequence families which contain more orthogonal CA sequences while endowing
the corresponding OFDM preamble/pilot waveforms with fast-decaying spectral
sidelobes. Since more orthogonal sequences are provided, the developed order-I
CA sequence families can enhance the performance characteristics in SPI
requiring a large number of identification sequences over multipath channels
exhibiting short-delay channel profiles, while composing spectrally compact
OFDM preamble/pilot waveforms. | Shih-Hao Lu, Char-Dir Chung, Wei-Chang Chen, Ping-Feng Tsou | 2023-08-22T08:25:28Z | http://arxiv.org/abs/2308.11268v1 | Orthogonal Constant-Amplitude Sequence Families for System Parameter Identification in Spectrally Compact OFDM
###### Abstract
In rectangularly-pulsed orthogonal frequency division multiplexing (OFDM) systems, constant-amplitude (CA) sequences are desirable to construct preamble/pilot waveforms to facilitate system parameter identification (SPI). Orthogonal CA sequences are generally preferred in various SPI applications like random-access channel identification. However, the number of conventional orthogonal CA sequences (e.g., Zadoff-Chu sequences) that can be adopted in cellular communication without causing sequence identification ambiguity is insufficient. Such insufficiency causes heavy performance degradation for SPI requiring a large number of identification sequences. Moreover, rectangularly-pulsed OFDM preamble/pilot waveforms carrying conventional CA sequences suffer from large power spectral sidelobes and thus exhibit low spectral compactness. This paper is thus motivated to develop several order-\(I\) CA sequence families which contain more orthogonal CA sequences while endowing the corresponding OFDM preamble/pilot waveforms with fast-decaying spectral sidelobes. Since more orthogonal sequences are provided, the developed order-\(I\) CA sequence families can enhance the performance characteristics in SPI requiring a large number of identification sequences over multipath channels exhibiting short-delay channel profiles, while composing spectrally compact OFDM preamble/pilot waveforms.
Orthogonal frequency division multiplexing, orthogonal constant-amplitude sequences, pilot, preamble, system parameter identification, spectral compactness.
## I Introduction
Rectangularly-pulsed orthogonal frequency division multiplexing (OFDM) waveforms are commonly adopted in modern wireless communication systems [1]-[3] due to their feasibility of efficient implementation by fast Fourier transform, easy incorporation of cyclic prefix (CP) to facilitate initial synchronization and channel estimation, and robustness against frequency-selective channel dispersion. In rectangularly-pulsed OFDM systems, constant amplitude (CA) sequences are often used as the training sequence in frequency domain to modulate uniformly spaced subcarriers and thereby enable robust fine initial time/frequency synchronization [4]-[9] and accurate channel estimation [9]-[14] at the receiver combating frequency-selective channel dispersion. When exact or near orthogonality is sustained among sequences, multiple CA training sequences are also adopted to facilitate the identification of different system parameters for establishing initial connection, including the identification of cell/sector/antenna, random access (RA) channel, duplex mode, guard ratio, etc. [1]-[3], [15]-[17]. Two typical applications based on system parameter identification (SPI) are RA channel identification [17]-[21] and multiple-input multiple-output (MIMO) channel sounding [14], [22]-[24]. Specifically, the received OFDM waveforms carrying different CA sequences in frequency domain are identified by cross-correlating [17]-[21] and despreading [14], [22]-[24] the received frequency-domain samples with all possible identification sequences, thereby enabling RA channel identification [17]-[21] and simultaneous channel estimation for multiple MIMO channels [14], [22]-[24], respectively. In such applications, multiple orthogonal CA sequences are generally preferred since better sequence identification can be achieved to ensure less false identification in RA channel identification and mitigate the effect of pilot contamination in simultaneous MIMO channel estimation.
In practice, Zadoff-Chu (ZC) sequences [25]-[26] are commonly used as such training/identification/sounding sequences due to their features of CA and zero periodic autocorrelation (ZAC) in both time and frequency domains [8], [27]. Particularly, cyclically-shiftable ZC sequences are popular in SPI applications due to the ZAC-enabled feasibility of generating all orthogonal ZC sequences through cyclically shifting the inverse discrete-Fourier-transform (DFT) of a single-root ZC sequence with a cyclic shifting distance (CSD) \(\varpi_{\text{ZC}}\). However, adjacent cyclically-shiftable ZC sequences with a small \(\varpi_{\text{ZC}}\) cannot be unambiguously identified by the receiver in the presence of multiple received path signals and the timing uncertainty under which the start time of the received useful leading path signal is practically synchronized only within the front portion of a CP subinterval [18], [28]. Due to such sequence identification ambiguity [28], not every cyclically-shiftable single-root ZC sequence can be adopted for SPI in the uplink cellular environment since a minimum CSD \(\varpi_{\text{min}}\) is required to differentiate distinct received cyclically-shiftable sequences sent from uplink transmitters in different locations [1]-[3], [17]-[21], [28]. As the cell radius is increased, a larger \(\varpi_{\text{min}}\) is required to avoid such sequence identification ambiguity [28]. The latter issue results in the shortage of adoptable orthogonal ZC
sequences in many standard preamble/pilot signaling formats for SPI [1]-[3]. For example, a total of \(64\) ZC sequences are required for RA channel identification in uplink 5G-NR [2, Section 6.3.3.1], [17]-[21]. Among the various adopted pairs of sequence length and minimum CSD, the numbers of adoptable orthogonal ZC sequences are upper bounded by the ratio of sequence length to minimum CSD and turn out to be much smaller than \(64\). Since fewer orthogonal ZC sequences are available, RA channel identification suffers from larger false-identification error (FIE) in multipath environments exhibiting longer-delay channel profiles, thus entailing worse false identification [17]-[18]. As another example in 5G-NR [2, Section 6.4.1.4.1], a MIMO system is designed to receive uplink pilot waveforms from at most \(12\) transmit antennas concurrently, and thus requires up to \(12\) cyclically-shiftable (single-root) ZC sequences with \(\varpi_{\text{ZC}}\geq\varpi_{\text{min}}\) to identify and separate different uplink channels in order to achieve high estimation accuracy in simultaneous channel estimation (SCE) [14], [22], [24]. Under this setup, \(\varpi_{\text{min}}\) is specified by \(N/12\) for the adopted cyclically-shiftable ZC sequences of different sequence lengths \(N\) [2, Section 6.4.1.4.3]. Since at most \(12\) cyclically-shiftable ZC sequences are available for all adopted pairs of sequence length and minimum CSD [2, Section 6.4.1.4], cyclically-shiftable ZC sequences generated from different (relatively prime) roots are adopted in neighboring sectors or cells in practical cellular environments. Unfortunately, ZC sequences generated from different root indices are nonorthogonal and entail heavy inter-pilot interference to SCE in cellular environments [22]. This causes the pilot contamination problem [14], [22]-[24]. To alleviate the effect of pilot contamination in the multiple cells/sectors environment, Yu-Lee (YL) sequences are constructed in [14] by appropriately phase-rotating cyclically-shiftable ZC sequences generated from a single root index, and shown to outperform multiple-root ZC sequences in SCE. However, the SCE performance can be further enhanced since YL sequences are not all orthogonal.
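For concreteness, the following sketch (our own illustration, not taken from the paper) generates a single-root ZC sequence of odd length and checks numerically that the frequency-domain sequences obtained by cyclically shifting its inverse DFT in the time domain are mutually orthogonal; the length, root, and CSD values are hypothetical.

```python
import numpy as np

def zadoff_chu(N, root):
    """Single-root Zadoff-Chu sequence of odd length N (constant amplitude)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)

def cyclic_shift_sequence(x_freq, shift):
    """Frequency-domain sequence whose inverse DFT is the cyclically shifted
    inverse DFT of x_freq (a linear phase ramp in the frequency domain)."""
    return np.fft.fft(np.roll(np.fft.ifft(x_freq), shift))

N, root, csd = 139, 1, 13                 # hypothetical length, root and CSD
base = zadoff_chu(N, root)
family = [cyclic_shift_sequence(base, k * csd) for k in range(N // csd)]
gram = np.array([[abs(np.vdot(a, b)) for b in family] for a in family]) / N
print(np.allclose(gram, np.eye(len(family)), atol=1e-10))   # True: orthogonal
```

The orthogonality here follows from the constant amplitude of the base sequence: shifting its inverse DFT by distinct amounts multiplies the frequency-domain samples by distinct complex exponentials, whose cross-correlation sums to zero.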
Although efficient to implement, rectangularly-pulsed OFDM waveforms exhibit large power spectral sidelobes due to discontinuity at pulse edges and thus cause strong interference to adjacent channels [9], [29]-[31]. Specifically, rectangularly-pulsed OFDM waveforms carrying ZC sequences have been shown to render widely spread waveform spectrum with baseband spectral sidelobes decaying asymptotically as \(f^{-2}\)[8]-[11]. Although highly compact training waveform spectrum can be composed by suppressing spectral sidelobes through delicate signal processing techniques [29]-[34], the feature of frequency-domain CA is altered in the transmitted waveform after sidelobe suppression, thus compromising the performance characteristics of initial synchronization, channel estimation, and SPI at the receiver. To resolve the problem, several order-\(I\) CA sequences have been recently developed in [8]-[11] to render extremely small baseband power spectral sidelobes decaying asymptotically as \(f^{-2I-2}\) with sidelobe-decaying (SD) order \(I\geq 1\), and thus compose spectrally compact training waveforms for robust fine initial synchronization [8]-[9] and accurate channel estimation [9]-[11]. The larger the SD order \(I\) is, the higher spectral compactness the corresponding training waveform can achieve. Since frequency-domain CA is sustained, order-\(I\) CA sequences enable the same performance characteristics as ZC sequences in initial synchronization and channel estimation, while yielding much higher spectral compactness [8]-[11]. In [10]-[11], order-\(I\) CA sequences \(\mathcal{G}_{I}\) and \(\mathcal{I}_{I}\) were first developed for a large number of sequence lengths. For all composite and prime sequence lengths larger than \(11\), order-\(I\) CA sequences \(\widehat{\mathcal{G}}_{I}\) and \(\widehat{\mathcal{I}}_{I}\) were further developed in [9] and shown to provide the SD order not smaller than order-\(\widetilde{I}\) CA sequences \(\mathcal{G}_{\widetilde{I}}\) and \(\mathcal{I}_{\widetilde{I}}\). To meet the needs of various SPI applications, four families containing mutually orthogonal order-\(I\) CA sequences were also developed in [9] for respective sequence types \(\mathcal{G}_{I}\), \(\mathcal{I}_{I}\), \(\widehat{\mathcal{G}}_{I}\), and \(\widehat{\mathcal{I}}_{I}\), based on the method of phase model assigning (PMA), and denoted hereinafter by families \(\mathcal{G}_{I}^{\text{(pma)}}\), \(\mathcal{I}_{I}^{\text{(pma)}}\), \(\widehat{\mathcal{G}}_{I}^{\text{(pma)}}\), and \(\widehat{\mathcal{I}}_{I}^{\text{(pma)}}\) for convenience. Nevertheless, the numbers of permissible orthogonal sequences provided by these families are still insufficient for some SPI applications requiring a large number of orthogonal CA sequences (like RA channel identification and MIMO channel sounding) [1]-[2]. This paper is thus motivated to develop new families with an attempt to providing more orthogonal order-\(I\) CA sequences.
Based on the methods of degenerate PMA and augmented PMA, several modified PMA sequence families are constructed herein to provide more orthogonal order-\(\widetilde{I}\) CA sequences (\(\mathcal{G}_{\widetilde{I}}\), \(\mathcal{I}_{\widetilde{I}}\), \(\widehat{\mathcal{G}}_{\widetilde{I}}\), and \(\widehat{\mathcal{I}}_{\widetilde{I}}\)) than families \(\mathcal{G}_{I}^{\text{(pma)}}\), \(\mathcal{I}_{I}^{\text{(pma)}}\), \(\widehat{\mathcal{G}}_{I}^{\text{(pma)}}\), and \(\widehat{\mathcal{I}}_{I}^{\text{(pma)}}\) by possibly trading off the SD order \(\widetilde{I}\leq I\). All developed order-\(\widetilde{I}\) sequences can still provide much higher spectral compactness than ZC, YL, and pseudorandom-noise (PN) sequences for the composed OFDM preamble/pilot waveforms. Since sequences \(\mathcal{I}_{\widetilde{I}}\) and \(\widehat{\mathcal{I}}_{\widetilde{I}}\) can be constructed similarly to sequences \(\mathcal{G}_{\widetilde{I}}\) and \(\widehat{\mathcal{G}}_{\widetilde{I}}\), only the new families composed of order-\(\widetilde{I}\) CA sequences \(\mathcal{G}_{\widetilde{I}}\) and \(\widehat{\mathcal{G}}_{\widetilde{I}}\) are elaborated in the following. The contributions of the paper are summarized as follows.1
Footnote 1: _Notations:_ Boldface lower-case and upper-case letters denote column vectors and matrices, respectively. Superscripts \(t\), \(\ast\), and \(h\) denote transpose, complex conjugate, and conjugate transpose, respectively. \(\mathcal{Z}^{\ast}\), \(\mathcal{Z}_{K}\), and \(\mathcal{Z}_{K}^{+}\) are the set of nonnegative integers, \(\{0,1,...,K-1\}\), and \(\{1,2,...,K\}\), respectively. By default, \(\mathcal{Z}_{0}\) and \(\mathcal{Z}_{0}^{+}\) are empty sets. We also use \([x_{k};k\in\mathcal{Z}_{K}]\) to represent a \(K\times 1\) vector with \(x_{k}\) being the \(k\)-th entry, \(\min\{x,y\}\) the smaller of \(x\) and \(y\), \(((n))_{N}\) the modulo-\(N\) value of \(n\), \(\left\|\mathbf{x}\right\|\) the Frobenius norm of vector \(\mathbf{x}\), \(\lceil x\rceil\) the smallest integer that is not smaller than \(x\), and \(\lfloor x\rfloor\) the largest integer that is not larger than \(x\). We let \(\omega_{K}\triangleq\exp\{-j\frac{2\pi}{K}\}\) and denote \(\mathbf{W}_{K}\triangleq[K^{-1/2}\omega_{K}^{mk};m\in\mathcal{Z}_{K},k\in\mathcal{Z}_{K}]\) as a \(K\times K\) unitary DFT matrix with normalized columns and rows. \(\mathcal{E}\{\cdot\}\) denotes the expectation operator. \(j\triangleq\sqrt{-1}\) is the imaginary unit.
* Degenerate PMA sequence families \(\mathcal{G}_{\text{max},\widetilde{I}}^{\text{(dpma,\kappa)}}\) and \(\mathcal{\widetilde{G}}_{\text{max},\widetilde{I}}^{\text{(dpma,\kappa)}}\) with sequence length \(N\) are constructed respectively under a proper level-\((\Omega(N)-\kappa)\) factorization of \(N\) and under a near-proper level-\((\Omega(N)-\kappa)\) factorization of \(N\) for \(\kappa\in\mathcal{Z}_{\Omega(N)-1}^{+}\), where \(\Omega(N)\) is the prime omega value of \(N\) and denotes the multiplicity in the prime factorization of \(N\). Families \(\mathcal{G}_{\text{max},\widetilde{I}}^{\text{(dpma,\kappa)}}\)
and \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) can provide more orthogonal order-\(\widetilde{I}\) CA sequences than the PMA sequence family \(\mathcal{G}^{(\text{pmn})}_{I}\), with or without trading off SD order \(\widetilde{I}\leq I\). When \(\widetilde{\Omega}(N)>\Omega(N)\), degenerate PMA sequence families \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) are accordingly constructed under a combined proper level-\((\widetilde{\Omega}(N)-\kappa)\) factorization of \(N\) for \(\kappa\in\mathcal{Z}^{+}_{\widetilde{\Omega}(N)-1}\), where \(\widetilde{\Omega}(N)\) is the modified prime omega (MPO) value defined in [9, eqs. 14-15] and denotes the increased multiplicity provided by all prime factorizations of the properly decomposed values from \(N\). Families \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) can provide more orthogonal order-\(\widetilde{I}\) CA sequences than the PMA sequence family \(\widetilde{\mathcal{G}}^{(\text{pmn})}_{I}\), with or without trading off SD order \(\widetilde{I}\leq I\).
* When \(N\) meets \(\widetilde{\Omega}(N)>\Omega(N)\), the augmented PMA sequence family \(\widehat{\mathcal{G}}^{(\text{apma})}_{I}\) is constructed by virtue of phase-rotating every existing sequence in family \(\widehat{\mathcal{G}}^{(\text{pmn})}_{I}\) to generate more mutually orthogonal sequence members, and thus provides double the number of orthogonal order-\(I\) CA sequences in family \(\widetilde{\mathcal{G}}^{(\text{pmn})}_{I}\) while maintaining the same SD order. Based on the same phase-rotating method, augmented degenerate PMA sequence family \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) is constructed from family \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) for a given \(\kappa\in\widetilde{\mathcal{Z}}^{+}_{\widetilde{\Omega}(N)-1}\) and provides double the family size of \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) without trading off the SD order.
* In comparison with ZC, YL, and PN sequence families, the modified PMA sequence families \(\mathcal{G}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\), \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\), \(\widehat{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\), \(\widehat{\mathcal{G}}^{(\text{apma})}_{I}\), and \(\widehat{\mathcal{G}}^{(\text{adpma},\kappa)}_{\max,\widetilde{I}}\) are demonstrated to enhance the performance characteristics in uplink RA channel identification over indoor and urban Rayleigh multipath environments exhibiting short-delay channel profiles, thanks to the provision of more orthogonal CA sequences and thus the mitigation of false identification. Meanwhile, the preamble waveforms carrying order-\(\widetilde{I}\) CA sequences from the modified PMA sequence families exhibit much higher spectral compactness than those carrying ZC, YL, and PN sequences.
The paper is organized as follows. Section II provides a review on order-\(I\) CA sequences \(\mathcal{G}_{I}\), \(\widetilde{\mathcal{G}}_{I}\), \(\mathcal{I}_{I}\), and \(\widetilde{\mathcal{I}}_{I}\) [9]-[11]. Section III develops family \(\mathcal{G}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) under a proper level-\((\Omega(N)-\kappa)\) factorization and family \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) under a near-proper level-\((\Omega(N)-\kappa)\) factorization, both for \(\kappa\in\mathcal{Z}^{+}_{\Omega(N)-1}\). When \(\widetilde{\Omega}(N)>\Omega(N)\), family \(\widehat{\mathcal{G}}^{(\text{apma})}_{I}\) is constructed in Section IV by the phase-rotating method. Families \(\widehat{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) and \(\widehat{\mathcal{G}}^{(\text{adpma},\kappa)}_{\max,\widetilde{I}}\) are also constructed under a combined proper level-\((\widetilde{\Omega}(N)-\kappa)\) factorization for \(\kappa\in\mathcal{Z}^{+}_{\widetilde{\Omega}(N)-1}\). In Section V, the OFDM systems employing various CA sequence families are compared for RA channel identification and spectral compactness. Section VI concludes the paper.
## II Order-\(I\) Constant-Amplitude Sequences
Consider the rectangularly-pulsed OFDM waveform carrying a sequence of \(N\) complex symbols. In the nominal time interval of length \(T\), these symbols are modulated into \(N\) uniformly-spaced subcarriers interleaved among \(\gamma N\) subcarriers with a positive-integer-valued interleaving factor \(\gamma\). The time interval is partitioned into a guard CP subinterval of length \(T_{\text{g}}\) followed by a useful signaling subinterval of length \(T_{\text{d}}=T-T_{\text{g}}\), where \(T_{\text{g}}=\alpha T_{\text{d}}\) and \(\alpha\) is the guard ratio with \(0<\alpha<1\). Denote \(\mathbf{q}\triangleq[q[n];n\in\mathcal{Z}_{N}]\) as the sequence in the frequency domain and \(\widetilde{\mathbf{q}}\triangleq[\widetilde{q}[n];n\in\mathcal{Z}_{N}]\) as its inverse DFT with \(\left\|\mathbf{q}\right\|^{2}=\left\|\widetilde{\mathbf{q}}\right\|^{2}=1\). Throughout, \(\mathbf{q}\) is restricted to have CA symbols with \(|q[n]|^{2}=1/N\), and thus its inverse DFT \(\widetilde{\mathbf{q}}\) possesses the ZAC property, i.e., \(\sum_{m\in\mathcal{Z}_{N}}\widetilde{q}[((m-n))_{N}](\widetilde{q}[m])^{*}=0\) for all integers \(((n))_{N}\neq 0\) [8, 27].
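The CA/ZAC duality can be verified directly: the periodic autocorrelation of \(\widetilde{\mathbf{q}}\) is (up to scaling) the inverse DFT of the power spectrum \(\{|q[n]|^{2}\}\), so a constant \(|q[n]|^{2}=1/N\) forces that autocorrelation to vanish at every nonzero lag. The following minimal Python sketch (our own illustration with an arbitrary random-phase CA sequence) checks this numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 24

# Arbitrary constant-amplitude (CA) frequency-domain sequence: |q[n]|^2 = 1/N.
q = np.exp(1j * 2 * np.pi * rng.random(N)) / np.sqrt(N)

# Inverse DFT with unitary scaling, so that ||q|| = ||q_t|| = 1.
q_t = np.fft.ifft(q) * np.sqrt(N)

# Periodic (cyclic) autocorrelation of the time-domain sequence.
acorr = np.array([np.vdot(np.roll(q_t, m), q_t) for m in range(N)])

print(abs(acorr[0]))               # ~1: total energy at zero lag
print(np.abs(acorr[1:]).max())     # ~0: ZAC property at every nonzero lag
```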
Rectangularly-pulsed OFDM preamble/pilot waveforms are discontinuous if identification symbols are not properly restricted and thus render large baseband power spectral sidelobes decaying asymptotically as \(f^{-2}\). In practical OFDM systems, rectangularly pulsed preamble/pilot waveforms carrying PN and ZC sequences render widely spread waveform spectrum with baseband spectral sidelobes decaying asymptotically as \(f^{-2}\)[8]-[11]. By properly restricting identification symbols, various order-\(I\) CA sequences have been recently developed in [8]-[11] to render extremely small baseband power spectral sidelobes decaying asymptotically as \(f^{-2I-2}\) and thus the corresponding baseband power spectrum exhibits \(I\)-decaying sidelobes. Due to fast sidelobe decaying, these order-\(I\) CA sequences enhance the spectral compactness of the corresponding OFDM preamble/pilot waveforms, while achieving accurate channel estimation and robust fine initial time and frequency synchronization owing to dual sequence properties of frequency-domain CA and time-domain ZAC [8]-[9]. Particularly in [9], four types of order-\(I\) CA sequences \(\mathcal{G}_{I}\), \(\mathcal{I}_{I}\), \(\widetilde{\mathcal{G}}_{I}\), and \(\widetilde{\mathcal{I}}_{I}\) with sequence length \(N\) have been developed in explicit expressions for all composite sequence lengths and all prime sequence lengths larger than \(11\) under all parametric conditions on \(\alpha\gamma\). In what follows, sequences \(\mathcal{G}_{I}\), \(\mathcal{I}_{I}\), \(\widetilde{\mathcal{G}}_{I}\), and \(\widetilde{\mathcal{I}}_{I}\) are briefly reviewed.
For convenience, an order-\(I\) CA sequence \(\mathbf{q}=[N^{\frac{-1}{2}}\left(-1\right)^{n\gamma}\chi\left[n\right];n\in \mathcal{Z}_{N}]\) is described by a CA sequence \(\boldsymbol{\chi}=[\chi[n];n\in\mathcal{Z}_{N}]\) with \(\left|\chi\left[n\right]\right|=1\) for all \(n\in\mathcal{Z}_{N}\), and presented in two separate conditions, namely _Condition A_ that \(\alpha\gamma\) is an integer and _Condition B_ that \(\alpha\gamma\) is not an integer [9]. Under _Condition A_, if \(\mathbf{q}\) satisfies
_Constraint A:_\(\boldsymbol{\mu}_{\beta}^{t}\boldsymbol{\chi}=0\) for all \(\beta\in\mathcal{Z}_{I}\) but \(\boldsymbol{\mu}_{I}^{t}\boldsymbol{\chi}\neq 0\)
for a positive integer \(I\in\mathcal{Z}_{N-1}^{+}\) where \(\boldsymbol{\mu}_{\beta}\triangleq[n^{\beta};n\in\mathcal{Z}_{N}]\), the corresponding baseband power spectrum exhibits \(I\)-decaying sidelobes. Under _Condition B_, if \(\mathbf{q}\) satisfies
_Constraint B:_\(\boldsymbol{\mu}_{\beta}^{t}\boldsymbol{\chi}=0\) and \(\widetilde{\boldsymbol{\mu}}_{\beta}^{t}\boldsymbol{\chi}=0\) for all \(\beta\in\mathcal{Z}_{I}\)
but \(\boldsymbol{\mu}_{I}^{t}\boldsymbol{\chi}\neq 0\) or \(\widetilde{\boldsymbol{\mu}}_{I}^{t}\boldsymbol{\chi}\neq 0\)
for a positive integer \(I\in\mathcal{Z}_{\left\lfloor(N-1)/2\right\rfloor}^{+}\) where \(\widetilde{\boldsymbol{\mu}}_{\beta}\triangleq[e^{-j2\pi n\alpha\gamma}n^{\beta};n\in\mathcal{Z}_{N}]\), the corresponding baseband power spectrum exhibits \(I\)-decaying sidelobes. Throughout, we consider the prime factorization \(N=\prod_{m=0}^{\Omega(N)-1}P_{m}\) where the prime integers \(P_{m}\) may not be all distinct. Due to the
constraints, the largest possible family size \(\Psi_{\text{max}}(N)\) is limited by \(\Psi_{\text{max}}(N)=N-I\) under _Condition A_ and \(\Psi_{\text{max}}(N)=N-2I\) under _Condition B_, for any sequence family containing mutually orthogonal sequences of length \(N\).
_A) Sequence \(\mathcal{G}_{I}\):_ Arrange prime factors \(P_{m}\) in descending order \(P_{0}\geq P_{1}\geq...\geq P_{\Omega(N)-1}\). Define \(\phi_{m}\triangleq\prod_{k=0}^{m-1}P_{k}\) for \(m\in\mathcal{Z}_{\Omega(N)-1}^{+}\) and \(\phi_{0}=1\). An order-\(I\) CA sequence \(\mathcal{G}_{I}\) is described as
\[\chi[\sum\nolimits_{m\in\mathcal{Z}_{\Omega(N)}}l_{m}\phi_{m}]=\exp\{j\sum \nolimits_{m\in\mathcal{Z}_{\Omega(N)}}\theta_{m}[l_{m}]\} \tag{1}\]
for all \(l_{0}\in\mathcal{Z}_{P_{0}}\), \(l_{1}\in\mathcal{Z}_{P_{1}}\),..., \(l_{\Omega(N)-1}\in\mathcal{Z}_{P_{\Omega(N)-1}}\) under _Condition A_, and
\[\chi[\sum\nolimits_{m\in\mathcal{Z}_{\Omega(N)}}l_{m}\phi_{m}]=\exp\{j\sum\nolimits_{m\in\mathcal{Z}_{\Omega(N)}}\theta_{m}[l_{m}] \tag{2}\] \[+j2\pi\alpha\gamma\sum\nolimits_{n\in\mathcal{Z}_{\lfloor\Omega(N)/2\rfloor}}l_{2n+1}\phi_{2n+1}\}\]
for all \(l_{0}\in\mathcal{Z}_{P_{0}}\), \(l_{1}\in\mathcal{Z}_{P_{1}}\),..., \(l_{\Omega(N)-1}\in\mathcal{Z}_{P_{\Omega(N)-1}}\) under _Condition B_. Here, the phases \(\theta_{m}[l_{m}]\) are restricted by
\[\sum\nolimits_{l_{m}\in\mathcal{Z}_{P_{m}}}\exp\{j\theta_{m}[l_{m}]\}=0\text { for all }m\in\mathcal{Z}_{\Omega(N)}. \tag{3}\]
For a given \(N\), sequence \(\mathcal{G}_{I}\) yields the SD order \(I\geq\Omega(N)\) under _Condition A_ and \(I\geq[\Omega(N)/2]\) under _Condition B_.
Orthogonal sequence family \(\mathcal{G}_{I}^{(\text{pma})}\) has been obtained from the PMA method in [9, Subsection III.B]. For a given index vector \(\mathbf{\nu}=[v_{m};m\in\mathcal{Z}_{\Omega(N)}]\) with \(v_{m}\in\mathcal{Z}_{P_{m}-1}^{+}\) for all \(m\in\mathcal{Z}_{\Omega(N)}\), a sequence in family \(\mathcal{G}_{I}^{(\text{pma})}\) can be uniquely specified by \(\mathbf{\nu}\) and formed by assigning
\[\theta_{m}\left[l_{m}\right]=\frac{2\pi\nu_{m}l_{m}}{P_{m}}\text{ for all }l_{m}\in\mathcal{Z}_{P_{m}}\text{ and }m\in\mathcal{Z}_{\Omega(N)} \tag{4}\]
under either _Condition A_ or _Condition B_. By varying \(\mathbf{\nu}\) exclusively, family \(\mathcal{G}_{I}^{(\text{pma})}\) can be constructed accordingly and it contains \(\Psi(N)\triangleq\prod_{m=0}^{\Omega(N)-1}(P_{m}-1)\) orthogonal order-\(I\) CA sequences. Clearly, all order-\(I\) sequences in \(\mathcal{G}_{I}^{(\text{pma})}\) are mutually orthogonal, i.e., \(\mathbf{q}_{l}^{h}\mathbf{q}_{k}=\widetilde{\mathbf{q}}_{l}^{h}\widetilde{\mathbf{q}}_{k}=0\) for any two different sequences \(\mathbf{q}_{l}\) and \(\mathbf{q}_{k}\) in \(\mathcal{G}_{I}^{(\text{pma})}\).
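To make (1), (3), and (4) concrete, the following sketch (our own illustration under _Condition A_; the variable names are not from [9]-[11]) builds the two sequences of family \(\mathcal{G}_{I}^{(\text{pma})}\) for \(N=12\) and numerically checks the CA property, the Constraint-A moments up to \(\beta=\Omega(N)-1\), and mutual orthogonality. The common factor \(N^{-1/2}(-1)^{n\gamma}\) of \(\mathbf{q}\) is omitted since it does not affect these checks.

```python
import itertools
import numpy as np

def g_pma_family(primes):
    """Order-I CA sequences of family G_I^(pma) under Condition A, for N = prod(primes)
    with the prime factors given in descending order (cf. (1), (3), (4))."""
    N = int(np.prod(primes))
    phi = np.cumprod([1] + list(primes[:-1]))            # phi_0 = 1, phi_m = P_0 ... P_{m-1}
    family = []
    for v in itertools.product(*[range(1, P) for P in primes]):     # index vector nu
        chi = np.zeros(N, dtype=complex)
        for ls in itertools.product(*[range(P) for P in primes]):   # digits (l_0, ..., l_{Omega-1})
            pos = int(np.dot(ls, phi))
            theta = sum(2 * np.pi * v[m] * ls[m] / primes[m] for m in range(len(primes)))
            chi[pos] = np.exp(1j * theta)
        family.append(chi)
    return family

primes = (3, 2, 2)                       # N = 12, Omega(N) = 3, descending order
N = int(np.prod(primes))
fam = g_pma_family(primes)
print(len(fam))                          # Psi(12) = (3-1)(2-1)(2-1) = 2

n = np.arange(N)
for chi in fam:
    assert np.allclose(np.abs(chi), 1.0)                 # CA property
    for beta in range(len(primes)):                      # Constraint A: moments vanish
        assert abs(np.sum(n**beta * chi)) < 1e-8         # for beta = 0, ..., Omega(N)-1
for a, b in itertools.combinations(fam, 2):
    assert abs(np.vdot(a, b)) < 1e-8                     # mutual orthogonality
print("CA, Constraint A (I >= Omega(N)) and orthogonality verified")
```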
Consider a leader sequence \(\mathbf{q}_{\text{lead}}\) in \(\mathcal{G}_{I}^{(\text{pma})}\), specified by \(\mathbf{\nu}=[v_{0},v_{1},...,v_{\Omega(N)-1}]^{t}\). Denote \(\widetilde{\mathbf{q}}_{\text{lead}}^{(k)}=\left[\widetilde{q}_{\text{lead}}\left[((i+k))_{N}\right];i\in\mathcal{Z}_{N}\right]\) as the \(k\)-cyclically-shifted version of \(\widetilde{\mathbf{q}}_{\text{lead}}\) (i.e., the inverse DFT of \(\mathbf{q}_{\text{lead}}\)) and \(\mathbf{q}_{\text{lead}}^{(k)}=\left[q_{\text{lead}}[n]\exp\{j2\pi nk/N\};n\in\mathcal{Z}_{N}\right]\) as its DFT. According to [9], the set of admissible cyclic shifts for which \(\mathbf{q}_{\text{lead}}^{(k)}\) is still an order-\(I\) CA sequence in \(\mathcal{G}_{I}^{(\text{pma})}\) is specified by \(\mathcal{U}(\nu_{0})\triangleq\{lN/P_{\max}|l\in\mathcal{Z}_{P_{\max}}\text{ but }l\neq P_{\max}-v_{0}\}\) where \(P_{\max}\triangleq\max_{m\in\mathcal{Z}_{\Omega(N)}}P_{m}\) is the largest prime factor and \(v_{0}\) is the leading entry in \(\mathbf{\nu}\). From \(\mathbf{q}_{\text{lead}}\), we can thus specify the cyclic-shift (CS) CA sequence subfamily \(\mathcal{G}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\) which contains all cyclically-shiftable order-\(I\) CA sequences obtained by cyclically shifting \(\widetilde{\mathbf{q}}_{\text{lead}}\) with shifts in \(\mathcal{U}(\nu_{0})\), as
\[\mathcal{G}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\triangleq\{\mathbf{q}_{ \text{lead}}^{(k)}|k\in\mathcal{U}(\nu_{0})\}\text{ if }\mathbf{q}_{\text{lead}}\in\mathcal{G}_{I}^{(\text{pmn})} \tag{5}\]
under either _Condition A_ or _Condition B_. The factor \(N/P_{\max}\) defining \(\mathcal{U}(\nu_{0})\) is the family CSD for generating cyclically-shiftable CA sequence subfamily \(\mathcal{G}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\). Notably, \(\mathcal{G}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\) contains \(P_{\max}-1\) different sequences in \(\mathcal{G}_{I}^{(\text{pmn})}\) which are specified by identical indices \(v_{1},v_{2},...,v_{\Omega(N)-1}\). Therefore, by varying \(v_{1},v_{2},...,v_{\Omega(N)-1}\), we can obtain \(\Psi(N)/(P_{\max}-1)\) mutually exclusive subfamilies \(\mathcal{G}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\) constructed from all permissible subfamily leaders \(\mathbf{q}_{\text{lead}}\) specified by different index subvectors \([v_{1},v_{2},...,v_{\Omega(N)-1}]^{t}\). In \(\mathcal{G}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\), all orthogonal order-\(I\) CA sequences can be easily obtained by cyclically shifting the inverse DFT of a subfamily leader \(\mathbf{q}_{\text{lead}}\).
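The cyclic-shift structure can be checked in the same spirit. In the short sketch below (again our own illustration, for \(N=6\) under _Condition A_), multiplying the leader by the frequency-domain phase ramp \(e^{j2\pi nk/N}\) with \(k=lN/P_{\max}\) amounts to replacing \(v_{0}\) by \(v_{0}+l\) modulo \(P_{\max}\); in this small example the Constraint-A moments keep vanishing for every admissible shift in \(\mathcal{U}(\nu_{0})\) and fail at the excluded shift \(l=P_{\max}-v_{0}\).

```python
import numpy as np

# Leader of the PMA family for N = 6 = P0 * P1 with P0 = 3 >= P1 = 2 (Condition A),
# specified by the index vector nu = (v0, v1) = (1, 1).
P0, P1, N = 3, 2, 6
v0, v1 = 1, 1
n = np.arange(N)
l0, l1 = n % P0, n // P0                 # n = l0 + l1 * phi_1 with phi_1 = P0
chi = np.exp(1j * 2 * np.pi * (v0 * l0 / P0 + v1 * l1 / P1))

# A frequency-domain phase ramp with k = l*N/P0 is a cyclic shift of the inverse DFT by k.
for l in range(P0):
    k = l * N // P0
    shifted = chi * np.exp(1j * 2 * np.pi * n * k / N)
    ok = all(abs(np.sum(n**b * shifted)) < 1e-9 for b in range(2))   # moments beta = 0, 1
    print(l, k, ok)                      # True for l in {0, 1}; False for l = P0 - v0 = 2
```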
_B) Sequence \(\mathcal{I}_{I}\):_ Arrange prime factors \(P_{m}\) in ascending order \(P_{0}\leq P_{1}\leq...\leq P_{\Omega(N)-1}\). Define \(\psi_{m}=N/\phi_{m+1}\) for \(m\in\mathcal{Z}_{\Omega(N)-1}\) and \(\psi_{\Omega(N)-1}=1\). An order-\(I\) CA sequence \(\mathcal{I}_{I}\) is defined similarly to sequence \(\mathcal{G}_{I}\) as in (1)-(4) with \(\phi_{m}\rightarrow\psi_{m}\) for \(m\in\mathcal{Z}_{\Omega(N)}\). With sequence length \(N\), sequence \(\mathcal{I}_{I}\) yields the SD order \(I\geq\Omega(N)\) under _Condition A_ and \(I\geq[\Omega(N)/2]\) under _Condition B_. By varying the index vector \(\mathbf{\nu}\) exclusively, the orthogonal sequence family \(\mathcal{I}_{I}^{(\text{pmn})}\) can be likewise constructed and it contains \(\Psi(N)\) mutually orthogonal order-\(I\) CA sequences. For a given \(\mathbf{q}_{\text{lead}}\) in \(\mathcal{I}_{I}^{(\text{pmn})}\) specified by \(\mathbf{\nu}=[v_{0},v_{1},...,v_{\Omega(N)-1}]\), the cyclic-shift sequence subfamily \(\mathcal{I}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\) can be obtained as
\[\mathcal{I}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\triangleq\{\mathbf{q}_{ \text{lead}}^{(k)}|k\in\mathcal{U}(\nu_{\Omega(N)-1})\}\text{ if }\mathbf{q}_{\text{lead}}\in\mathcal{I}_{I}^{(\text{pmn})} \tag{6}\]
under either _Condition A_ or _Condition B_. Thus, \(\mathcal{I}_{I}^{(\text{cs})}(\mathbf{q}_{\text{lead}})\) contains \(P_{\max}-1\) different sequences in \(\mathcal{I}_{I}^{(\text{pmn})}\), which are specified by identical indices \(v_{0},v_{1},...,v_{\Omega(
MPO value \(\widetilde{\Omega}(N)\), defined in [9, eqs. 14-15] as
\[\widetilde{\Omega}(N)=\max_{\substack{L\in\mathcal{Z}_{\lfloor N/2\rfloor}^{+},\ \widetilde{N}^{(0)}\geq\widetilde{N}^{(1)}\geq\cdots\geq\widetilde{N}^{(L-1)}\geq 2\\ \widetilde{N}^{(0)}+\widetilde{N}^{(1)}+\ldots+\widetilde{N}^{(L-1)}=N}}\min_{\rho\in\mathcal{Z}_{L}}\Omega(\widetilde{N}^{(\rho)}). \tag{7}\]
Notably, the proper decomposition is not necessarily unique for arbitrary lengths \(N\) and can assure \(\widetilde{\Omega}(N)\geq\Omega(N)\). Particularly, \(\widetilde{\Omega}(N)>\Omega(N)\) is guaranteed if and only if (iff) \(N\) is not any one of the following forms
\[N = 2^{a}\times 3^{b}\times 5^{c} \tag{8}\] \[N = 2^{a}\times 3^{b}\times 7^{d} \tag{9}\] \[N = 2^{a}\times 3^{b}\times 11^{e} \tag{10}\]
where the natural numbers \(a\), \(b\), \(c\), \(d\) and \(e\) are restricted to \(a\), \(b\in\mathcal{Z}^{*}\), \(c\in\mathcal{Z}_{4}\), \(d\in\mathcal{Z}_{3}\), and \(e\in\mathcal{Z}_{2}\) [9, _Property 5_]. In this case, order-\(I\) CA sequences \(\widetilde{\mathcal{G}}_{I}\) and \(\widetilde{\mathcal{I}}_{I}\) can yield higher SD order than order-\(\widetilde{I}\) CA sequences \(\mathcal{G}_{\widetilde{I}}\) and \(\mathcal{I}_{\widetilde{I}}\). Conversely, when \(N\) is one of the above three forms, sequences \(\mathcal{G}_{\widetilde{I}}\) and \(\mathcal{I}_{\widetilde{I}}\) can provide comparable SD order to sequences \(\widetilde{\mathcal{G}}_{I}\) and \(\widetilde{\mathcal{I}}_{I}\) due to \(\widetilde{\Omega}(N)=\Omega(N)\). For a given \(N\), a proper decomposition and the associated \(\widetilde{\Omega}(N)\) can be efficiently sought from _Procedure 1_ and _Property 4_ in [9], where some examples for medium and large \(N\) values are also listed in [9, Tables I and II].
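Equation (7) can also be cross-checked by brute force: \(\widetilde{\Omega}(N)\) is the largest \(t\) for which \(N\) can be written as a sum of parts, each at least \(2\) and each having prime-omega value at least \(t\) (the single-part decomposition already gives \(t=\Omega(N)\)). The sketch below is our own reachability-based check, not _Procedure 1_ of [9]; it reproduces, for example, \(\widetilde{\Omega}(11)=\Omega(11)=1\) (form (10)) and \(\widetilde{\Omega}(13)=2>\Omega(13)=1\).

```python
def omega(n):
    """Prime omega value: number of prime factors of n counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count + (1 if n > 1 else 0)

def mpo(N):
    """Modified prime omega value of eq. (7): the largest t such that N is a sum of
    parts >= 2 whose prime omega values are all >= t."""
    best = omega(N)                      # L = 1 (no split) always achieves t = Omega(N)
    t = best + 1
    while True:
        parts = [m for m in range(2, N + 1) if omega(m) >= t]
        reachable = [True] + [False] * N
        for s in range(1, N + 1):        # unbounded-knapsack reachability over the parts
            reachable[s] = any(reachable[s - p] for p in parts if p <= s)
        if not reachable[N]:
            return best
        best, t = t, t + 1

for N in (11, 12, 13, 60, 139):
    print(N, omega(N), mpo(N))
```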
Orthogonal sequence family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) has also been obtained from the PMA method [9]. Consider the prime factorizations \(\widetilde{N}^{(\rho)}=\prod_{m=0}^{\Omega(N^{(\rho)})-1}P_{m}^{(\rho)}\) for all \(\rho\in\mathcal{Z}_{L}\). For the given index vectors \(\boldsymbol{\nu}^{(\rho)}=[v_{m}^{(\rho)};m\in\mathcal{Z}_{\Omega(\widetilde{ N}^{(\rho)})}]\) with \(v_{m}^{(\rho)}\in\mathcal{Z}_{P_{m}^{(\rho)}-1}^{+}\) for all \(\rho\in\mathcal{Z}_{L}\), all subsequences of the corresponding sequence \(\widehat{\mathcal{G}}_{I}\) in family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) can be thus specified by these \(\boldsymbol{\nu}^{(\rho)}\) and formed from the phase assignment in (4). By varying \(\boldsymbol{\nu}^{(\rho)}\) exclusively and concurrently for all \(\rho\in\mathcal{Z}_{L}\), family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) is constructed accordingly and it contains \(\widehat{\Psi}(N)\triangleq\min_{\rho\in\mathcal{Z}_{L}}\Psi(\widetilde{N}^{ (\rho)})\) orthogonal order-\(I\) CA sequences, where \(\Psi(\widetilde{N}^{(\rho)})=\prod_{m=0}^{\Omega(\widetilde{N}^{(\rho)})-1}(P _{m}^{(\rho)}-1)\) for all \(\rho\in\mathcal{Z}_{L}\). Notably, the orthogonal sequence family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) is not necessarily unique for a given length \(N\) since the proper decomposition for \(N\) is not necessarily unique. When \(N\) is not any one of the forms in (8)-(10), any orthogonal sequence in family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) can not be obtained from cyclically shifting the inverse DFT of another sequence in the family, due to the proper decomposition of \(N\)[9]. Without limitation by minimum CSD, the latter feature permits the use of all orthogonal sequences in such family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) and its modified families (to be developed below) for SPI applications in the uplink cellular environment.
The orthogonal sequence family \(\widehat{\mathcal{I}}_{I}^{(\text{pmn})}\) is likewise constructed. Due to the similarity between \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) and \(\widehat{\mathcal{I}}_{I}^{(\text{pmn})}\), only modified PMA families from \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) are elaborated herein.
When the sequence length \(N\) is not any one of the forms in (8)-(10) and has \(\widetilde{\Omega}(N)>\Omega(N)\), we can construct family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) with larger SD order than family \(\mathcal{G}_{\widetilde{I}}^{(\text{pmn})}\). Different from family \(\mathcal{G}_{\widetilde{I}}^{(\text{pmn})}\), all sequences in family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) can not be obtained through cyclically shifting the inverse DFT of another sequence. Conversely, when \(N\) follows any one of the forms in (8)-(10) and exhibits \(\widetilde{\Omega}(N)=\Omega(N)\), family \(\mathcal{G}_{\widetilde{I}}^{(\text{pmn})}\) can yield comparable SD order to family \(\widehat{\mathcal{G}}_{I}^{(\text{pmn})}\) and is composed of \(\Psi(N)/(P_{\max}-1)\) mutually exclusive subfamilies \(\mathcal{G}_{\widetilde{I}}^{(\text{cs})}(\mathbf{q_{lead}})\) for all permissible subfamily leaders \(\mathbf{q_{lead}}\), as shown in Subsection II.\(A\). Each subfamily \(\mathcal{G}_{\widetilde{I}}^{(\text{cs})}(\mathbf{q_{lead}})\) contains \(P_{\max}-1\) sequences generated by cyclically shifting the inverse DFT of a subfamily leader \(\mathbf{q_{lead}}\) in family \(\mathcal{G}_{\widetilde{I}}^{(\text{pmn})}\) with family CSD \(N/P_{\max}\).
The following sections are devoted to the development of two types of new orthogonal sequence families, namely the degenerate PMA sequence families \(\mathcal{G}_{\widetilde{I}}^{(\text{dpma},\kappa)}\), \(\widehat{\mathcal{G}}_{\widetilde{I}}^{(\text{dpma},\kappa)}\) and the augmented PMA sequence families \(\widehat{\mathcal{G}}_{I}^{(\text{apma})}\), \(\widehat{\mathcal{G}}_{\widetilde{I}}^{(\text{adpma},\kappa)}\). For a composite length \(N\), family \(\mathcal{G}_{\widetilde{I}}^{(\text{dpma},\kappa)}\) contains orthogonal order-\(\widetilde{I}\) CA sequences \(\mathcal{G}_{\widetilde{I}}\) with a larger family size than family \(\mathcal{G}_{I}^{(\text{pma})}\) by sacrificing the SD order in some cases. When \(N\) meets \(\widetilde{\Omega}(N)>\Omega(N)\), family \(\widehat{\mathcal{G}}_{I}^{(\text{apma})}\) exhibits double the family size of family \(\widehat{\mathcal{G}}_{I}^{(\text{pma})}\) while maintaining the same SD order. Moreover, the degenerate PMA sequence family \(\widehat{\mathcal{G}}_{\widetilde{I}}^{(\text{dpma},\kappa)}\) and the augmented degenerate PMA sequence family \(\widehat{\mathcal{G}}_{\widetilde{I}}^{(\text{adpma},\kappa)}\) are also developed from degenerating families \(\widehat{\mathcal{G}}_{I}^{(\text{pma})}\) and \(\widehat{\mathcal{G}}_{I}^{(\text{apma})}\), respectively, by trading off the SD order.
## III Families \(\mathcal{G}_{\max,\widetilde{I}}^{(\text{dpma},\kappa)}\) and \(\widetilde{\mathcal{G}}_{\max,\widetilde{I}}^{(\text{dpma},\kappa)}\)
Consider a composite length \(N\) with the prime factorization \(N=\prod_{m=0}^{\Omega(N)-1}P_{m}\) and \(\Omega(N)>2\). With a given \(\kappa\in\mathcal{Z}_{\Omega(N)-1}^{+}\), many families \(\mathcal{G}_{\widetilde{I}}^{(\text{dpma},\kappa)}\) can be degenerated from family \(\mathcal{G}_{I}^{(\text{pmn})}\) with identical or less SD order. Based on a particular level-\((\Omega(N)-\kappa)\) factorization \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}A_{m}\) where factors \(A_{m}\) may not be all primes and are arranged in descending order, a family \(\mathcal{G}_{\widetilde{I}}^{(\text{dpma},\kappa)}\) can be constructed by the same PMA method constructing family \(\mathcal{G}_{I}^{(\text{pmn})}\). Specifically, such family \(\mathcal{G}_{\widetilde{I}}^{(\text{dpma},\kappa)}\) contains \(\prod_{m=0}^{\Omega(N)-\kappa-1}(A_{m}-1)\) orthogonal order-\(\widetilde{I}\) CA sequences \(\mathcal{G}_{\widetilde{I}}\) by varying the index vector \(\boldsymbol{\nu}=[v_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}]\) exclusively with \(v_{m}\in\mathcal{Z}_{A_{m}-1}^{+}\) for all \(m\in\mathcal{Z}_{\Omega(N)-\kappa}\), and exhibits the SD order \(\widetilde{I}\geq\Omega(N)-\kappa\) under _Condition A_ and \(\widetilde{I}\geq\lfloor(\Omega(N)-\kappa)/2\rfloor\) under _Condition B_. As \(\Omega(N)\) is odd, any family \(\
\(\prod_{m=0}^{\Omega(N)-\kappa-1}A_{m}^{(\kappa)}\) is said to be a _proper_ level-\((\Omega(N)-\kappa)\) factorization if \(\Psi^{(\text{dpm},\kappa)}(N)\triangleq\prod_{m=0}^{\Omega(N)-\kappa-1}(A_{m}^{( \kappa)}-1)\) is the achievable largest family size among all possible families \(\mathcal{G}_{\widetilde{I}}^{(\text{dpm},\kappa)}\). Such proper level-\((\Omega(N)-\kappa)\) factorization may not be unique. Under a proper factorization, the corresponding family \(\mathcal{G}_{\widetilde{I}}^{(\text{dpm},\kappa)}\) is dubbed \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{dpm},\kappa)}\) for notational convenience. Family \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{dpm},\kappa)}\) consists of \(\Psi^{(\text{dpm},\kappa)}(N)/(A_{\text{max}}^{(\kappa)}-1)\) mutually exclusive CS sequence subfamilies \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{cs},\kappa)}(\textbf{q}_{ \text{lead}})\) for all permissible subfamily leaders \(\textbf{q}_{\text{lead}}\), where \(A_{\text{max}}^{(\kappa)}\triangleq\max_{m\in\mathcal{Z}_{\Omega(N)-\kappa}}A _{m}^{(\kappa)}\). Each subfamily \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{cs},\kappa)}(\textbf{q}_{ \text{lead}})\) contains \(A_{\text{max}}^{(\kappa)}-1\) sequences generated by cyclically shifting the inverse DFT of a subfamily leader \(\textbf{q}_{\text{lead}}\) in family \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{dpm},\kappa)}\) with family CSD \(\varpi_{\mathcal{G}}^{(\kappa)}\triangleq N/A_{\text{max}}^{(\kappa)}\), Below, proper level-\((\Omega(N)-\kappa)\) factorizations with \(\kappa=1\) and \(\kappa=2\) are first developed in closed-form expressions. An exclusive search procedure is then proposed to find proper level-\((\Omega(N)-\kappa)\) factorizations with all \(\kappa\in\mathcal{Z}_{\Omega(N)-2}^{+}\). Last, for sequence lengths \(N\) with \(\Omega(N)>4\), near-proper level-\((\Omega(N)-\kappa)\) factorizations \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}\widetilde{A}_{m}^{(\kappa)}\) for all \(\kappa\in\mathcal{Z}_{\Omega(N)-2}^{+}-\mathcal{Z}_{2}^{+}\) are presented in closed-form expressions to construct another degenerate PMA sequence family \(\widetilde{\mathcal{G}}_{\text{max},\widetilde{I}}^{(\text{dpm},\kappa)}\), which also gives a larger family size than \(\mathcal{G}_{I}^{(\text{pm})}\).
For presentation convenience, prime factors in \(N=\prod_{m=0}^{\Omega(N)-1}P_{m}\) are arranged below in ascending order \(P_{0}\leq P_{1}\leq...\leq P_{\Omega(N)-1}\) for the development of proper and near-proper level-\((\Omega(N)-\kappa)\) factorizations and the developed factors in \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}A_{m}^{(\kappa)}\) and \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}\widetilde{A}_{m}^{(\kappa)}\) are not arranged in any order. Notably, to construct order-\(\widetilde{I}\) CA sequences in families \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{dpm},\kappa)}\) and \(\widetilde{\mathcal{G}}_{\text{max},\widetilde{I}}^{(\text{dpm},\kappa)}\), the developed factors have to be rearranged beforehand in descending order, i.e., \(A_{0}^{(\kappa)}\geq A_{1}^{(\kappa)}\geq...\geq A_{\Omega(N)-\kappa-1}^{( \kappa)}\) and \(\widetilde{A}_{0}^{(\kappa)}\geq\widetilde{A}_{1}^{(\kappa)}\geq...\geq \widetilde{A}_{\Omega(N)-\kappa-1}^{(\kappa)}\).
_A) Proper Level-\((\Omega(N)-1)\) Factorization for \(\Omega(N)>2\):_ With \(N=\prod_{m=0}^{\Omega(N)-1}P_{m}\) and \(\Omega(N)>2\), \(N\) can be factorized into \(\Omega(N)-1\) factors only when two specific prime factors \(P_{i}\) and \(P_{n}\) are chosen from \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) and merged into one composite factor \(P_{i}P_{n}\). Under such factorization, one family \(\mathcal{G}_{\widetilde{I}}^{(\text{dpm},1)}\) can be formed with the family size \(\Psi(N)\times f([P_{i},P_{n}])\), where the function \(f(\mathbf{a}^{t})\) is defined by
\[f(\mathbf{a}^{t})=\frac{\prod_{m\in\mathcal{Z}_{M}}a_{m}-1}{\prod_{m\in \mathcal{Z}_{M}}(a_{m}-1)} \tag{11}\]
with \(\mathbf{a}=[a_{m};m\in\mathcal{Z}_{M}]\) being an \(M\)-tuple argument with all integer-valued entries \(a_{m}>1\). This family size can be maximized by choosing \(P_{i}\) and \(P_{n}\) properly based on _Lemma 1_, which is proven in _Appendix A_.
**Lemma 1**: _Consider two integer-valued \(M\)-tuples \(\mathbf{a}=[a_{m};m\in\mathcal{Z}_{M}]\) and \(\mathbf{b}=[b_{m};m\in\mathcal{Z}_{M}]\). If \(1<a_{m}\leq b_{m}\) for all \(m\in\mathcal{Z}_{M}\), then \(f(\mathbf{a}^{t})\geq f(\mathbf{b}^{t})\). Moreover, \(f(\mathbf{a}^{t})>f(\mathbf{b}^{t})\) if \(1<a_{n}<b_{n}\) for some \(n\in\mathcal{Z}_{M}\) and \(1<a_{m}\leq b_{m}\) for all the other \(m\in\mathcal{Z}_{M}-\{n\}\)._
From _Lemma 1_, the smallest two prime factors should be merged to compose a proper level-\((\Omega(N)-1)\) factorization \(N=\prod_{m=0}^{\Omega(N)-2}A_{m}^{(1)}\) with \(A_{0}^{(1)}=P_{0}P_{1}\) and \(A_{m}^{(1)}=P_{m+1}\) for \(m\in\mathcal{Z}_{\Omega(N)-2}^{+}\). This proper factorization results in the largest family size \(\Psi^{(\text{dpma},1)}(N)=(P_{0}P_{1}-1)\prod_{m=2}^{\Omega(N)-1}(P_{m}-1)\). The corresponding family \(\mathcal{G}_{\max,\widetilde{I}}^{(\text{dpma},1)}\) can provide mutually orthogonal order-\(\widetilde{I}\) CA sequences with \(\widetilde{I}\geq\Omega(N)-1\) under _Condition A_ and \(\widetilde{I}\geq\lfloor(\Omega(N)-1)/2\rfloor\) under _Condition B_.
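As a small numerical illustration of this choice (our own check with an arbitrarily chosen composite length), the snippet below evaluates the family size obtained by merging each pair of prime factors of \(N=360\); merging the two smallest primes indeed maximizes the size, and the resulting \(\Psi^{(\text{dpma},1)}(N)\) clearly exceeds \(\Psi(N)\).

```python
from itertools import combinations
import numpy as np

def prime_factors(n):
    """Prime factors of n with multiplicity, in ascending order."""
    out, p = [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

N = 360
P = prime_factors(N)                                  # [2, 2, 2, 3, 3, 5], Omega(N) = 6
print("Psi(N) =", int(np.prod([p - 1 for p in P])))   # PMA family size

# Family size of every level-(Omega(N)-1) factorization obtained by merging one pair.
sizes = {}
for i, j in combinations(range(len(P)), 2):
    rest = [P[m] for m in range(len(P)) if m not in (i, j)]
    sizes[(P[i], P[j])] = (P[i] * P[j] - 1) * int(np.prod([p - 1 for p in rest]))

best = max(sizes, key=sizes.get)
print(best, sizes[best])                              # the two smallest primes give the maximum
```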
_B) Proper Level-\((\Omega(N)-2)\) Factorization for \(\Omega(N)>3\):_ With \(N=\prod_{m=0}^{\Omega(N)-1}P_{m}\) and \(\Omega(N)>3\), there are two mutually exclusive methods to factorize \(N=\prod_{m=0}^{\Omega(N)-3}A_{m}\) in order to obtain a level-\((\Omega(N)-2)\) factorization. _Method 1_ is to choose any three prime factors from \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) and merge them into one composite factor. _Method 2_ is to choose any four prime factors from \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) and merge them into two composite factors in pairs. Both methods are detailed below._
**Method 1**: _From _Lemma 1_, the smallest three prime factors should be merged in order to maximize the family size when a level-\((\Omega(N)-2)\) factorization is obtained by merging three prime factors. This results in the family size \((P_{0}P_{1}P_{2}-1)\prod_{m=3}^{\Omega(N)-1}(P_{m}-1)\). Thus, one candidate family for \(\mathcal{G}_{\text{max},\widetilde{I}}^{(\text{dpm},2)}\) is based on the candidate factorization \(A_{0}=P_{0}P_{1}P_{2}\) and \(A_{m}=P_{m+2}\) for \(m\in\mathcal{Z}_{\Omega(N)-3}^{+}\)._
**Method 2**: _When a level-\((\Omega(N)-2)\) factorization is obtained by merging four prime factors in pairs, the family size can be maximized by choosing and pairing four prime factors properly based on _Lemma 2_, as proven in _Appendix B_._
**Lemma 2**: _Consider four integers \(P_{a}\), \(P_{b}\), \(P_{c}\) and \(P_{d}\). If \(1<P_{a}\leq P_{b}\leq P_{c}\leq P_{d}\), then \(f([P_{a},P_{d}])\times f([P_{b},P_{c}])\geq f([P_{a},P_{c}])\times f([P_{b},P_{d}]) \geq f([P_{a},P_{b}])\times f([P_{c},P_{d}])\)._
From _Lemma 1_ and _Lemma 2_, the smallest four prime factors should be merged in pairs to form the candidate factorization \(A_{0}=P_{0}P_{3}\), \(A_{1}=P_{1}P_{2}\) and \(A_{m+1}=P_{m+3}\) for \(m\in\mathcal{Z}_{\Omega(N)-4}^{+}\), in order to maximize the family size when a level-\((\Omega(N)-2)\) factorization is obtained by merging four prime factors in pairs. Such factorization results in the other candidate family for \(\mathcal{G}_{\max,\widetilde{I}}^{(\text{dpma},2)}\) having the family size \((P_{0}P_{3}-1)(P_{1}P_{2}-1)\prod_{m=4}^{\Omega(N)-1}(P_{m}-1)\). Comparing the two candidates, a proper level-\((\Omega(N)-2)\) factorization is given by \(A^{(2)}_{0}=P_{0}P_{1}P_{2}\) and \(A^{(2)}_{m}=P_{m+2}\) for \(m\in\mathcal{Z}^{+}_{\Omega(N)-3}\) (i.e., _Method 1_) if \((P_{0}P_{1}P_{2}-1)\prod_{m=3}^{\Omega(N)-1}(P_{m}-1)\geq(P_{0}P_{3}-1)(P_{1}P_{2}-1)\prod_{m=4}^{\Omega(N)-1}(P_{m}-1)\), and by \(A^{(2)}_{0}=P_{0}P_{3}\), \(A^{(2)}_{1}=P_{1}P_{2}\) and
\(A^{(2)}_{m+1}=P_{m+3}\) for \(m\in\mathcal{Z}^{+}_{\Omega(N)-4}\) (i.e., _Method 2_) otherwise. This proper factorization results in the largest family size \(\Psi^{(\text{dpma},2)}(N)=\max\{(P_{0}P_{1}P_{2}-1)\prod_{m=3}^{\Omega(N)-1}(P_{m}-1),(P_{0}P_{3}-1)(P_{1}P_{2}-1)\prod_{m=4}^{\Omega(N)-1}(P_{m}-1)\}\). Based on the proper level-\((\Omega(N)-2)\) factorization, family \(\mathcal{G}^{(\text{dpma},2)}_{\max,\widetilde{I}}\) is constructed by the PMA method and provides mutually orthogonal order-\(\widetilde{I}\) CA sequences with \(\widetilde{I}\geq\Omega(N)-2\) under _Condition A_ and \(\widetilde{I}\geq\lfloor(\Omega(N)-2)/2\rfloor\) under _Condition B_.
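Continuing the same example (our own check; for \(N=360\), Method 2 happens to give the larger of the two candidate sizes), the snippet below evaluates both candidates and takes their maximum as \(\Psi^{(\text{dpma},2)}(N)\).

```python
import numpy as np

def prime_factors(n):
    out, p = [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

P = prime_factors(360)                  # ascending: [2, 2, 2, 3, 3, 5]

# Method 1: merge the three smallest primes into one composite factor.
size1 = (P[0] * P[1] * P[2] - 1) * int(np.prod([p - 1 for p in P[3:]]))
# Method 2: merge the four smallest primes in pairs (P0*P3, P1*P2), per Lemma 2.
size2 = (P[0] * P[3] - 1) * (P[1] * P[2] - 1) * int(np.prod([p - 1 for p in P[4:]]))

print(size1, size2, max(size1, size2))  # Psi^(dpma,2)(360) is the larger of the two
```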
_C) Exclusively Searching a Proper Level-\((\Omega(N)-\kappa)\) Factorization:_
For \(\kappa\in\{3,4,...,\Omega(N)-2\}\), it is difficult to find proper level-\((\Omega(N)-\kappa)\) factorizations in closed-form expressions. An exclusive search procedure is proposed instead to find such proper factorizations.
To obtain a proper level-\((\Omega(N)-\kappa)\) factorization, we need to (i) find all possible factor sets \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) satisfying \(\prod_{m=0}^{\Omega(N)-\kappa-1}A_{m}=\prod_{m=0}^{\Omega(N)-1}P_{m}\) by partitioning the prime factor set \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) into \(\Omega(N)-\kappa\) groups first and then taking all group products in the exclusive manner, and (ii) search for a proper factor set \(\{A^{(\kappa)}_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) which yields the largest family size \(\Psi^{(\text{dpm},\kappa)}(N)=\prod_{m=0}^{\Omega(N)-\kappa-1}(A^{(\kappa)}_ {m}-1)\) among all factor sets. In each partitioning, we denote \(\omega_{m}\) as the number of prime factors in the \(m\)-th group, i.e., \(\omega_{m}=\Omega(A_{m})\) for \(m\in\mathcal{Z}_{\Omega(N)-\kappa}\). Thus, each factor set \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) is characterized by the corresponding omega pattern \(\mathbf{\omega}\triangleq[\omega_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}]\) with \(\sum_{m=0}^{\Omega(N)-\kappa-1}\omega_{m}=\Omega(N)\). The exclusive partitioning can be conducted by searching for all possible omega patterns first and then finding all possible groupings for each pattern \(\mathbf{\omega}\). To avoid repetitive search, \(\mathbf{\omega}\) is limited to have descending entries \(\omega_{0}\geq\omega_{1}\geq...\geq\omega_{\Omega(N)-\kappa-1}\) in the exclusive partitioning. In the following, an exclusive search procedure is proposed accordingly to find a proper level-\((\Omega(N)-\kappa)\) factorization.
_Step 1_: Obtain and store all admissible patterns for \(\mathbf{\omega}\) under the constraints \(\sum_{m=0}^{\Omega(N)-\kappa-1}\omega_{m}=\Omega(N)\) and \(\omega_{0}\geq\omega_{1}\geq...\geq\omega_{\Omega(N)-\kappa-1}\geq 1\) by the process of integer partitioning in [35, Section 1.1]-[36].
_Step 2_: Transform the prime factor set \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) into all possible factor sets \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) characterized by each admissible pattern \(\mathbf{\omega}\) exclusively from Gosper's Hack algorithm [37, Section 7.1.3]-[38]. Compute family sizes \(\prod_{m=0}^{\Omega(N)-\kappa-1}(A_{m}-1)\) for all sought factor sets \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\). Store one candidate factor set which provides the largest family size among all sought factor sets characterized by each admissible pattern \(\mathbf{\omega}\).
_Step 3_: Find a proper level-\((\Omega(N)-\kappa)\) factor set \(\{A^{(\kappa)}_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) by choosing one candidate factor set which yields the largest family size \(\Psi^{(\text{dpm},\kappa)}(N)=\prod_{m=0}^{\Omega(N)-\kappa-1}(A^{(\kappa)}_ {m}-1)\) among all stored factor sets in _Step 2_.
In _Step 1_, the process of integer partitioning in [35, Section 1.1]-[36] finds all possible patterns for \(\mathbf{\omega}\) by dividing the all-one \(\Omega(N)\)-tuple \([1,1,...,1]^{t}\) into the admissible \((\Omega(N)-\kappa)\)-tuple \(\mathbf{\omega}\) in the exclusive manner. For example, the process finds \([3,1,1]^{t}\) and \([2,2,1]^{t}\) by dividing \([1,1,1,1,1]^{t}\) into \([\omega_{0},\omega_{1},\omega_{2}]^{t}\) for \(\Omega(N)=5\) and \(\kappa=2\). In _Step 2_, Gosper's Hack algorithm transforms an omega pattern to all possible binary codewords without repetition in the bitwise manner [38, Algorithm 3.1], as detailed in _Appendix D_. In _Step 3_, a proper level-\((\Omega(N)-\kappa)\) factor set \(\{A^{(\kappa)}_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) is found from the candidate factor sets stored in _Step 2_ by identifying the largest family size. This completes the exclusive search procedure.
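A compact reference implementation of the search is sketched below (our own code: it follows the spirit of _Steps 1-3_ but replaces the integer-partitioning and Gosper's-Hack enumeration with a plain recursion over groupings, which is adequate for moderate \(\Omega(N)\)). It enumerates every way of grouping the prime factors of \(N\) into \(\Omega(N)-\kappa\) composite factors and returns a factor set maximizing \(\prod_{m}(A_{m}-1)\).

```python
from functools import reduce
from operator import mul

def prime_factors(n):
    out, p = [], 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def groupings(items, groups):
    """Yield all partitions of `items` into exactly `groups` nonempty blocks."""
    if not items:
        if groups == 0:
            yield []
        return
    head, rest = items[0], items[1:]
    for part in groupings(rest, groups):          # head joins one of the existing blocks
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
    for part in groupings(rest, groups - 1):      # head opens a new block
        yield [[head]] + part

def proper_factor_set(N, kappa):
    """A proper level-(Omega(N)-kappa) factor set, i.e., one maximizing prod(A_m - 1)."""
    P = prime_factors(N)
    best_size, best_set = -1, None
    for part in groupings(P, len(P) - kappa):
        A = sorted(reduce(mul, block) for block in part)
        size = reduce(mul, (a - 1 for a in A))
        if size > best_size:
            best_size, best_set = size, A
    return best_set, best_size

for kappa in (1, 2, 3):
    print(kappa, proper_factor_set(360, kappa))   # kappa = 1, 2 agree with the closed forms above
```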
_D) Near-Proper Level-\((\Omega(N)-\kappa)\) Factorization for \(\Omega(N)>4\) and \(\kappa\in\{3,4,...,\Omega(N)-2\}\):_
A near-proper level-\((\Omega(N)-\kappa)\) factorization \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}\widetilde{A}_{m}^{(\kappa)}\) with \(\widetilde{A}_{0}^{(\kappa)}\leq\widetilde{A}_{1}^{(\kappa)}\leq...\leq\widetilde{A}_{\Omega(N)-\kappa-1}^{(\kappa)}\) for all \(\kappa\in\mathcal{Z}^{+}_{\Omega(N)-2}-\mathcal{Z}^{+}_{2}\) is proposed here to construct another degenerate PMA sequence family \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\), based on the construction method of the _Proper Level-\((\Omega(N)-2)\) Factorization_ in Subsection III.\(B\). Under such a near-proper factorization, family \(\widetilde{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) exhibits the family size \(\widetilde{\Psi}^{(\text{dpma},\kappa)}(N)\triangleq\prod_{m=0}^{\Omega(N)-\kappa-1}(\widetilde{A}^{(\kappa)}_{m}-1)\). Although \(\widetilde{\Psi}^{(\text{dpma},\kappa)}(N)\leq\Psi^{(\text{dpma},\kappa)}(N)\), the near-proper level-\((\Omega(N)-\kappa)\) factorization can be obtained simply in a closed-form expression without resorting to exclusive searching.
Following Subsection III.\(B\), a _near-proper_ level-\((\Omega(N)-\kappa)\) factorization for \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}\widetilde{A}^{(\kappa)}_{m}\) is obtained from a given level-\((\Omega(N)-\kappa+2)\) factorization \(N=\prod_{m=0}^{\Omega(N)-\kappa+1}\widetilde{A}^{(\kappa-2)}_{m}\) with the arranged order \(\widetilde{A}^{(\kappa-2)}_{0}\leq\widetilde{A}^{(\kappa-2)}_{1}\leq...\leq\widetilde{A}^{(\kappa-2)}_{\Omega(N)-\kappa+1}\). Specifically, a near-proper level-\((\Omega(N)-\kappa)\) factorization for \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}\widetilde{A}^{(\kappa)}_{m}\) is obtained by setting \(\widetilde{A}^{(\kappa)}_{0}=\widetilde{A}^{(\kappa-2)}_{0}\widetilde{A}^{(\kappa-2)}_{1}\widetilde{A}^{(\kappa-2)}_{2}\) and \(\widetilde{A}^{(\kappa)}_{m}=\widetilde{A}^{(\kappa-2)}_{m+2}\) for \(m\in\mathcal{Z}^{+}_{\Omega(N)-\kappa-1}\) if \(\widetilde{A}^{(\kappa-2)}_{1}\widetilde{A}^{(\kappa-2)}_{2}<\widetilde{A}^{(\kappa-2)}_{3}\), and by setting \(\widetilde{A}^{(\kappa)}_{0}=\widetilde{A}^{(\kappa-2)}_{0}\widetilde{A}^{(\kappa-2)}_{3}\), \(\widetilde{A}^{(\kappa)}_{1}=\widetilde{A}^{(\kappa-2)}_{1}\widetilde{A}^{(\kappa-2)}_{2}\) and \(\widetilde{A}^{(\kappa)}_{m+1}=\widetilde{A}^{(\kappa-2)}_{m+3}\) for \(m\in\mathcal{Z}^{+}_{\Omega(N)-\kappa-2}\), otherwise. For \(\kappa\in\mathcal{Z}^{+}_{2}\), \(\{\widetilde{A}^{(\kappa)}_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) is initially assigned by \(\widetilde{A}^{(\kappa)}_{m}=A^{(\kappa)}_{m}\) where \(N=\prod_{m=0}^{\Omega(N)-\kappa-1}A^{(\kappa)}_{m}\) is a proper level-\((\Omega(N)-\kappa)\) factorization with the arranged order \(A^{(\kappa)}_{0}\leq A^{(\kappa)}_{1}\leq...\leq A^{(\kappa)}_{\Omega(N)-\kappa-1}\).
Notably, families \(\widetilde{\mathcal{G}}^{(\text{dpm},\kappa)}_{\text{max}\
family size, the supporting factor set, and the family CSD are demonstrated for each family. As shown, \(\widetilde{\mathcal{G}}_{\max,\widetilde{\widetilde{\widetilde{\widetilde{I}}}}}^ {(\text{dpma},\kappa)}\) and \(\mathcal{G}_{\max,\widetilde{\widetilde{\widetilde{\widetilde{I}}}}}^{(\text{ dpma},\kappa)}\) provide much larger family sizes than \(\mathcal{G}_{I}^{(\text{pma})}\) and offer the larger family sizes as \(\kappa\) increases, but they may entail reduced SD order \(\widetilde{\widetilde{I}}\geq\Omega(N)-\kappa\) under _Condition A_ and \(\widetilde{\widetilde{I}}\geq\lfloor(\Omega(N)-\kappa)/2\rfloor\) under _Condition B_. For a fixed \(\kappa\in\{3,4,...,\Omega(N)-2\}\), the family size of \(\widetilde{\mathcal{G}}_{\max,\widetilde{\widetilde{
\(\widetilde{N}^{(L-1)}\), respectively. Such family \(\widehat{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) has the family size \(\widehat{\Psi}^{(\text{dpma},\kappa)}(N)\triangleq\min_{\rho\in\mathcal{Z}_{L}}\prod_{m=0}^{\Omega(\widetilde{N}^{(\rho)})-\kappa-1}(A_{m}^{(\rho,\kappa)}-1)\). Notably, \(\widehat{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) tends to offer a larger family size as \(\kappa\) is increased, but it may reduce the SD order to \(\widetilde{I}\geq\widetilde{\Omega}(N)-\kappa\) under _Condition A_ and \(\widetilde{I}\geq\left\lfloor(\widetilde{\Omega}(N)-\kappa)/2\right\rfloor\) under _Condition B_.
_B) Augmented PMA Sequence Families \(\widehat{\mathcal{G}}^{(\text{apma})}_{I}\) and \(\widehat{\mathcal{G}}^{(\text{adpma},\kappa)}_{\max,\widetilde{I}}\) for \(\kappa\in\mathcal{Z}^{+}_{\widetilde{\Omega}(N)-1}\):_ Augmented PMA sequence family \(\widehat{\mathcal{G}}^{(\text{apma})}_{I}\) expands from family \(\widehat{\mathcal{G}}^{(\text{pma})}_{I}\) with the same SD order and double the family size, by virtue of phase-rotating every existing sequence in family \(\widehat{\mathcal{G}}^{(\text{pma})}_{I}\) to generate more mutually orthogonal sequence members. Similarly, augmented degenerate PMA sequence family \(\widehat{\mathcal{G}}^{(\text{adpma},\kappa)}_{\max,\widetilde{I}}\) expands from \(\widehat{\mathcal{G}}^{(\text{dpma},\kappa)}_{\max,\widetilde{I}}\) and offers double the family size while sustaining the same SD order. In what follows, the phase-rotating method constructing family \(\widehat{\mathcal{G}}^{(\text{apma})}_{I}\) is described in detail, and such method is also applied to construct family \(\widehat{\mathcal{G}}^{(\text{adpma},\kappa)}_{\max,\widetilde{I}}\).
Consider one sequence \(\widehat{\mathcal{G}}_{I}\) in family \(\widehat{\mathcal{G}}^{(\text{pma})}_{I}\), which is described by \(\boldsymbol{\chi}=[\boldsymbol{\chi}_{0}^{t},\boldsymbol{\chi}_{1}^{t},...,\boldsymbol{\chi}_{L-1}^{t}]^{t}\) with subsequences \(\boldsymbol{\chi}_{\rho}\) of lengths \(\widetilde{N}^{(\rho)}\) for \(\rho\in\mathcal{Z}_{L}\).
for each family. As shown, family \(\widehat{\mathcal{G}}_{\max,\widetilde{I}}^{(\text{dpma},\kappa)}\) offers a larger family size as \(\kappa\) increases and a much larger family size than \(\widehat{\mathcal{G}}_{I}^{(\text{pma})}\). Nevertheless, \(\widehat{\mathcal{G}}_{\max,\widetilde{I}}^{(\text{dpma},\kappa)}\) may entail reduced SD order \(\widetilde{I}\geq\widetilde{\Omega}(N)-\kappa\) under _Condition A_ and \(\widetilde{I}\geq\left\lfloor(\widetilde{\Omega}(N)-\kappa)/2\right\rfloor\) under _Condition B_. For a fixed \(\kappa\in\mathcal{Z}_{\widetilde{\Omega}(N)-1}^{+}\), family \(\widehat{\mathcal{G}}_{\max,\widetilde{I}}^{(\text{adpma},\kappa)}\) exhibits double the size of family \(\widehat{\mathcal{G}}_{\max,\widetilde{I}}^{(\text{dpma},\kappa)}\) while sustaining the same SD order. Moreover, the family sizes of \(\widehat{\mathcal{G}}_{\max,\widetilde{I}}^{(\text{adpma},\kappa)}\) with large \(\kappa\) values approach a good portion of the largest achievable size \(\Psi_{\max}(N)\). The latter reveals the advantage of augmented degenerate PMA sequence families.
_Remark 1:_ Orthogonal Zadoff-Chu sequences are desirable for the RA application in 5G NR. Although \(N\) orthogonal ZC sequences of length \(N\) can be easily generated by cyclically shifting the inverse DFT of a given ZC sequence with an admissible root index \(\varsigma\) (relatively prime to \(N\)), a large minimum CSD \(\varpi_{\text{min}}\) is generally required to identify received orthogonal ZC sequences transmitted from transmitters located in various locations in the same cell. The larger the cell radius, the larger the required \(\varpi_{\text{min}}\). As a result, nonorthogonal ZC sequences with different admissible root indices are commonly employed for RA requiring a large number of short-length identification sequences under a limited \(\varpi_{\text{min}}\). In 5G NR, there are \(64\) RA identification sequences required in each cell, and many large values for \(\varpi_{\text{min}}\) are specified in [2, Tables 5-7 in Section 6.3.3.1] for the adopted ZC sequences of different lengths \(N=139\), \(571\), \(839\), and \(1151\). In these specifications, the maximum number of orthogonal ZC sequences is limited to \(\lfloor N/\varpi_{\text{min}}\rfloor<64\) for many specified pairs \((N,\varpi_{\text{min}})\). For example with \(N,\varpi_{\text{min}})=(839,26)\), only \(32\) orthogonal ZC sequences can be generated by cyclically shifting the inverse DFT of a given ZC sequence with an admissible root index \(\varsigma_{1}\). In this case, additional nonorthogonal ZC sequences are added in [2, Section 6.3.3.1] by cyclically shifting the inverse DFT of another ZC sequence with an admissible root index \(\varsigma_{2}\) so that all \(64\) sequences are collected. As shown in Table II(c), families \(\widehat{\mathcal{G}}_{\max,\bar{T}}^{(\text{dpma},\kappa)}\) with \(\kappa\in\mathcal{Z}_{3}^{+}\) and families \(\widehat{\mathcal{G}}_{\max,\bar{T}}^{(\text{dpma},\kappa)}\) with \(\kappa\in\{2,3\}\) can provide more than \(64\) orthogonal order-\(\widetilde{I}\) CA sequences and thus outperform the adopted ZC sequences [2, Section 6.3.3.1] in RA performance while providing the higher spectral compactness.
## V Random-Access Channel Identification
This section demonstrates the performance characteristics of uplink RA channel identification based on the reception of the OFDM preamble waveforms carrying identification sequences from various CA sequence families, including modified PMA, ZC, YL, and PN sequence families, over Rayleigh multipath channels. Here, the interleaving factor \(\gamma=1\) is considered. The spectral compactness of the various OFDM preamble waveforms is also shown to justify the compactness gains achieved by the use of order-\(I\) CA sequences.
Consider the scenario that a single user terminal transmits a sequence \(\mathbf{q}_{k}\triangleq[q_{k}[n];n\in\mathcal{Z}_{N}]\) from the family of \(J\) CA sequences \(\{\mathbf{q}_{i};i\in\mathcal{Z}_{J}\}\) for identifying the availability of the
\(k\)-th access channel [1]-[2]. After applying down-conversion, CP removal, and DFT to the received OFDM preamble signal, the basestation receiver observes the frequency-domain vector \(\mathbf{r}\triangleq[r[n];n\in\mathcal{Z}_{N}]\) modeled as [17, 19]
\[r[n]=N^{1/2}q_{k}[n]h[n]+z[n]. \tag{13}\]
Here, \(\mathbf{z}\triangleq[z[n];n\in\mathcal{Z}_{N}]\) contains independent and identically distributed circularly symmetric complex Gaussian (CSCG) noise samples with mean zero and variance \(\mathcal{E}\{|z[n]|^{2}\}=1/\varphi\), where \(\varphi\) is the received signal-to-noise power ratio (SNR). \(\mathbf{h}\triangleq[h[n];n\in\mathcal{Z}_{N}]\) is the channel frequency response (CFR) vector corresponding to the channel impulse response (CIR) \(\{\widetilde{h}[l],\tau_{l};l\in\mathcal{Z}_{L_{h}}\}\) with \(L_{h}\) resolvable paths, given by
\[h[n]=\sum_{l\in\mathcal{Z}_{L_{h}}}\widetilde{h}[l]e^{-j2\pi\triangle fn\tau_{l }}\text{ for all }n\in\mathcal{Z}_{N} \tag{14}\]
where \(\triangle f=1/T_{\text{d}}\) is the subcarrier frequency spacing and \(\tau_{l}\) denotes the \(l\)-th path delay value with \(0\leq\tau_{0}<\tau_{1}<...<\tau_{L_{h}-1}\leq T_{\text{g}}\). Moreover, all path gains \(\{\widetilde{h}[l];l\in\mathcal{Z}_{L_{h}}\}\) are modeled to be independent CSCGs having common mean zero and path powers \(\mathcal{E}\{|\widetilde{h}[l]|^{2}\}=\sigma_{l}^{2}\) for \(l\in\mathcal{Z}_{L_{h}}\) with \(\sum_{l\in\mathcal{Z}_{L_{h}}}\sigma_{l}^{2}=1\), and also independent of all noise samples \(\{z[n];n\in\mathcal{Z}_{N}\}\). The RA channel identification is based on the correlations \(\{\mathbf{q}_{i}^{h}\mathbf{r};i\in\mathcal{Z}_{J}\}\), with
\[\mathbf{q}_{i}^{h}\mathbf{r} = N^{1/2}\sum_{l\in\mathcal{Z}_{L_{h}}}\widetilde{h}[l]\sum_{n\in \mathcal{Z}_{N}}q_{i}^{*}[n]q_{k}[n]e^{-j2\pi\triangle fn\tau_{l}} \tag{15}\] \[+\sum_{n\in\mathcal{Z}_{N}}q_{i}^{*}[n]z[n].\]
To identify \(\mathbf{q}_{k}\), the squared correlation magnitudes \(Y(\mathbf{q}_{i})=|\mathbf{q}_{i}^{h}\mathbf{r}|^{2}\) are measured and compared with a positive threshold \(\beta\) for all \(i\in\mathcal{Z}_{J}\). When \(Y(\mathbf{q}_{i})\) is greater than \(\beta\), the \(i\)-th access channel is declared to be requested [17]-[18], [20]-[21].
For \(i\in\mathcal{Z}_{J}\), \(\mathbf{q}_{i}^{h}\mathbf{r}\) is a CSCG having zero mean and variance \(\mathcal{E}\{|\mathbf{q}_{i}^{h}\mathbf{r}|^{2}\}=\frac{1}{\varphi}+\sigma_{ \text{ie}}^{2}(i,k)\) if \(i\neq k\) and \(\mathcal{E}\{|\mathbf{q}_{i}^{h}\mathbf{r}|^{2}\}=\frac{1}{\varphi}+\sigma_{ \text{c}}^{2}\) otherwise, where \(\sigma_{\text{ie}}^{2}(i,k)\triangleq N\sum_{l\in\mathcal{Z}_{L_{h}}}\sigma_{ l}^{2}|\sum_{n\in\mathcal{Z}_{N}}q_{i}^{*}[n]q_{k}[n]e^{-j2\pi\triangle fn\tau_{l}}|^{2}\) is the variance of the FIE term occurring when \(\mathbf{q}_{i}\) does not match the identification sequence \(\mathbf{q}_{k}\) and \(\sigma_{\text{c}}^{2}=\frac{1}{N}\sum_{l\in\mathcal{Z}_{L_{h}}}\sigma_{l}^{2}| \sum_{n\in\mathcal{Z}_{N}}e^{-j2\pi\triangle fn\tau_{l}}|^{2}\) is the signaling variance when \(\mathbf{q}_{i}\) matches \(\mathbf{q}_{k}\) correctly. Given the statistic of \(\mathbf{q}_{i}^{h}\mathbf{r}\), \(Y(\mathbf{q}_{i})\) is a central chi-square random variable with two degrees of freedom [40].
Three measures \(P_{\text{fa}}\), \(P_{\text{fid},k}\), and \(P_{\text{c}}\) are defined herein to quantify the performance of the threshold-based identification scheme. The false alarm probability \(P_{\text{fa}}\) denotes the probability of misidentifying \(\mathbf{q}_{i}\) when there is no request (i.e., \(r[n]=z[n]\) for all \(n\in\mathcal{Z}_{N}\)), defined by \(P_{\text{fa}}\triangleq\Pr\{Y(\mathbf{q}_{i})>\beta|\text{no request}\}\) for some \(i\in\mathcal{Z}_{J}\) and given by \(P_{\text{fa}}=e^{-\beta\varphi}\), which is invariant with \(\mathbf{q}_{i}\). The average false identification probability \(P_{\text{fid},k}\) is the average probability of identifying the request of an access channel other than the \(k\)-th channel that was actually requested [20, Subsection IV._D_], and given by
\[P_{\text{fid},k} \triangleq \frac{1}{J-1}\sum_{i\in\mathcal{Z}_{J},i\neq k}\Pr\{Y(\mathbf{q} _{i})>\beta|\mathbf{q}_{k}\text{ was requested}\} \tag{16}\] \[= \frac{1}{J-1}\sum_{i\in\mathcal{Z}_{J},i\neq k}e^{-\beta\varphi/(1+ \varphi\sigma_{\text{ie}}^{2}(i,k))}.\]
From the union bound argument, \((J-1)P_{\text{fid},k}\) is also an upper bound to the probability of identifying the request of _any_ access channel other than the \(k\)-th channel that was actually requested [21, Subsection III._D_]. The correct identification probability \(P_{\text{c}}\) is the average probability of identifying the request of the \(k\)-th access channel correctly, defined by \(P_{\text{c}}\triangleq\Pr\{Y(\mathbf{q}_{k})>\beta|\mathbf{q}_{k}\text{ was requested}\}\) and given by \(P_{\text{c}}=e^{-\beta\varphi/(1+\varphi\sigma_{\text{c}}^{2})}\), which does not depend on \(\mathbf{q}_{k}\). The identification scheme performs well when \(P_{\text{c}}\) is made as large as possible while \(P_{\text{fa}}\) and all \(P_{\text{fid},k}\) are restricted to be small. This can be achieved by properly setting the threshold \(\beta\) since \(P_{\text{fa}}\), \(P_{\text{fid},k}\), and \(P_{\text{c}}\) increase as \(\beta\) is decreased for a given SNR \(\varphi\). When the channel is flat fading (i.e., \(h[n]=\widetilde{h}[0]\) for all \(n\in\mathcal{Z}_{N}\), or equivalently \(L_{h}=1\), \(\tau_{0}=0\), and \(\sigma_{0}^{2}=1\)), \(\mathbf{q}_{i}^{h}\mathbf{r}\) for \(i\neq k\) simplifies to a CSCG with mean zero and variance \(\mathcal{E}\{|\mathbf{q}_{i}^{h}\mathbf{r}|^{2}\}=\frac{1}{\varphi}+\widetilde{\sigma}_{\text{ie}}^{2}(i,k)\), where \(\widetilde{\sigma}_{\text{ie}}^{2}(i,k)=N|\sum_{n\in\mathcal{Z}_{N}}q_{i}^{*}[n]q_{k}[n]|^{2}\). In this case, \(P_{\text{fid},k}\) in (16) achieves the minimum \(P_{\text{fid},\text{min}}=e^{-\beta\varphi}\) when all sequences in the family \(\{\mathbf{q}_{i};i\in\mathcal{Z}_{J}\}\) are mutually orthogonal. Moreover, \(P_{\text{c}}\) achieves the maximum \(P_{\text{c},\text{max}}=e^{-\beta\varphi/(1+\varphi N)}\). When the coherence bandwidth \(B_{\text{c}}\approx 1/(5\sigma_{\text{rms}})\) [41, Chapter 4, eq. 39] is much larger than the signaling bandwidth \(N\gamma/T_{\text{d}}\) (i.e., \(\tau_{0}=0\) and \(\sigma_{l}^{2}\ll\sigma_{0}^{2}\) for all \(l\neq 0\)), \(\sigma_{\text{ie}}^{2}(i,k)\) approaches \(\widetilde{\sigma}_{\text{ie}}^{2}(i,k)\) for \(i\neq k\) and \(P_{\text{fid},k}\) is expected to get close to \(P_{\text{fid},\text{min}}\) if all sequences in \(\{\mathbf{q}_{i};i\in\mathcal{Z}_{J}\}\) are orthogonal, where \(\sigma_{\text{rms}}\) is the root mean square delay spread in the CIR. As thus implied, smaller \(P_{\text{fid},k}\) values can be achieved when there are more orthogonal sequences in \(\{\mathbf{q}_{i};i\in\mathcal{Z}_{J}\}\) available for RA channel identification over multipath channels with large coherence bandwidth, or equivalently short-delay channel profiles.
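To connect the expressions above, the following sketch (a self-contained numerical illustration with an assumed two-path power-delay profile and example sequence families of our own choosing, not the Fig. 1 setup) evaluates \(\sigma^{2}_{\text{ie}}(i,k)\), \(P_{\text{fa}}\), the average of \(P_{\text{fid},k}\) from (16), and \(P_{\text{c}}\) for an orthogonal CA family (frequency-domain phase ramps spaced by \(N/J\), i.e., well-separated cyclic time shifts) and for a nonorthogonal random CA family; the threshold is set so that \(P_{\text{fa}}=10^{-5}\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, snr = 64, 8, 10.0                 # sequence length, family size, SNR (10 dB, linear)
beta = 5 * np.log(10) / snr             # threshold giving P_fa = 1e-5

# Assumed two-path power-delay profile; delays normalized to T_d (so df * tau = delay).
delays, powers = np.array([0.0, 0.01]), np.array([0.8, 0.2])
n = np.arange(N)
steer = np.exp(-1j * 2 * np.pi * np.outer(delays, n))   # e^{-j 2 pi df n tau_l}, one row per path

def identification_probs(Q):
    """Q: J x N matrix of unit-energy CA sequences (rows). Returns (P_fa, average P_fid, P_c)."""
    sigma_c2 = np.sum(powers * np.abs(steer.sum(axis=1))**2) / N       # signaling variance
    p_fid = []
    for k in range(J):
        for i in range(J):
            if i == k:
                continue
            cross = steer @ (np.conj(Q[i]) * Q[k])       # sum_n q_i^*[n] q_k[n] e^{-j2 pi df n tau_l}
            sigma_ie2 = N * np.sum(powers * np.abs(cross)**2)          # FIE variance
            p_fid.append(np.exp(-beta * snr / (1 + snr * sigma_ie2)))  # one (i, k) term of eq. (16)
    return np.exp(-beta * snr), np.mean(p_fid), np.exp(-beta * snr / (1 + snr * sigma_c2))

# Orthogonal CA family: frequency-domain phase ramps spaced by N/J
# (equivalently, cyclic time shifts separated by N/J samples).
Q_orth = np.exp(1j * 2 * np.pi * np.outer(np.arange(J) * (N // J), n) / N) / np.sqrt(N)
# Nonorthogonal CA family: independent random unit-modulus symbols.
Q_rand = np.exp(1j * 2 * np.pi * rng.random((J, N))) / np.sqrt(N)

print(identification_probs(Q_orth))     # average P_fid stays close to P_fa
print(identification_probs(Q_rand))     # average P_fid is markedly larger
```

In this sketch, orthogonality keeps the average \(P_{\text{fid},k}\) near \(P_{\text{fa}}\) as long as the delay spread is small relative to the subcarrier spacing, which is exactly the short-delay regime discussed above.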
A total of \(64\) ZC sequences are required for RA channel identification in uplink 5G-NR [2, Section 6.3.3.1]. To avoid sequence identification ambiguity, a minimum CSD \(\varpi_{\text{min}}\) is required to extract cyclically-shiftable ZC sequences through cyclically shifting the inverse DFT of a single-root ZC sequence. As mentioned in _Remark 1_, this causes a shortage of adoptable cyclically-shiftable ZC sequences for most specified \((N,\varpi_{\text{min}})\) pairs. In [14], orthogonal YL sequences are constructed by phase-rotating the ZC sequences generated from cyclically shifting the inverse DFT of a single-root ZC sequence, and a minimum CSD is likewise required before a cyclically shifted
sequence can be used to generate orthogonal YL sequences. The latter limits the number of adoptable orthogonal YL sequences as well in order to avoid sequence identification ambiguity. For example, we consider a particular RA system parameter profile in Table III [2] which adopts the sequence length \(N=839\) and the minimum CSD limit \(26\). In this case, at most \(\lfloor 839/26\rfloor=32\) cyclically-shiftable ZC and YL sequences can be respectively adopted, and thus nonorthogonal sequences have to be augmented in [2, Section 6.3.3.1] since \(64\) RA channels are to be identified. The characteristics of the average false identification probability \(\frac{1}{J}\sum_{k\in\mathcal{Z}_{J}}P_{\text{fid},k}\) versus SNR \(\varphi\) are demonstrated in Fig. 1 by simulating the threshold-based RA channel identification using such ZC and YL sequence families under three different channel profiles, namely the TDL-B urban micro street-canyon (UMI-SC) short-delay profile (exhibiting \(\sigma_{\text{rms}}=65\) ns and \(B_{\text{c}}\approx 2.93\times N\gamma/T_{\text{d}}\)) and the TDL-B indoor (IND) short-delay profile (exhibiting \(\sigma_{\text{rms}}=20\) ns and \(B_{\text{c}}\approx 9.54\times N\gamma/T_{\text{d}}\)) in [42, Section 7.7.2], as well as the benchmarking flat fading (FF) channel profile (exhibiting an infinitely large \(B_{\text{c}}\)). Notably, the UMI-SC short-delay profile exhibits a longer delay spread than the IND short-delay profile, and thus results in a smaller coherence bandwidth. Also compared in Fig. 1 are RA channel identification systems using the orthogonal sequence families \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{adpma},1)}\) and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{dpma},2)}\), and a nonorthogonal PN sequence family. All \(64\) PN sequences are constructed from the generator polynomial \(X^{15}+X^{14}+1\) with minimum CSD \(26\) [3, Section 9.7.1]. As described in Table II(c), families \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{adpma},1)}\) and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{dpma},2)}\) can provide \(96\) and \(112\) orthogonal sequences, respectively, and \(64\) sequences are randomly chosen from them in the simulation. To achieve an extremely small \(P_{\text{fa}}=10^{-5}\), the threshold value is set to \(\beta=\frac{5}{\varphi}\ln 10\) for a given SNR \(\varphi\), and in this case the correct identification probability is \(P_{\text{c}}=10^{-5/(1+\varphi\sigma_{\text{ie}}^{2})}\). For the SNR range demonstrated in Fig. 1, \(1-P_{\text{c}}\) falls in the ranges \([8.85\times 10^{-4},8.42\times 10^{-2}]\), \([8.67\times 10^{-4},8.25\times 10^{-2}]\), and \([8.65\times 10^{-4},8.23\times 10^{-2}]\) for the UMI-SC, IND, and FF channel profiles, respectively. Due to the adoption of nonorthogonal sequences, RA channel identification suffers from stronger interference (i.e., larger \(\sigma_{\text{ie}}^{2}(i,k)\)) and thus entails serious false identification for the systems using the ZC, YL, and PN sequence families. In contrast, false identification is less severe for the systems using the orthogonal sequence families \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{dpma},2)}\) and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{adpma},1)}\), particularly in the multipath channels exhibiting larger coherence bandwidths.
Fig. 2 compares the spectral compactness characteristics of all the OFDM preamble waveforms adopted in Fig. 1. To compare the spectral compactness among various waveforms, the average out-of-band power fraction is defined as
\[\eta\triangleq 10\log_{10}(\frac{1}{J}\sum_{i\in\mathcal{Z}_{J}}(\int_{|f|>B/2}S _{B}^{(i)}(f)df/\int_{-\infty}^{\infty}S_{B}^{(i)}(f)df))\]
where \(S_{B}^{(i)}(f)\) is the baseband power spectrum of the waveform carrying \(\mathbf{q}_{i}\) [9]. The results on \(\eta\) are presented with respect to the normalized bandwidth \(BT_{\text{d}}/(\gamma N)\). For a predetermined \(\eta\) (say \(-50\) dB), the smaller the required bandwidth, the higher the spectral compactness. As shown, preamble waveforms carrying order-\(\widetilde{I}\) CA sequence families \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{adpma},1)}\) (yielding SD order \(\widetilde{I}\geq 2\)) and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{dpma},2)}\) (yielding SD order \(\widetilde{I}\geq 1\)) can provide much higher spectral compactness than preamble waveforms carrying the ZC, YL, and PN sequence families.
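As a rough numerical illustration of how \(\eta\) is computed, the sketch below integrates a discretized baseband power spectrum inside and outside the bandwidth \(B\) and averages the resulting out-of-band fractions over the family; the Gaussian-shaped spectra used here are placeholders standing in for the actual preamble spectra.

```python
import numpy as np

def out_of_band_fraction_db(freqs, spectra, bandwidth):
    """Average out-of-band power fraction eta in dB over a family of J spectra.

    freqs     : frequency grid (same units as bandwidth)
    spectra   : array of shape (J, len(freqs)) holding S_B^(i)(f)
    bandwidth : two-sided bandwidth B; out-of-band means |f| > B/2
    """
    inband = np.abs(freqs) <= bandwidth / 2.0
    total = np.trapz(spectra, freqs, axis=1)
    inside = np.trapz(spectra[:, inband], freqs[inband], axis=1)
    fractions = 1.0 - inside / total          # out-of-band power fraction per waveform
    return 10.0 * np.log10(np.mean(fractions))

# Placeholder family of J = 4 Gaussian-shaped spectra with slightly different widths.
f = np.linspace(-5.0, 5.0, 4001)
widths = np.array([0.8, 0.9, 1.0, 1.1])
S = np.exp(-(f[None, :] / widths[:, None]) ** 2)
print(out_of_band_fraction_db(f, S, bandwidth=4.0))
```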
## VI Conclusion
Several modified PMA sequence families are constructed in this paper to provide more orthogonal order-\(I\) CA sequences for SPI applications, while facilitating the composition of spectrally compact OFDM preamble/pilot waveforms. The higher the sidelobe-decaying order, the higher the spectral compactness exhibited by the preamble/pilot waveform. By use of the developed orthogonal order-\(I\) CA sequences, an SPI system requiring a large number of identification/sounding sequences can achieve better performance in multipath channels exhibiting short-delay channel profiles, while exhibiting high spectral compactness. Specifically, degenerate PMA sequence families \(\mathcal{G}_{\text{max},\widetilde{T}}^{(\text{doma},\kappa)}\), \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{doma},\kappa)}\), and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{dpma},\kappa)}\) are constructed by properly factorizing the sequence length in the construction of PMA sequences with or without reducing the sidelobe-decaying order. Augmented PMA sequence families \(\widehat{\mathcal{G}}_{I}^{(\text{apma})}\) and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{adpma},\kappa)}\) are further constructed to double the family size by augmenting the phase-rotated replicas of all PMA sequences in families \(\widehat{\mathcal{G}}_{I}^{(\text{pma})}\) and \(\widehat{\mathcal{G}}_{\text{max},\widetilde{T}}^{(\text{dpma},\kappa)}\), respectively, without trading off the sidelobe-decaying order. When compared with conventional Zadoff-Chu, Yu-Lee, and pseudorandom-noise CA sequence families, these modified
Fig. 1: The characteristics of the average false identification probability versus SNR among the RA channel identification systems using various sequence families under TDL-B UMI-SC short-delay channel profile, TDL-B IND short-delay channel profile, and FF channel profile.
PMA sequence families are shown to provide noticeable performance improvement in random-access channel identification over indoor and urban multipath environments exhibiting short-delay channel profiles. Meanwhile, the preamble/pilot waveforms carrying order-\(I\) CA sequences in these modified PMA sequence families offer much higher spectral compactness than those carrying conventional CA sequences.
## Appendix
_A) Proof of Lemma 1:_ Consider two \(M\)-tuples \(\mathbf{1}_{m}\) and \(\mathbf{x}=[x_{m};m\in\mathcal{Z}_{M}]\) where all entries \(x_{m}\) are integers greater than one and \(\mathbf{1}_{m}\) contains one at the \(m\)-th entry and \(M-1\) zeros elsewhere. With (11), \(f(\mathbf{x}^{t}+k_{m}\mathbf{1}_{m}^{t})-f(\mathbf{x}^{t})\) is given by
\[f(\mathbf{x}^{t}+k_{m}\mathbf{1}_{m}^{t})-f(\mathbf{x}^{t})\] \[= \frac{(x_{m}+k_{m})\prod_{i\neq m}x_{i}-1}{(x_{m}+k_{m}-1)\prod_{i \neq m}(x_{i}-1)}-\frac{\prod_{i}x_{i}-1}{\prod_{i}(x_{i}-1)}\] \[= \frac{1}{\prod_{i\neq m}(x_{i}-1)}[\frac{(x_{m}+k_{m})\prod_{i \neq m}x_{i}-1}{x_{m}+k_{m}-1}-\frac{\prod_{i}x_{i}-1}{x_{m}-1}]\] \[= \frac{k_{m}(1-\prod_{i\neq m}x_{i})}{(x_{m}+k_{m}-1)\prod_{i}(x_{i} -1)}\]
for \(m\in\mathcal{Z}_{M}\) and it is negative when \(k_{m}\) is a positive integer. Thus, \(f(\mathbf{x}^{t}+k_{m}\mathbf{1}_{m}^{t})<f(\mathbf{x}^{t})\) if the integer \(k_{m}\) is positive and obviously \(f(\mathbf{x}^{t}+k_{m}\mathbf{1}_{m}^{t})=f(\mathbf{x}^{t})\) if \(k_{m}=0\).
Next, define another \(M\)-tuple \(\mathbf{k}=\mathbf{b}-\mathbf{a}\) and express \(\mathbf{b}\) in terms of \(\mathbf{a}\) and \(\mathbf{k}\) as
\[\mathbf{b}=\mathbf{a}+\mathbf{k}=\mathbf{a}+\sum\nolimits_{m=0}^{M-1}k_{m} \mathbf{1}_{m}\]
where all integer-valued entries \(k_{m}\) in \(\mathbf{k}=[k_{m};m\in\mathcal{Z}_{M}]\) are nonnegative and all integer-valued entries \(a_{m}\) and \(b_{m}\) in \(\mathbf{a}=[a_{m};m\in\mathcal{Z}_{M}]\) and \(\mathbf{b}=[b_{m};m\in\mathcal{Z}_{M}]\) are greater than one. With \(f(\mathbf{x}^{t}+k_{m}\mathbf{1}_{m}^{t})<f(\mathbf{x}^{t})\) for a positive \(k_{m}\), we have \(f(\mathbf{b}^{t})\leq f(\mathbf{a}^{t}+\sum\nolimits_{m=0}^{M-2}k_{m}\mathbf{1}_{m}^{t})\leq...\leq f(\mathbf{a}^{t}+k_{0}\mathbf{1}_{0}^{t})\leq f(\mathbf{a}^{t})\). Thus, \(f(\mathbf{a}^{t})\geq f(\mathbf{b}^{t})\) if \(1<a_{n}\leq b_{n}\) for all \(n\in\mathcal{Z}_{M}\), and \(f(\mathbf{a}^{t})>f(\mathbf{b}^{t})\) if \(1<a_{n}<b_{n}\) for some \(n\in\mathcal{Z}_{M}\) and \(1<a_{m}\leq b_{m}\) for all \(m\in\mathcal{Z}_{M}-\{n\}\). This completes the proof.
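As a quick numerical sanity check of the proof, the sketch below evaluates \(f\) in the form used above, \(f(\mathbf{x})=(\prod_{i}x_{i}-1)/\prod_{i}(x_{i}-1)\), verifies the closed-form difference derived in the first step, and confirms the monotonicity claimed by Lemma 1 on random integer tuples.

```python
import numpy as np
from math import prod

def f(x):
    # f as used throughout the proof: (prod x_i - 1) / prod (x_i - 1)
    return (prod(x) - 1) / prod(v - 1 for v in x)

def closed_form_difference(x, m, k):
    # closed-form expression for f(x + k*1_m) - f(x) derived in the proof
    others = prod(v for i, v in enumerate(x) if i != m)
    return k * (1 - others) / ((x[m] + k - 1) * prod(v - 1 for v in x))

rng = np.random.default_rng(0)
for _ in range(1000):
    x = [int(v) for v in rng.integers(2, 20, size=4)]
    m, k = int(rng.integers(0, 4)), int(rng.integers(1, 10))
    y = list(x); y[m] += k
    assert np.isclose(f(y) - f(x), closed_form_difference(x, m, k))
    assert f(y) < f(x)   # Lemma 1: f strictly decreases when any entry grows
print("Lemma 1 identity and monotonicity verified on random samples.")
```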
_B) Proof of Lemma 2:_ With (11), \(f([P_{a},P_{d}])\times f([P_{b},P_{c}])-f([P_{a},P_{c}])\times f([P_{b},P_{d}])\) is given by
\[\frac{(P_{a}P_{d}-1)(P_{b}P_{c}-1)-(P_{a}P_{c}-1)(P_{b}P_{d}-1)}{( P_{a}-1)(P_{b}-1)(P_{c}-1)(P_{d}-1)} \tag{17}\] \[= \frac{P_{a}P_{c}+P_{b}P_{d}-P_{a}P_{d}-P_{b}P_{c}}{(P_{a}-1)(P_{b }-1)(P_{c}-1)(P_{d}-1)}\] \[= \frac{(P_{b}-P_{a})(P_{d}-P_{c})}{(P_{a}-1)(P_{b}-1)(P_{c}-1)(P_{ d}-1)}.\]
Similarly, \(f([P_{a},P_{c}])\times f([P_{b},P_{d}])-f([P_{a},P_{b}])\times f([P_{c},P_{d}])\) is given by
\[\frac{(P_{d}-P_{a})(P_{c}-P_{b})}{(P_{a}-1)(P_{b}-1)(P_{c}-1)(P_{d}-1)}. \tag{18}\]
When \(1<P_{a}\leq P_{b}\leq P_{c}\leq P_{d}\), (17) and (18) are both nonnegative. This completes the proof.
_C) Proof of Lemma 3:_ With (11), \(f([P_{a},P_{b},P_{c}])-f([P_{a},P_{d}])\times f([P_{b},P_{c}])\) is given by

\[\frac{P_{a}P_{b}P_{c}-1}{(P_{a}-1)(P_{b}-1)(P_{c}-1)}-\frac{(P_{a}P_{d}-1)(P_{b}P_{c}-1)}{(P_{a}-1)(P_{b}-1)(P_{c}-1)(P_{d}-1)}\] \[= \frac{P_{a}P_{d}+P_{b}P_{c}-P_{a}P_{b}P_{c}-P_{d}}{(P_{a}-1)(P_{b}-1)(P_{c}-1)(P_{d}-1)}\] \[= \frac{(P_{a}-1)(P_{d}-P_{b}P_{c})}{(P_{a}-1)(P_{b}-1)(P_{c}-1)(P_{d}-1)} \tag{19}\]
Fig. 3: Gosper’s Hack algorithm.
Fig. 2: Average out-of-band power fraction characteristics for OFDM preamble waveforms carrying various CA sequence families.
which is nonnegative when \(P_{b}P_{c}\leq P_{d}\) and \(1<P_{a}\leq P_{b}\leq P_{c}\leq P_{d}\). This completes the proof.
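The same kind of numerical spot check can be applied to the inequalities behind Lemmas 2 and 3; \(f\) is again taken in the form used in the proofs, and the prime quadruples below are arbitrary test inputs.

```python
from itertools import combinations
from math import prod

def f(x):
    # f from (11) as used in the proofs
    return (prod(x) - 1) / prod(v - 1 for v in x)

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
for pa, pb, pc, pd in combinations(primes, 4):   # ascending: pa <= pb <= pc <= pd
    # Lemma 2: ordering of the pairwise-product combinations, cf. (17) and (18)
    assert f([pa, pd]) * f([pb, pc]) >= f([pa, pc]) * f([pb, pd]) >= f([pa, pb]) * f([pc, pd])
    # Lemma 3: only claimed when P_b * P_c <= P_d, cf. (19)
    if pb * pc <= pd:
        assert f([pa, pb, pc]) >= f([pa, pd]) * f([pb, pc])
print("Lemmas 2 and 3 hold for all tested prime quadruples.")
```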
_D) Gosper's Hack Algorithm:_ Gosper's Hack algorithm in [37]-[38] can assist in finding all possible factor sets \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) which satisfy \(\prod_{m=0}^{\Omega(N)-\kappa-1}A_{m}=\prod_{m=0}^{\Omega(N)-1}P_{m}\) and are all characterized by an admissible pattern \(\mathbf{\omega}=[\omega_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}]\) with \(\omega_{m}=\Omega(A_{m})\). To find all possible factor sets \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\), we aim to (i) first find all possible partitions of \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) into \(\Omega(N)-\kappa\) prime factor subsets \(\{P_{m}^{(n)};m\in\mathcal{Z}_{\omega_{n}}\}\) for \(n\in\mathcal{Z}_{\Omega(N)-\kappa}\), where \(P_{0}^{(n)}\leq P_{1}^{(n)}\leq...\leq P_{\omega_{n}-1}^{(n)}\), with the aid of Gosper's Hack algorithm and (ii) then compose all possible factor sets by computing \(A_{n}=\prod_{m=0}^{\omega_{n}-1}P_{m}^{(n)}\) accordingly. To describe step (i), we define \(\widetilde{\mathbf{\omega}}=[\widetilde{\omega}_{n};n\in\mathcal{Z}_{\Omega(N)-\kappa}]\) with \(\widetilde{\omega}_{n}\triangleq\sum_{m=n}^{\Omega(N)-\kappa-1}\omega_{m}\) and \(\mathbf{b}^{(n)}\triangleq[b_{m}^{(n)};m\in\mathcal{Z}_{\widetilde{\omega}_{n}}]\) as a binary codeword with length \(\widetilde{\omega}_{n}\) and Hamming weight \(\omega_{n}\).3 For a given \(\mathbf{\omega}\), there are a total of \(\prod_{n\in\mathcal{Z}_{\Omega(N)-\kappa}}\binom{\widetilde{\omega}_{n}}{\omega_{n}}\) possible binary codeword sets for \(\{\mathbf{b}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) and they can be exclusively obtained by Gosper's Hack algorithm in Fig. 3 [38, Algorithm 3.1]. To obtain a partition of \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) for each given \(\{\mathbf{b}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\), a binary codeword set \(\{\widetilde{\mathbf{b}}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) is converted from \(\{\mathbf{b}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) by the proposed codeword conversion algorithm in Fig. 4, in such a way that each codeword \(\widetilde{\mathbf{b}}^{(n)}\triangleq[\widetilde{b}_{m}^{(n)};m\in\mathcal{Z}_{\Omega(N)}]\) contains \(\Omega(N)\) entries and the same Hamming weight as \(\mathbf{b}^{(n)}\). Notably, there are a total of \(\Omega(N)\) ones in \(\{\widetilde{\mathbf{b}}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\). From \(\{\widetilde{\mathbf{b}}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\), a partition of \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) into \(\Omega(N)-\kappa\) prime factor subsets \(\{P_{\widetilde{m}}^{(n)};\widetilde{m}\in\mathcal{Z}_{\omega_{n}}\}\) can be thus specified by
Footnote 3: Notably, \(\widetilde{\omega}_{0}=\Omega(N)\) and all Hamming weights \(\omega_{m}\) sum to \(\Omega(N)\).
\[P_{\varepsilon_{m}^{(n)}}^{(n)}=P_{m}\text{ if }\widetilde{b}_{m}^{(n)}=1 \tag{20}\]
for \(n\in\mathcal{Z}_{\Omega(N)-\kappa}\) and \(m\in\mathcal{Z}_{\Omega(N)}\), where \(\varepsilon_{m}^{(n)}=\sum_{m^{\prime}=0}^{m}\widetilde{b}_{m^{\prime}}^{(n)}-1\). Accordingly, all possible partitions of \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) and thereby all possible factor sets for \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) can be found in steps (i) and (ii) from the \(\prod_{n\in\mathcal{Z}_{\Omega(N)-\kappa}}\binom{\widetilde{\omega}_{n}}{\omega_{n}}\) possible codeword sets for \(\{\mathbf{b}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\).
Consider the example with \(\Omega(N)=6\), \(\kappa=3\), and a given pattern \(\mathbf{\omega}=[3,2,1]^{t}\). Such \(\mathbf{\omega}\) determines \(\widetilde{\mathbf{\omega}}=[6,3,1]^{t}\) uniquely and thus fixes the lengths \(6,3,1\) and Hamming weights \(3,2,1\) of the binary codeword set \(\{\mathbf{b}^{(0)},\mathbf{b}^{(1)},\mathbf{b}^{(2)}\}\) accordingly. From Gosper's Hack algorithm, there are \(\binom{6}{3}\binom{3}{2}\binom{1}{1}=60\) possible codeword sets meeting such length and weight distributions. For example, \(\mathbf{b}^{(0)}=[0,1,0,1,1,0]^{t}\), \(\mathbf{b}^{(1)}=[0,1,1]^{t}\) and \(\mathbf{b}^{(2)}=[1]\) form one possible codeword set. From the codeword conversion algorithm, the corresponding codeword set \(\{\widetilde{\mathbf{b}}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) is obtained as \(\widetilde{\mathbf{b}}^{(0)}=[0,1,0,1,1,0]^{t}\), \(\widetilde{\mathbf{b}}^{(1)}=[0,0,1,0,0,1]^{t}\) and \(\widetilde{\mathbf{b}}^{(2)}=[1,0,0,0,0,0]^{t}\). In turn, such \(\{\widetilde{\mathbf{b}}^{(n)};n\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) determines a partition of \(\{P_{m};m\in\mathcal{Z}_{\Omega(N)}\}\) into \(\{P_{m}^{(0)};m\in\mathcal{Z}_{\omega_{0}}\}=\{P_{1},P_{3},P_{4}\}\), \(\{P_{m}^{(1)};m\in\mathcal{Z}_{\omega_{1}}\}=\{P_{2},P_{5}\}\), and \(\{P_{m}^{(2)};m\in\mathcal{Z}_{\omega_{2}}\}=\{P_{0}\}\). The corresponding \(\{A_{m};m\in\mathcal{Z}_{\Omega(N)-\kappa}\}\) becomes \(\{P_{1}P_{3}P_{4},P_{2}P_{5},P_{0}\}\). All \(60\) possible partitions can be thus obtained from the \(60\) codeword sets \(\{\mathbf{b}^{(0)},\mathbf{b}^{(1)},\mathbf{b}^{(2)}\}\) exclusively obtained by Gosper's Hack algorithm.
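Since the Gosper's Hack and codeword-conversion procedures are only shown as figures (Figs. 3 and 4), the following sketch reimplements them in the form suggested by the worked example above; the conversion routine in particular is a reconstruction inferred from that example rather than a transcription of Fig. 4.

```python
from itertools import product as cartesian
from math import comb

def gosper_codewords(length, weight):
    """All binary codewords of the given length and Hamming weight via Gosper's Hack."""
    v = (1 << weight) - 1
    while v < (1 << length):
        yield [(v >> m) & 1 for m in range(length)]
        c = v & -v                       # rightmost set bit
        r = v + c
        v = (((r ^ v) >> 2) // c) | r    # next integer with the same popcount

def convert(codeword_set, total_len):
    """Expand each short codeword b^(n) to a length-total_len codeword btilde^(n);
    b^(n) indexes the positions not yet used by btilde^(0), ..., btilde^(n-1)."""
    free, expanded = list(range(total_len)), []
    for b in codeword_set:
        chosen = [free[j] for j, bit in enumerate(b) if bit]
        expanded.append([1 if pos in chosen else 0 for pos in range(total_len)])
        free = [pos for pos in free if pos not in chosen]
    return expanded

# Worked example from the text: Omega(N) = 6, kappa = 3, omega = [3, 2, 1].
omega = [3, 2, 1]
omega_tilde = [sum(omega[n:]) for n in range(len(omega))]   # [6, 3, 1]
b_set = ([0, 1, 0, 1, 1, 0], [0, 1, 1], [1])
bt = convert(b_set, 6)
print(["*".join(f"P{m}" for m in range(6) if w[m]) for w in bt])  # ['P1*P3*P4', 'P2*P5', 'P0']

# Total number of codeword sets (hence factor sets) for this pattern:
count = sum(1 for _ in cartesian(*[list(gosper_codewords(L, w))
                                   for L, w in zip(omega_tilde, omega)]))
print(count, "==", comb(6, 3) * comb(3, 2) * comb(1, 1))          # 60 == 60
```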
|
2310.03426 | G189.6+03.3: the first complete X-ray view provided by SRG/eROSITA | Context. G189.6+03.3 and IC443 are two examples of supernova remnants located
in a region rich of gas and dust, spatially close to the HII region S249. So
far, the actual shape of IC443 is believed to result from the past action of
multiple supernova explosions, while a third, unrelated explosion might have
originated G189.6+03.3.
Aims. While the IC443 nebula has been extensively observed in several bands,
there is in contrast an almost complete lack of observations of the nearby and
much weaker supernova remnant G189.6+03.3, discovered in 1994 with ROSAT. Given
the relatively large extent of this second remnant, the new dataset provided by
the X-ray telescope eROSITA onboard the Spectrum Roentgen Gamma (SRG) mission
gives a unique opportunity to characterize it more in depth.
Methods. We provide a full spectral characterization of G189.6+03.3 emission
for the first time, together with new images covering the whole remnant. Since
one of the leading hypothesis is that its emission partially overlaps with the
emission of IC443, we test this scenario by dividing the remnant into several
regions from which we extract the spectra.
Results. The new X-ray images provided by eROSITA show an elongated
structure. Together with the detection of supersolar abundances of O, Mg, Ne
and Si and subsolar abundance of Fe, these features could be an indication of a
faint supernova explosion. The X-ray spectra also highlight the presence of a
0.7 keV plasma component across all the regions, together with an almost
uniform column density.
Conclusions. The ubiquitous presence of the 0.7 keV plasma component is a
strong indication for G189.6+03.3 overlapping completely with IC443. We propose
the progenitors of G189.6+03.3 and IC443 could have been hosted in a binary or
multiple system, originating two explosions at different times in different
positions. | Francesco Camilloni, Werner Becker | 2023-10-05T10:05:27Z | http://arxiv.org/abs/2310.03426v1 | # G189.6+03.3: the first complete X-ray view provided by SRG/eROSITA
###### Abstract
Context: G189.6+03.3 and IC443 are two examples of supernova remnants located in a region rich in gas and dust, spatially close to the HII region S249. So far, the actual shape of IC443 is believed to result from the past action of multiple supernova explosions, while a third, unrelated explosion might have originated G189.6+03.3.
Aims: While the IC443 nebula has been extensively observed in several bands, there is in contrast an almost complete lack of observations of the nearby and much weaker supernova remnant G189.6+03.3, discovered in 1994 with _ROSAT_. Given the relatively large extent of this second remnant, the new dataset provided by the X-ray telescope eROSITA onboard the Spectrum Roentgen Gamma (_SRG_) mission gives a unique opportunity to characterize it more in depth.
Methods: We provide a full spectral characterization of the G189.6+03.3 emission for the first time, together with new images covering the whole remnant. Since one of the leading hypotheses is that its emission partially overlaps with the emission of IC443, we test this scenario by dividing the remnant into several regions from which we extract the spectra.
Results: The new X-ray images provided by eROSITA show an elongated structure. Together with the detection of supersolar abundances of O, Mg, Ne and Si and a subsolar abundance of Fe, these features could be an indication of a faint supernova explosion. The X-ray spectra also highlight the presence of a 0.7 keV plasma component across all the regions, together with an almost uniform column density.
Conclusions: The ubiquitous presence of the 0.7 keV plasma component is a strong indication that G189.6+03.3 overlaps completely with IC443. We propose that the progenitors of G189.6+03.3 and IC443 could have been hosted in a binary or multiple system, giving rise to two explosions at different times and in different positions.
## 1 Introduction
The majority of the known supernova remnants (SNRs) have been discovered and identified through their radio emission. That a supernova remnant has to be radio bright was therefore considered to be a necessary condition before a diffuse and extended source, e.g. detected at X-ray energies, was generally accepted as being an SNR. An example of such a source is G189.6+03.3, which has remained largely uncharted until today. It was clearly identified as an SNR by Asaoka & Aschenbach (1994) using data from the _ROSAT_ observatory obtained during the first ever imaging X-ray All-Sky Survey (RASS). However, the lack of a radio counterpart for quite some years led to an SNR candidate status which persists in the literature to this day, even though its radio emission was detected by Leahy (2004) some years after its discovery.
Subsequent studies of the much brighter supernova remnant G189.1+03.0 (IC443), located at the western edge of G189.6+03.3, often did not even mention the latter. The most likely explanation for this is its very low surface brightness: indeed, its shape is barely visible in the early images shown by Asaoka & Aschenbach (1994), which for many years were the only ones available of the remnant. The remnant has a radius of about 0.75 degrees. The distance estimated for G189.6+03.3 is 1.5 kpc, while its age estimate is \(3\times 10^{4}\) years (Asaoka & Aschenbach 1994).
Nevertheless, Asaoka & Aschenbach (1994) provided a comprehensive overview of the physical properties of G189.6+03.3, and a clear detection at radio wavelengths was provided by Leahy (2004) some years after its discovery. Although the sensitivity and spectral resolution of _ROSAT_ were modest when compared with more recent observatories, the authors managed to estimate the mean plasma temperature of G189.6+03.3 to be of the order of 0.14 keV, with a column density between \(0.6-1.3\times 10^{22}\) cm\({}^{-2}\) (90% confidence level). They were also the first to propose that an optical filamentary structure located north of G189.6+03.3 might trace the interaction of the remnant with the HII emitting region S249 (Fesen 1984; Braun & Strom 1986). In addition, from the spatial distribution of the column density in some selected regions, Asaoka & Aschenbach (1994) suggest that G189.6+03.3 overlaps with nearly half of IC443, interpreting the dark lane characterizing the images of IC443 as a result of the overlap of cold material, possibly associated with G189.6+03.3. This interpretation was suggested by IC443 being located in a very gas-rich region (Fesen & Kirshner 1980; Fesen 1984; Braun & Strom 1986), with its progenitor probably belonging to a group of massive stars called the Gem OB1 association (Humphreys 1978). While Denoyer (1978), Troja et al. (2006), and Troja et al. (2008) demonstrated how IC443 is interacting with an atomic cloud, other studies (Cornett et al. 1977; Burton et al.
1988; Claussen et al., 1997) have shown that a molecular cloud is also interacting with IC443. This confirms that the environment surrounding the remnant is very rich in gas. According to Braun & Strom (1986), strong winds and X-ray emission from these massive stars probably carved a system of cavities where at least one massive star exploded, forming IC443. Therefore, it is not unlikely that another massive star belonging to this association formed G189.6+03.3 well before IC443 was formed.
The only other X-ray observation of a part of G189.6+03.3 has been obtained with _Suzaku_ (Mitsuda et al., 2007) on a bright knot located near the north-eastern part of the remnant, almost on the opposite side of where the spectral analysis of Asaoka & Aschenbach (1994) was performed. The findings of Yamauchi et al. (2020) are particularly important because they find evidence of Radiative Recombination Continuum (RRC) emission around 2.5 keV. This demonstrated the presence of a recombining plasma, similar to what was discovered by Yamaguchi et al. (2009) for the nearby SNR IC443.
Therefore, the launch of the eROSITA telescope in July 2019 (Predehl et al., 2021) onboard the Spectrum Roentgen Gamma (_SRG_) mission (Sunyaev et al., 2021) provided a new opportunity to study extended sources like G189.6+03.3, thanks to the effectively unlimited field of view (FOV) offered by the telescope's all-sky survey mode. As of February 2022, eROSITA had completed about four and a third all-sky surveys. The instrument consists of seven independent telescope modules (TMs), providing a large effective area (see Predehl et al. 2021 for details). Moreover, the \(\sim 1^{\circ}\) field of view and a spectral resolution superior to that of the EPIC-PN camera onboard _XMM-Newton_, coupled with the all-sky survey mode, make this instrument unique for the study of SNRs and many other extended sources. We therefore employed the dataset provided by eROSITA to perform the first spatial and spectral analysis of G189.6+03.3 in its full extent.
The paper is organized as follows: in Section 2 we describe the reduction and processing of the data; in Sections 3 and 4 we present the results obtained by analyzing the images and the spectra of G189.6+03.3. In Section 4.2 we describe the diffuse emission observed around two nearby star clusters, M35 and NGC 2175. In Section 5 we discuss our results in light of current supernova explosion models in order to finally pinpoint the relation between G189.6+03.3 and IC443.
## 2 Data Reduction
The location of G189.6+03.3 was in the eROSITA field of view during all four sky surveys, i.e. between 2020 April 1st - 8th, October 3rd - 12th, 2021 March 29th - April 7th as well as September 29th - October 9th. These observations yield an unvignetted exposure time for G189.6+03.3 of 830s when including all seven telescope modules. In contrast, the deadtime and vignetting corrected observing time is computed to be slightly less than half of that, i.e. 390s only. To perform the data reduction, the creation of images and the extraction of energy spectra, we employed the eROSITA scientific analysis software (eSASS) version 211214 and the instrument calibration files included in this software (Brunner et al., 2022)1. Within the eSASS pipeline, X-ray data of the eRASS sky are divided into 4700 partly overlapping sky tiles of 3\(\fdg\)6 \(\times\) 3\(\fdg\)6 each. These are numbered using six digits, three for RA and three for Dec, representing the sky tile center position in degrees. Employing the command evtool, we started by merging the sky tiles numbered 092066, 094069 and 096066 from the eRASS-4 dataset (events from all four scans) to produce a merged single event file. The eSASS command radec2xy was applied to align the images on the center region of G189.6+03.3, i.e. to the position RA:06h18m30s DEC:+22d10m00s. For imaging we used the events from all seven telescope modules (TM 1-2-3-4-5-6-7) with the CCD detection PATTERN=15 and filtering on the good time intervals (absence of solar flaring). We then extracted the spectra, backgrounds, Redistribution Matrix Files (RMFs) and Ancillary Response Files (ARFs) by applying the command srctool to data from TM 1-2-3-4-6 only. A light leak was detected soon after the launch of eROSITA for the telescope modules TM 5 and 7 (Predehl et al., 2021). As this light leak introduces an extra uncertainty when it comes to the calibration of these detectors, data from TM 5 and 7 are considered to be less suitable for spectroscopic studies. Consequently, events from these telescope modules were not included in the spectral analysis.
Footnote 1: The software version is denoted according to the date when it was released, i.e. 14.12.2021.
## 3 Spatial Analysis
As a first step, we produced an RGB image from the eRASS:4 dataset reduced as described in Section 2. We applied the adaptive smoothing algorithm of Ebeling et al. (2006) to enhance the diffuse emission of the G189.6+03.3 and IC443 complex. Compared to the original image of Asaoka & Aschenbach (1994), a considerably higher number of details appears in the eROSITA RGB image shown in Figure 1. The shape of the remnant in Figure 1 is slightly asymmetric, appearing elongated in the South-East direction, but also in the West if we consider region 'D' as part of G189.6+03.3. In this case, the shape of the remnant becomes more symmetric, with an 'ear-like' feature (see for example Grichener & Soker 2017 and references therein for a recent discussion of ear-like structures in SNRs).
We notice two interesting features inside the shell-like structure of G189.6+03.3. One feature is a very dim diffuse emission almost at the center, which could originate from an unresolved central source. We devised a simple test to see whether this source is extended or not, comparing its radial profile with that of a nearby region extracted inside the remnant and that of a known point source (the nearby star V398 Gem): the result is shown in Figure 2, demonstrating that this is not a point source. Nevertheless, the profile seems very similar to the one extracted for another region inside the SNR, suggesting this might just be an overdensity. In Section 4 we describe the spectral analysis we carried out on this object.
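For reference, a radial-profile comparison of this kind can be sketched as follows: the routine azimuthally averages the counts in annuli around a chosen centre, and the synthetic count image, pixel coordinates and region names below are placeholders rather than the actual eROSITA products.

```python
import numpy as np

def radial_profile(image, center, r_max, n_bins=20):
    """Azimuthally averaged surface brightness in circular annuli around `center` (pixels)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.linspace(0.0, r_max, n_bins + 1)
    profile = np.array([image[(r >= lo) & (r < hi)].mean()
                        for lo, hi in zip(edges[:-1], edges[1:])])
    return 0.5 * (edges[:-1] + edges[1:]), profile

# Placeholder usage on a synthetic Poisson count image; in practice the three centres
# would be the candidate central source, a control region inside the remnant and V398 Gem.
rng = np.random.default_rng(1)
counts_map = rng.poisson(2.0, size=(400, 400)).astype(float)
for name, centre in [("candidate", (200, 200)),
                     ("control region", (250, 180)),
                     ("point source", (120, 300))]:
    radii, prof = radial_profile(counts_map, centre, r_max=40.0)
    print(name, np.round(prof[:3], 2))
```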
A second interesting feature is an unknown bright source located at RA:6:18:53.7, DEC:+21:45:49.8 (indicated by the orange circle in Figure 3). Inspecting each eRASS survey separately, we find that it appears only in eRASS 3. Therefore, we identified the transient as described in Salvato et al. (2022) using the CatWISE and Gaia EDR3 catalogs separately. The object is identified as Gaia EDR3 3376739988615000320 and the AllWISE source J061853.77+214551.5 (Medan et al., 2021), which is an M-type star at roughly 70 pc (Zhong et al., 2019), ruling out the possibility that it is associated with the remnant.
In the same region there are also two compact objects. The first object is the fast-moving neutron star CXOU J061705.3+222127, enshrouded by a pulsar wind nebula, which was extensively observed with _XMM-Newton_ and _Chandra_ (Keohane et al., 1997; Bocchino & Bykov, 2001; Olbert et al., 2001; Gaensler et al., 2006; Swartz et al., 2015; Greco et al., 2018). It is located at RA (J2000):06h17m05.18s and DEC (J2000):+22:21:27.6 (see Figure 3). If we assume RA (J2000):06h17m0s and DEC (J2000):+22:34:00 as the center of IC443, the fast-moving neutron
star is separated from it by 12.6 arcmin, while its displacement from the center of G189.6+03.3 is 37'. CXOU J061705.3+222127 has not yet been detected as a radio pulsar. Its location was covered in the FAST GPPS survey2 which observed the source location for 18000s with the PSR backend and a limiting sensitivity of 10 \(\mu\)Jy (Han et al. 2021). The second nearby compact object is the radio pulsar PSR B0611+22, located at RA(J2000):06h14m17s DEC(J2000):+22:29:56.848, 1.2\(^{\circ}\) from the center of G189.6+03.3 and 32' from the center of IC443. When Davies et al. (1972) discovered PSR B0611+22 in the radio band, G189.6+03.3 had not yet been discovered, so they associated the pulsar with IC443. However, today we firmly detect two compact objects and two supernova remnants spatially close to each other.
Footnote 2: [http://zmtt.bao.ac.cn/GPPS/GPPSnewPSR.html](http://zmtt.bao.ac.cn/GPPS/GPPSnewPSR.html)
Therefore, the immediate question is whether the two compact objects as well as IC443 and G189.6+03.3 are all at the
Figure 1: False color image (Red: 0.2-0.7 keV, Green: 0.7-1.1 keV, Blue: 1.1-10 keV) of the supernova remnant IC443 and supernova remnant G189.6+03.3 obtained with eRASS:4 dataset. For each of the three images, we applied the adaptive smoothing algorithm of Ebeling et al. (2006) with a minimum significance of the signal S/N=3 and 5 as maximum. The minimum scale of smoothing is pixel size, while the maximum is 8 pixels. The scale of the colors has been particularly stretched to highlight the diffuse emission. In orange we display the extraction regions employed for the spectral analysis. The red circle is not used for spectral analysis purposes and it just indicative for the suggested extension of G189.6+03.3 (the red cross marks the center of the circle at RA:06h19m40.8s, DEC:+21:58:03). The magenta cross indicates the center of IC443 at RA:06h17m0s and DEC:+22:34:00.
same distance. If this turns out to be the case, we would probably be witnessing the stellar endpoint of a binary system formed by two massive stars. To test this scenario, we queried the Australia Telescope National Facility Pulsar Catalogue3(Manchester et al., 2005) to obtain the dispersion measure (DM) value, age, proper motion in RA (PMRA) and DEC (PMDEC) for PSR B0611+22. The dispersion measure can be correlated to the column density measured in X-rays. The dispersion measure for B0611+22 is 96.91 cm\({}^{-3}\) pc, and it corresponds to a distance of 1.74 kpc or 3.5 kpc, depending on the model assumed for the dispersion measure (see Yao et al., 2017, for a recent discussion). Assuming the relation between dispersion measure and column density given by He et al. (2013)
Footnote 3: [https://www.atnf.csiro.au/research/pulsar/psrcat/](https://www.atnf.csiro.au/research/pulsar/psrcat/)
\[\mathrm{N_{H}}(10^{20}\mathrm{cm}^{-2})=0.30^{+0.13}_{-0.09}\mathrm{\;DM\;(pc \;cm^{-3})} \tag{1}\]
the equivalent value for the NH measured in X-rays is \(2.9^{+1.3}_{-0.9}\cdot 10^{21}\) cm\({}^{-2}\). In the next Section, we will compare this value to the column density derived from the spectral fits of different regions of G189.6+03.3.
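For reference, Eq. (1) can be evaluated directly for the catalogue dispersion measure quoted above; the short sketch below simply propagates the asymmetric uncertainty of the scaling coefficient.

```python
# Column density implied by Eq. (1): N_H(1e20 cm^-2) = 0.30(+0.13/-0.09) * DM(pc cm^-3)
dm = 96.91                          # dispersion measure of PSR B0611+22 (pc cm^-3)
coef, coef_hi, coef_lo = 0.30, 0.13, 0.09
nh = coef * dm * 1e20               # cm^-2
print(f"N_H = {nh:.2e} (+{coef_hi * dm * 1e20:.1e} / -{coef_lo * dm * 1e20:.1e}) cm^-2")
# -> about 2.9e21 (+1.3e21 / -0.9e21) cm^-2, the value quoted in the text
```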
In order to better visualize the amount of material in the region, we overplotted the X-ray contours of our observation on the _WISE_ archival data. The result is shown in Figure 4: the emission of G189.6+03.3 partially overlaps with the nearby S249 HII region, which is bright in this image. We recall that Fesen (1984) showed that this region is interacting with IC443.
Observing Figure 4, we expected different absorption values across G189.6+03.3 and IC443. Therefore, we looked at the optical extinction data provided by Lallement et al. (2019) and available online5. This database uses the parallax-derived distances of _Gaia_ and the optical extinction measured with the same instrument to estimate the distance of the dust. Figure 5 shows the extinction data from three different spots, one from the central region of G189.6+03.3, one in the direction of IC443 and one in the direction of the HII region S249 (indicated in Figure 4). The optical extinction curves are quite similar to each other, indicating that the three regions are absorbed by the same amount of dust and hence are likely located at a similar distance from us. It is however unclear whether the arc-like structure visible in the optical in the North is material compressed by the shock wave originating from G189.6+03.3 (as proposed by Asaoka & Aschenbach (1994)) or whether it is still part of IC443 (Fesen, 1984). Even though optical extinction can be related to the X-ray column density (see for example Predehl & Schmitt, 1995), we note that deviations from this relation can easily be justified by dust being destroyed by the blast wave during the supernova explosion (Micelotta et al., 2016; Zhu et al., 2019). In addition, uncertainties in the _Gaia_ optical extinction measurements increase above 2 kpc. Therefore, we present the profiles in Figure 5 only for a qualitative comparison.
Footnote 5: Identified with the source Gaia EDR3 3376739988615000320 and AllWISE J061853.77+214551.5
Footnote 6: [https://astro.acri-st.fr/gaia_dev/](https://astro.acri-st.fr/gaia_dev/)
## 4 Spectral Analysis
The analysis of the X-ray spectra has been carried out with PyXSPEC, the Python interface of XSPEC (Arnaud, 1996), and the errors are expressed at the \(1\sigma\) confidence level. We employed the Cash statistic (Cash, 1979), using the version implemented in XSPEC. The choice of using the ratio CSTAT/dof (dof = degrees of freedom) to estimate the goodness of the fit is motivated by the fact that, for a sufficient number of counts, the C statistic approaches the \(\chi^{2}\) statistic (Kaastra, 2017). In order to estimate the errors in a robust way, we ran a Markov chain Monte Carlo (MCMC) based code using the Python library emcee (Foreman-Mackey et al., 2013), running it for 40000 steps. We considered only the last 2000 steps of the run in order to use the largest possible number of converged chains. We initialized our walkers with a Gaussian distribution centered on the best-fit parameters, employing logarithmically uniform priors on the model components which were left free to vary during the run. One of the main advantages of this approach over the traditional fitting technique is the capability to probe the parameter space more thoroughly (see for example van Dyk et al. 2001 for a description of the advantages of using an MCMC-based approach in X-ray astronomy). We extracted the spectra from the regions shown in Figure 1. Before proceeding with the spectral analysis, we removed the point sources in both background and source regions.
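The MCMC procedure just described can be sketched with a toy likelihood as follows; the two-parameter Gaussian stand-in replaces the actual PyXSPEC C-statistic evaluation, and the best-fit values are arbitrary, so the block only illustrates the log-uniform priors, the Gaussian-ball initialization and the retention of the last 2000 steps.

```python
import numpy as np
import emcee

best_fit = np.array([0.7, 1e-2])   # hypothetical best-fit values (e.g. kT and a normalisation)

def log_prob(log_theta):
    # log-uniform priors: sample in log10(theta); the real run would return -0.5 * Fit.statistic
    theta = 10.0 ** log_theta
    return -0.5 * np.sum(((theta - best_fit) / (0.1 * best_fit)) ** 2)

ndim, nwalkers, nsteps = 2, 32, 40000
p0 = np.log10(best_fit) + 1e-3 * np.random.randn(nwalkers, ndim)   # Gaussian ball around best fit
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=False)

chain = 10.0 ** sampler.get_chain(discard=nsteps - 2000, flat=True)  # keep the last 2000 steps
lo, med, hi = np.percentile(chain, [15.87, 50.0, 84.13], axis=0)     # 1-sigma credible intervals
print(med, med - lo, hi - med)
```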
We started modeling the background spectrum following the approach of Okon et al. (2021) and references therein. The background extraction region is indicated in Figure 1. The model consists of one power-law component representing the Cosmic X-ray Background with a fixed slope of 1.4 and three collisional equilibrium thermal model (APEC, Smith et al., 2001) components, each one with fixed temperature and solar abundances, but free normalization. The Local Hot Bubble is modeled with a temperature of 0.105 keV, while the Galactic Halo is described by the other two APEC models with \(kT=0.658\) keV and \(kT=1.22\) keV, respectively. We absorbed the Galactic Halo and Cosmic X-ray Background components with the TBabs model (Wilms et al., 2000). Given this modeling, we determined the best-fitting background model by first running a fit on the background region. We then proceeded to fit the best-fitting background model simultaneously with the source model to the spectrum of each of the regions shown in orange in Figure 1. Since from Figure 1 the background is not significantly variable across the regions, we fixed the shape of the background model, fitting only a global normalization parameter simultaneously with the source. As an additional test, we checked our results employing another background region extracted a few degrees South of G189.6+03.3, obtaining the same results. The spectra are shown in Figure 6.
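A minimal PyXSPEC sketch of this sky-background composition is given below, assuming the standard attribute access for components and parameters (repeated components being addressed as apec, apec_3, apec_4) and a placeholder file name; the instrumental (particle) background component is omitted for brevity.

```python
from xspec import AllData, Fit, Model

AllData("background_region.pha")     # placeholder spectrum extracted with srctool
Fit.statMethod = "cstat"             # Cash statistic, as used throughout the analysis

# Unabsorbed Local Hot Bubble plus absorbed Galactic Halo (two APECs) and CXB power law
bkg = Model("apec + tbabs*(apec + apec + powerlaw)")

bkg.apec.kT = 0.105                  # Local Hot Bubble
bkg.apec.kT.frozen = True
bkg.apec_3.kT = 0.658                # Galactic Halo, cooler component
bkg.apec_3.kT.frozen = True
bkg.apec_4.kT = 1.22                 # Galactic Halo, hotter component
bkg.apec_4.kT.frozen = True
bkg.powerlaw.PhoIndex = 1.4          # Cosmic X-ray Background slope
bkg.powerlaw.PhoIndex.frozen = True
# temperatures and abundances are fixed, normalisations are left free, as described above

Fit.perform()
print(Fit.statistic, Fit.dof)
```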
Figure 2: Radial profiles extracted respectively from the light blue circle region in Figure 3 (Red), a region of the same size located inside G189.6+03.3 (Green) and another one centered on the star V398 Gem (Blue).
Following the labeling in Figure 1, we started fitting region 'A' with a constant-temperature plane-parallel shock thermal model (VPSHOCK, Borkowski et al. 2001), absorbed with the model TBabs (Wilms et al. 2000). This region should contain only the contribution from G189.6+03.3. The goodness of the fit was not acceptable, so we decided to add another thermal shock component (VPSHOCK), also applying a velocity shift (VASHIFT), to the initial model. We were motivated by the fact that from Figure 1 the inner region 'A' looks surrounded by a brighter emission, which we analyzed separately as region 'B'. With the two-component model, we assume one component represents the inner emission of the remnant and the other one the emission from an expanding shell. In the expanding component (the one multiplied by the velocity shift) we left free only the abundances of O, Mg, Ne and Fe, assuming that these elements are those mainly enriched, given also that they are associated with the brightest lines. In the same component we froze the abundance values of C, N, Si and S to 0 to avoid excessive degeneracy with the parameters of the other component. In the component at rest we instead left C, N, O, Mg, Ne, Fe, Si and S free to vary: this model clearly improves the fit statistic (see Table 1 for the goodness of the fits). The abundances not mentioned above are left frozen at their solar values.
Figure 4: Overplot of the X-ray contours obtained with eROSITA (light green) with the un_WISE_ color archival data (W2, 4.6 \(\mu\)m; W1, 3.4\(\mu\)m).
Figure 3: Close up of Figure 1 zoomed on IC443 and G189.6+03.3. The image has been less stretched to show better the details within the remnant. The blue circle indicates the position of a putative central source in G189.6+03.3, in orange the transient\({}^{\rm{4}}\). The green arrow indicates the proper motion direction of the pulsar B0611+22 while the light green the one of the fast moving neutron star CXOU J061705.3+22212. Both arrows are obtained multiplying the proper motion value for the age of each one of the compact objects. For displaying purpose, the length of the arrows has been magnified 10 times. The thin dotted yellow line highlights the elongated structure visible in eROSITA, while the orange thick line represents the jet direction suggested by Greco et al. (2018).
As mentioned in Section 1, starting from Yamaguchi et al. (2009) recombination was clearly detected in the spectra of IC443 by several studies, while it was found as a relevant process also in G189.6+03.3 (Yamauchi et al., 2020). Therefore, we also tested a recombining plasma model (VRNEI, Foster et al., 2017) as a second additive component. The abundances were set as with the double VPSHOCK model and the initial cooling temperature T\({}_{0}\) is set frozen to 5 keV. We decided to make the same assumption done by Okon et al. (2021): in this way, the O-S elements are fully ionized in the initial condition. Looking at the spectra, no extra residuals appear when a two component model is applied and the difference of the statistics (Table 1,2) between two VPSHOCK models and a single VPSHOCK plus a recombination component (VRNEI) is minimal. The ionization timescale parameter in both models is close to \(10^{12}\) cm\({}^{-3}\), indicating the gas is close to the ionization equilibrium. However, the column density is significantly higher in the double VPSHOCK model: we tried to investigate this feature, freezing some parameters in the velocity shifted VPSHOCK component to the same values of the same parameters in the VPSHOCK+VRNEI model. We found out that the double VPSHOCK retrieves a column density (\(0.50^{+0.10}_{-0.09}\cdot 10^{22}\) cm\({}^{-2}\)) compatible with \(0.41^{+0.10}_{-0.11}\cdot 10^{22}\) cm\({}^{-2}\) obtained with VPSHOCK+VRNEI if the ionization timescale and velocity shift are set equal to the values found in the VPSHOCK of the recombination model. We tried to add also a black body radiation model to model the faint emission from the inner part (10 km as diameter, distance of 1500 pc, 0.1 keV temperature) but the fit did not improve significantly.
In region 'B' we analyze the bright external part of G189.6+03.3 emission, possibly associated to a shell. We initially tested a single VPSHOCK model, but the fit retrieved strong residuals. The spectra are again better described by a two component shocked plasma. Either we use a VPSHOCK or VRNEI as a second component, the column density is always \(\sim 4.0\cdot 10^{21}\) cm\({}^{-2}\) with similar statistic values. If we use VPSHOCK as a second component on top of the first VPSHOCK, we find a first plasma component with temperature \(kT=2.3^{+2.0}_{-0.1}\) keV, with the second component showing \(kT=0.74^{+0.07}_{-0.08}\) keV. From Table 1, the double VPSHOCK model provides a CSTAT/dof ratio close to 1, making it the best fit comparing to all tests we carried out. Therefore, we find an additional evidence for our initial hypothesis: the hot 2.3 keV expanding component can be associated to an expanding shell, while the inner emission is cooler with a temperature 0.7 keV. In Section 3 we observed how the north part of G189.6+03.3 is coincident with a dust structure visible in _WISE_ data: it seems therefore reasonable to argue the enhancement in the temperature of the plasma might be given by the compression of the shock against a denser medium. This would eventually imply that G189.6+03.3 and the HII region S249 are at the same distance. We also observe that the ionization timescale in the 0.7 keV component for the double VPSHOCK model is \(4\cdot 10^{11}\) cm s\({}^{-3}\) while in VPSHOCK plus VRNEI is \(3\cdot 10^{12}\) cm s\({}^{-3}\): since the statistics improves for the double VPSHOCK, we suggest the plasma is hot and recently shocked in this region. The higher value of \(\tau\) in region 'B' also in the 0.7 keV component comparing to the other regions (Table 1,2) is probably a consequence of the recent interaction of the gas with the nearby HII region.
We then moved to analyze the emission of regions 'C' and 'D' to understand whether the plasma emission of G189.6+03.3 overlaps with IC443, as initially proposed by Asaoka & Aschenbach (1994). Indeed, these two regions are those covering IC443. For region 'C' and 'D' we find as in region 'B' a double component model fits best the data. As visible in Table 1, if VPSHOCK is employed as second component, we detect a temperature close to \(kT\sim 0.7\) keV in both regions. Conversely, in region 'C' employing VRNEI as second component the fits retrieve a considerable improved statistics, but do not show hints of the 0.7 keV component (Table 2). Moreover, also the column density is not consistent with \(4.0\cdot 10^{21}\) cm\({}^{-2}\), a value found in several other regions. Region 'C' also displays a considerably high expansion velocity which is not detected in region 'D', which was supposed to belong to the same structure in the current literature. However, a justification for this difference is that the shock has been slowed down by the interaction with a molecular cloud in region D (Cornett et al., 1977; Ustamuijc et al., 2021). Observing Figure 1, the two regions have similar surface brightness, which is quite high especially in region 'C': therefore is possible that the dim 0.7 keV component is not resolved in the spectrum of this region. We tested this scenario, adding an APEC component to the models employed before. The idea behind this choice is that the material is contained by the molecular cloud, possibly reheated by the reverse shock generated by the shock wave impacting on the molecular cloud itself. The results are shown in Table 3.
With the addition of the APEC model to the VRNEI+VPSHOCK model, we obtain an improved statistic (\(\Delta\) CSTAT=23) and the 0.7 keV component appears again. Considering that the double VPSHOCK description provides a considerably worse statistic, our conclusion is that the relaxed material (APEC) detected here belongs to G189.6+03.3, while the VPSHOCK+VRNEI component is the shocked emission coming from IC443. This confirms the previous results that found recombination and overionized material from this region (Yamaguchi et al., 2009; Greco et al., 2018).
Looking at region 'D', we find the 0.7 keV component with ionization timescale factor of the plasma \(\tau=400^{+300}_{-200}\cdot 10^{10}\) cm s\({}^{-3}\), again giving strongly evidence for a plasma close to the ionization equilibrium with physical properties very similar to those found in the other regions. The very low speed detected is an additional proof that this relaxed gas was probably slowed down in the past due to the interaction with a nearby molecular cloud.
We recall how \(\tau=10^{13}\) s cm\({}^{-3}\) is assumed as the upper limit of the ionization timescale parameter, implying collisional equi
Figure 5: Optical extinction in three different directions. _Red_: IC443. _Green_: G189.6+03.3. _Blue_: HII region S249. The dataset has been obtained from the Gaia2MASS extinction map available at [https://astro.acri-st.fr/gaia_dev/](https://astro.acri-st.fr/gaia_dev/). We also report the distance of \(1.5\pm 0.2\) kpc adopted in the paper (Fesen, 1984; Welsh & Sallmen, 2003).
librium in the VPSHOCK model (Arnaud 1996; Borkowski et al. 1994, 2001).
In conclusion, in all the regions we find a constant-temperature plasma component with \(kT=0.7\) keV, suggesting that the emission of G189.6+03.3 covers all the regions analyzed, including those whose emission is associated with IC443 (region C and region D).
### An unresolved source at the center of G189.6+03.3?
Employing the best fit obtained above and despite the very few counts available, we modeled the emission of the diffuse emission close to the center of G189.6+03.3 described in Section 3. The spectra are background subtracted, except for the instrumental component that we continued to model separately to describe the remaining high energy tail. We first tested an unabsorbed POWERLAW with photon index fixed to 2 and free normalization to derive a flux in 0.2-10 keV band from the light blue circle indicated in Figure 3. We left free to vary also the column density, obtaining 0.33\(\cdot 10^{22}\) cm\({}^{-2}\). The unabsorbed background subtracted flux is \(9.38^{+1.62}_{-1.18}\cdot 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\). For the same region, we also tested the best fit VPSHOCK+VRNEI model of region A (as discussed in Section 4 this has been proven to be more reliable than double VPSHOCK), obtaining an unabsorbed background subtracted flux of \(5.19^{+1.29}_{-0.57}\cdot 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\). The spectrum is shown in Figure 7. Despite we find a similar value of the CSTAT/DOF ratio (0.59) for both models, we find a considerable higher flux with the powerlaw model at 90% confidence level. Given its extended nature and the position almost at the center of G189.6+03.3, the object could be a pulsar wind nebula (PWN). Therefore, deeper observations with _Chandra_ or _XMM-Newton_ are needed to assess the nature of the object, also considering that no radio detection is associated to this position.
### Two star clusters in the neighborhood
Thanks to the almost unlimited field of view of eROSITA provided by the all-sky scan survey mode, we were able to image two open clusters in the same sky region of IC443 and G189.6+03.3 (Figure 1). These star clusters are respectively NGC 2175 and NGC 2168 (also known as M35) and are indicated in Figure 1 and in Figure 4.
Looking at the emission of M35 above 0.7keV, it is in accordance with the presence of many hot and blue massive stars in the cluster (O,B spectral type). Since many of these stars are massive, it is likely some could have already undergone a supernova explosion. Strong winds and supernova explosions from
Figure 6: Spectra from region A, B, C and D. The solid yellow line indicates the total model, the red dotted line represents the first additive component (VPSHOCK), the red dashed line the second additive component (VPSHOCK). The dashdot lines show the different background components: the horizontal magenta line is the most significant component of the particle background, the others represent the sky background made by the Local Hot Bubble (dark green), Cosmic X-ray Background (violet), Cosmic Halo 1 (light violet) and Cosmic Halo 2 (light green). The spectra have been rebinned for displaying purpose and the parameters of the model are the median values of the last 2000 steps of the en\(\rm\overline{e}\)ncee run.
massive stars could have easily wiped out the dust, which is instead present around NGC 2175. A similar idea was proposed for the Galactic starburst cluster, Westerlund 1 (Muno et al. 2006; Clark et al. 2008; Negueruela et al. 2010). We extracted the spectrum of the diffuse emission around M35, masking the point sources, which seems quite well (CSTAT/DOF=1.07) described by a non equilibrium shocked plasma (VPSHOCK model in XSPEC) with temperature kT=\(0.15\pm 0.1\) keV and N\({}_{H}\)=\(0.89^{+0.09}_{-0.10}\) cm\({}^{-2}\). In the fit, we left all the abundances free to vary, retrieving most of them as subsolar, except for Ne (Ne/Ne\({}_{\odot}=1.3^{+1.5}_{-0.7}\)). The low value for the ionization timescale (\(\tau=0.17^{+0.16}_{-0.08}\) cm\({}^{-3}\) s) points towards a plasma out of equilibrium condition (see Section 4 for a discussion about this parameter). This is most likely due to collisionally shocked plasma, essentially due to the strong winds coming from massive stars. However, it should be also considered the fact that is plasma is constantly illuminated by the same stars, so photoionisation effects should be important. Recently, Harer et al. (2023) discussed the importance of shocks in Westerlund 1 and in general for young star clusters in the context of TeV emission associated to Galactic cosmic ray acceleration (see also Vieu & Reville 2023). The presence of shocked gas in our spectra of M35 is in accordance with the turbulent environment needed to accelerate particles to TeV energies.
Moving to NGC 2175, a very faint diffuse emission can be observed in X-rays, which is found spatially coincident with a strongly emitting HII region visible in the infrared (Figure 4). We tried to extract the spectrum from the region indicated in Figure 1, but we found it to be indistinguishable from the background. Additional observations are needed to provide higher statistics to characterize the gas.
## 5 Discussion
In the previous Section we found indications for the existence of a ubiquitous 0.7 keV plasma component which is present in
\begin{table}
\begin{tabular}{c c c c c} Model & tbabs*(vashift*vpshock+vpshock) & & & \\ \hline Region & A & B & C & D \\ \hline factor & \(0.111^{+0.003}_{-0.002}\) & \(0.155^{+0.003}_{-0.003}\) & \(0.095^{+0.004}_{-0.003}\) & \(0.079^{+0.003}_{-0.002}\) \\ N\_H (\(10^{22}\) cm\({}^{-2}\)) & \(0.82^{+0.14}_{-0.15}\) & \(0.37^{+0.06}_{-0.06}\) & \(0.613^{+0.014}_{-0.020}\) & \(0.44^{+0.03}_{-0.03}\) \\ Velocity (km/s) & \(0^{+500}_{-0}\) & \(4400^{+1900}_{-2100}\) & \(2600^{+500}_{-500}\) & \(0^{+2800}_{-0}\) \\ kT (keV) & \(0.174^{+0.014}_{-0.012}\) & \(2.3^{+2.3}_{-1.0}\) & \(0.30^{+0.03}_{-0.03}\) & \(0.43^{+0.08}_{-0.06}\) \\ O/O\({}_{\odot}\) & \(3^{+2}_{-2}\) & \(0.6^{+0.4}_{-0.3}\) & \(0.049^{+0.012}_{-0.007}\) & \(0.08^{+0.03}_{-0.02}\) \\ Ne/Ne\({}_{\odot}\) & \(3.3^{+1.5}_{-1.5}\) & \(0.17^{+0.26}_{-0.14}\) & \(0.18^{+0.03}_{-0.03}\) & \(0.16^{+0.11}_{-0.10}\) \\ Mg/Mg\({}_{\odot}\) & \(2^{+5}_{-2}\) & \(0.04^{+0.06}_{-0.02}\) & \(0.12^{+0.14}_{-0.09}\) & \(0.03^{+0.07}_{-0.01}\) \\ Fe/Fe\({}_{\odot}\) & \(5^{+4}_{-3}\) & \(1^{+4}_{-1}\) & \(0.9^{+0.3}_{-0.3}\) & \(4^{+3}_{-2}\) \\ \(\tau_{u}\) (\(10^{10}\) cm\({}^{-3}\) s) & \(120^{+270}_{-80}\) & \(0.25^{+0.24}_{-0.09}\) & \(2.6^{+1.4}_{-0.6}\) & \(0.7^{+0.3}_{-0.2}\) \\ Normalization & \(0.12^{+0.08}_{-0.05}\) & \(0.014^{+0.007}_{-0.004}\) & \(8.0^{+1.4}_{-2.1}\) & \(0.6^{+0.2}_{-0.2}\) \\ kT (keV) & \(0.85^{+0.17}_{-0.17}\) & \(0.74^{+0.07}_{-0.08}\) & \(0.775^{+0.014}_{-0.016}\) & \(0.74^{+0.02}_{-0.03}\) \\ C/C\({}_{\odot}\) & \(1^{+6}_{-1}\) & \(0.03^{+0.04}_{-0.02}\) & \(0.06^{+0.24}_{-0.04}\) & \(0.2^{+3.0}_{-0.2}\) \\ N/N\({}_{\odot}\) & \(0.017^{+0.018}_{-0.006}\) & \(0.04^{+0.12}_{-0.03}\) & \(1^{+4}_{-1}\) & \(0.05^{+0.22}_{-0.03}\) \\ O/O\({}_{\odot}\) & \(3^{+5}_{-3}\) & \(2.1^{+1.9}_{-1.3}\) & \(7^{+2}_{-2}\) & \(5^{+3}_{-2}\) \\ Ne/Ne\({}_{\odot}\) & \(6^{+3}_{-3}\) & \(4^{+3}_{-3}\) & \(3.7^{+0.8}_{-1.1}\) & \(2.8^{+1.3}_{-0.8}\) \\ Mg/Mg\({}_{\odot}\) & \(3.7^{+2.3}_{-1.6}\) & \(2.7^{+1.8}_{-1.1}\) & \(1.8^{+0.3}_{-0.4}\) & \(1.9^{+1.0}_{-0.5}\) \\ Si/Si\({}_{\odot}\) & \(1.5^{+1.3}_{-1.0}\) & \(2.4^{+1.9}_{-1.0}\) & \(1.4^{+0.3}_{-0.4}\) & \(1.0^{+0.4}_{-0.3}\) \\ S/S\({}_{\odot}\) & \(1^{+4}_{-1}\) & \(0.04^{+0.06}_{-0.02}\) & \(1.5^{+0.4}_{-0.4}\) & \(0.04^{+0.09}_{-0.03}\) \\ Fe/Fe\({}_{\odot}\) & \(0.4^{+1.0}_{-0.4}\) & \(1.2^{+0.9}_{-0.6}\) & \(0.41^{+0.09}_{-0.09}\) & \(0.48^{+0.22}_{-0.13}\) \\ \(\tau_{u}\) (\(10^{10}\) cm\({}^{-3}\) s) & \(110^{+210}_{-70}\) & \(40^{+27}_{-11}\) & \(600^{+300}_{-200}\) & \(400^{+300}_{-200}\) \\ Normalization & \(0.011^{+0.006}_{-0.004}\) & \(0.008^{+0.006}_{-0.003}\) & \(1.0^{+0.3}_{-0.2}\) & \(0.22^{+0.07}_{-0.07}\) \\ \hline Background model & & & & \\ \hline factor & \(0.47^{+0.05}_{-0.04}\) & \(0.84^{+0.09}_{-0.10}\) & \(0.28^{+0.09}_{-0.08}\) & \(0.22^{+0.07}_{-0.06}\) \\ \hline Statistic & 927/865 & 868/865 & 1110/865 & 1077/865 \\ \end{tabular} Normalization is expressed as \(10^{-14}\frac{\int n_{e}n_{H}dV}{4\pi D^{2}}\) where n\({}_{e}\) is the electron density of the plasma (cm\({}^{-3}\)), n\({}_{H}\) is the hydrogen density (cm\({}^{-3}\)) and D (cm) is the distance of the source
\end{table}
Table 1: Results from the spectral fits of the different regions using a double VPSHOCK model.
all the regions analyzed. We find the column density is close to \(4.0\cdot 10^{21}\) cm\({}^{-2}\) in regions 'A', 'B' and 'D' (only for the double VPSHOCK scenario). Some differences can arise between the two models in each region, resulting in slightly different absorption values, but the overall picture is that of a uniform absorber covering all the regions. For a detailed discussion of each region, see Section 4.
From these findings, we argue that the ubiquitous 0.7 keV plasma emission is associated with G189.6+03.3, which turns out to be a foreground object placed in front of IC443, as first proposed by Asaoka & Aschenbach (1994). In this context, the higher column density measured in region 'C' when using a recombination model, together with the high expansion velocity, might indicate that IC443 is a background object emerging from below G189.6+03.3. We clarified this aspect by adding another thermal component to the original model and again recovering the 0.7 keV component. This shows that the component was not captured by the previous model, probably because it is too dim, but is nevertheless present. This supports the idea that G189.6+03.3 lies in front of IC443.
Ustamujic et al. (2021) describe how the actual shape of IC443 might be the result of the interaction with two molecular clouds. Specifically, Figure 3 of Ustamujic et al. (2021) nicely fits the scenario of IC443 emerging from below an absorber. Moreover, from recent optical/UV data, Ritchey et al. (2020) propose that the star HD 245755 may be absorbed by foreground material, possibly associated with G189.6+03.3. However, while Asaoka & Aschenbach (1994) initially proposed that only a part of G189.6+03.3 overlapped with IC443, our dataset suggests this may not be true, especially since we detect an almost uniform column density. Our spectral analysis suggests instead that G189.6+03.3 is present in all the regions analyzed, including those associated with IC443 (C, D). In Figure 1, we draw a red circle surrounding G189.6+03.3 to illustrate this hypothesis (its center is shown as a red cross in Figure 3). Moreover, Greco et al. (2018) recently showed that part of our region 'D' is made of shocked
\begin{table}
\begin{tabular}{c c c c c} \hline Model & tbabs*(vashift*vpshock+vrnei) & & & \\ \hline Region & A & B & C & D \\ \hline factor & \(0.111^{+0.003}_{-0.003}\) & \(0.156^{+0.003}_{-0.002}\) & \(0.091^{+0.004}_{-0.003}\) & \(0.078^{+0.004}_{-0.003}\) \\ N\_H (10\({}^{22}\) cm\({}^{-2}\)) & \(0.41^{+0.10}_{-0.11}\) & \(0.38^{+0.05}_{-0.07}\) & \(0.71^{+0.03}_{-0.03}\) & \(0.32^{+0.02}_{-0.02}\) \\ Velocity (km/s) & \(0^{+160}_{-0}\) & \(800^{+1600}_{-800}\) & \(2300^{+6000}_{-800}\) & \(0^{+500}_{-0}\) \\ kT (keV) & \(1.6^{+1.7}_{-0.7}\) & \(0.8^{+0.5}_{-0.2}\) & \(0.25^{+0.03}_{-0.03}\) & \(0.77^{+0.03}_{-0.02}\) \\ O/O\({}_{\odot}\) & \(0.7^{+1.1}_{-0.4}\) & \(1.2^{+0.5}_{-0.4}\) & \(0.24^{+0.11}_{-0.08}\) & \(2.9^{+1.6}_{-1.1}\) \\ Ne/Ne\({}_{\odot}\) & \(2.0^{+1.6}_{-0.8}\) & \(1.5^{+0.9}_{-0.5}\) & \(0.8^{+0.3}_{-0.2}\) & \(1.7^{+1.1}_{-0.5}\) \\ Mg/Mg\({}_{\odot}\) & \(5^{+3}_{-4}\) & \(1.6^{+1.2}_{-0.8}\) & \(0.1^{+0.4}_{-0.1}\) & \(2.4^{+0.9}_{-0.8}\) \\ Fe/Fe\({}_{\odot}\) & \(4^{+3}_{-4}\) & \(0.5^{+0.5}_{-0.4}\) & \(7^{+2}_{-3}\) & \(1.6^{+0.5}_{-0.6}\) \\ \(\tau_{u}\) (10\({}^{10}\) cm\({}^{-3}\) s) & \(0.41^{+0.16}_{-0.11}\) & \(2.6^{+2.3}_{-1.0}\) & \(2.3^{+0.9}_{-0.7}\) & \(49^{+20}_{-15}\) \\ Normalization & \(0.004^{+0.003}_{-0.002}\) & \(0.008^{+0.002}_{-0.003}\) & \(4.3^{+1.8}_{-1.6}\) & \(0.07^{+0.04}_{-0.02}\) \\ kT (keV) & \(0.76^{+0.14}_{-0.16}\) & \(0.72^{+0.10}_{-0.10}\) & \(0.416^{+0.019}_{-0.019}\) & \(0.50^{+0.05}_{-0.05}\) \\ C/C\({}_{\odot}\) & \(0.03^{+0.04}_{-0.01}\) & \(0.07^{+0.24}_{-0.05}\) & \(0.3^{+2.4}_{-0.3}\) & \(1^{+6}_{-1.0}\) \\ N/N\({}_{\odot}\) & \(0.019^{+0.020}_{-0.007}\) & \(0.03^{+0.07}_{-0.02}\) & \(5^{+2}_{-2}\) & \(0.03^{+0.06}_{-0.02}\) \\ O/O\({}_{\odot}\) & \(5^{+3}_{-2}\) & \(0.05^{+0.16}_{-0.03}\) & \(1.0^{+0.3}_{-0.2}\) & \(0.2^{+0.7}_{-0.2}\) \\ Ne/Ne\({}_{\odot}\) & \(3.6^{+2.5}_{-1.2}\) & \(2.8^{+2.8}_{-1.2}\) & \(0.87^{+0.18}_{-0.15}\) & \(5.4^{+1.8}_{-1.3}\) \\ Mg/Mg\({}_{\odot}\) & \(2.3^{+1.7}_{-0.8}\) & \(0.6^{+0.6}_{-0.4}\) & \(0.88^{+0.16}_{-0.12}\) & \(0.06^{+0.19}_{-0.04}\) \\ Si/Si\({}_{\odot}\) & \(0.9^{+1.2}_{-0.7}\) & \(1.0^{+0.9}_{-0.4}\) & \(1.28^{+0.21}_{-0.17}\) & \(0.03^{+0.05}_{-0.02}\) \\ S/S\({}_{\odot}\) & \(1^{+5}_{-1}\) & \(0.2^{+0.8}_{-0.1}\) & \(1.1^{+0.3}_{-0.2}\) & \(0.03^{+0.05}_{-0.01}\) \\ Fe/Fe\({}_{\odot}\) & \(0.2^{+0.3}_{-0.1}\) & \(0.14^{+0.19}_{-0.12}\) & \(0.18^{+0.05}_{-0.03}\) & \(0.024^{+0.023}_{-0.010}\) \\ \(\tau\) (10\({}^{10}\) cm\({}^{-3}\) s) & \(200^{+300}_{-100}\) & \(300^{+500}_{-200}\) & \(56^{+2}_{-2}\) & \(0.06^{+0.35}_{-0.05}\) \\ Normalization & \(0.014^{+0.005}_{-0.005}\) & \(0.029^{+0.014}_{-0.011}\) & \(5.4^{+0.9}_{-1.1}\) & \(0.44^{+0.10}_{-0.10}\) \\ \hline Background model & & & & \\ \hline factor & \(0.43^{+0.06}_{-0.06}\) & \(0.88^{+0.11}_{-0.10}\) & \(0.38^{+0.08}_{-0.07}\) & \(0.20^{+0.07}_{-0.08}\) \\ \hline Statistic & 931/865 & 880/865 & 972/865 & 1184/865 \\ \hline \end{tabular} Normalization is expressed as \(10^{-14}\frac{\int n_{e}n_{H}dV}{4\pi D^{2}}\) where n\({}_{e}\) is the electron density of the plasma (cm\({}^{-3}\)), n\({}_{H}\) is the hydrogen density (cm\({}^{-3}\)) and D (cm) is the distance of the source
\end{table}
Table 2: Results from the spectral fits of the different regions using a VPSHOCK plus VRNEI model.
ejecta, whose shape can be correlated with the direction of the proper motion of the fast-moving neutron star to the South: they conclude that this structure belongs to IC443 and that the plasma is overionized. The same authors propose that the structure can be associated with a jet feature, as we indicate in Figure 3. In the same Figure, we also observe that in the eROSITA image G189.6+03.3 stretches from West to East. It is possible that two jet activities took place. The direction of the first is indicated by the yellow line in Figure 3, which interestingly crosses the putative position of the unresolved source inside G189.6+03.3: this jet should be associated with the progenitor of G189.6+03.3, with its W part not visible due to the very intense emission of IC443. The second jet structure is the one investigated by Greco et al. (2018) and should be related to IC443.
Starting from this interesting jet scenario, we want to constrain the possible types of progenitors. According to Smartt (2009), the progenitors producing jets are very massive stars with M\(>30\) M\({}_{\odot}\), specifically Luminous Blue Variable (LBV) stars. However, recent papers such as Chiotellis et al. (2021) and Ustamujic et al. (2021b) demonstrated the influence of massive progenitors in shaping the Circumstellar Medium (CSM) through the action of winds. The final effect can be an elongated shape, similar to what a jet is expected to create, and enriched ejecta. As a first observational fact, we observe supersolar abundances of O, Ne, Mg and Si in the 0.7 keV plasma component. These abun
\begin{table}
\begin{tabular}{c c c} \hline Region & C & \\ \hline Model & tbabs*(vashhift*vpshock+vmei+apec) & tbabs*(vashift*vpshock+vpshock+apec) \\ \hline factor & \(0.091^{+0.004}_{-0.003}\) & \(0.093^{+0.004}_{-0.004}\) \\ N\_H (10\({}^{22}\) cm\({}^{-2}\)) & \(0.75^{+0.03}_{-0.02}\) & \(0.55^{+0.03}_{-0.03}\) \\ Velocity (km/s) & \(1600^{+800}_{-800}\) & \(3000^{+500}_{-600}\) \\ kT (keV) & \(0.25^{+0.02}_{-0.02}\) & \(0.217^{+0.019}_{-0.018}\) \\ O/O\({}_{\odot}\) & \(0.4^{+0.3}_{-0.1}\) & \(0.4^{+0.4}_{-0.2}\) \\ Ne/Ne\({}_{\odot}\) & \(0.6^{+0.3}_{-0.2}\) & \(1.6^{+0.7}_{-0.5}\) \\ Mg/Mg\({}_{\odot}\) & \(2.8^{+1.1}_{-1.2}\) & \(0.05^{+0.12}_{-0.03}\) \\ Fe/Fe\({}_{\odot}\) & \(7^{+2}_{-2}\) & \(8^{+2}_{-2}\) \\ \(\tau\) (10\({}^{10}\) cm\({}^{-3}\) s) & \(3.9^{+1.8}_{-1.3}\) & \(13^{+13}_{-5}\) \\ Normalization & \(3.1^{+1.3}_{-1.1}\) & \(1.1^{+0.6}_{-0.5}\) \\ kT (keV) & \(0.288^{+0.014}_{-0.013}\) & \(0.76^{+0.02}_{-0.02}\) \\ C/C\({}_{\odot}\) & \(7^{+2}_{-5}\) & \(1^{+5}_{-1}\) \\ N/N\({}_{\odot}\) & \(6^{+3}_{-2}\) & \(7^{+2}_{-2}\) \\ O/O\({}_{\odot}\) & \(1.2^{+0.4}_{-0.3}\) & \(3.0^{+0.7}_{-0.5}\) \\ Ne/Ne\({}_{\odot}\) & \(2.0^{+0.6}_{-0.4}\) & \(1.9^{+0.4}_{-0.3}\) \\ Mg/Mg\({}_{\odot}\) & \(0.86^{+0.21}_{-0.16}\) & \(1.12^{+0.19}_{-0.14}\) \\ Si/Si\({}_{\odot}\) & \(1.5^{+0.3}_{-0.3}\) & \(0.88^{+0.15}_{-0.10}\) \\ S/S\({}_{\odot}\) & \(0.9^{+0.3}_{-0.2}\) & \(0.9^{+0.3}_{-0.2}\) \\ Fe/Fe\({}_{\odot}\) & \(0.02^{+0.03}_{-0.01}\) & \(0.32^{+0.05}_{-0.04}\) \\ \(\tau\) (10\({}^{10}\) cm\({}^{-3}\) s) & \(39^{+3}_{-3}\) & \(270^{+90}_{-60}\) \\ Normalization & \(4.6^{+0.8}_{-1.0}\) & \(1.42^{+0.15}_{-0.19}\) \\ kT (keV) & \(0.66^{+0.03}_{-0.04}\) & \(1.5^{+0.4}_{-0.6}\) \\ Abundance (Z\({}_{\odot}\)) & \(1.1^{+0.4}_{-0.3}\) & \(1.3^{+1.2}_{-0.9}\) \\ Normalization & \(0.50^{+0.22}_{-0.13}\) & \(0.09^{+0.28}_{-0.04}\) \\ \hline Background model & & \\ \hline factor & \(0.37^{+0.09}_{-0.07}\) & \(0.32^{+0.10}_{-0.09}\) \\ \hline Statistic & 949.29/862 & 1107.45/862 \\ \end{tabular} Normalization is expressed as \(10^{-14}\frac{\int n_{e}n_{H}dV}{4\pi D^{2}}\) where n\({}_{e}\) is the electron density of the plasma (cm\({}^{-3}\)), n\({}_{H}\) is the hydrogen density (cm\({}^{-3}\)) and D (cm) is the distance of the source
\end{table}
Table 3: Results from the spectral fits of region ’C’ using an additional APEC component on top of the model employed before.
dances are close to those described in Ustamujic et al. (2021) for a Luminous Blue Variable case. Contrary to what is predicted by this model, we detect subsolar iron abundances, but this can be easily explained by poor modeling of the Fe L-shell lines, which are not resolved by eROSITA. Moreover, the effective area of eROSITA strongly decreases above 2 keV, making it almost unsuitable for observing the strong Fe lines expected to arise from iron-rich ejecta. Nevertheless, the faint supernova explosion model presents many of the features we highlighted above. Specifically, such supernovae are predicted to have high abundance ratios in the range [C/Fe]-[Al/Fe], as a consequence of a large amount of fallback material (Nomoto et al., 2013), and a jet-like structure. We indeed observe an elongated structure stretching from SE to NW in Figure 3. We therefore evaluated the ratio of the abundances of O, Ne, Mg, Si and S to Si in each region. Ideally, we would have employed Fe, but since it is almost unresolved in our data, we decided to employ Si, which is the element produced immediately before Fe during the explosion of a massive star. We show the results in Figure 8 and in Figure 9. We observe that for O, Ne and Mg the ratio is above 1 in several of the regions analyzed, as predicted with Fe for faint supernovae, for both models tested.
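These logarithmic ratios are straightforward to derive from the fitted abundances; a minimal sketch is given below, using the second-component abundances of region 'C' from Table 1 purely as example input (the values are illustrative of the procedure and do not replace the full error analysis).

```python
import numpy as np

# Second-component abundances (relative to solar) of region 'C' from Table 1,
# used here only as example input for the ratio calculation.
abund = {"O": 7.0, "Ne": 3.7, "Mg": 1.8, "Si": 1.4, "S": 1.5}

def log_ratio_to_si(abundances):
    """Return [X/Si] = log10((X/X_sun) / (Si/Si_sun)) for each element X."""
    si = abundances["Si"]
    return {el: np.log10(val / si) for el, val in abundances.items() if el != "Si"}

print(log_ratio_to_si(abund))  # positive values mean X is enhanced relative to Si
```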
Nevertheless, so far we have considered only single-star explosion models from core-collapse supernovae, but most stars are in binary systems. This is especially true for massive stars, which are likely to be born in crowded, gas-rich areas of the Galaxy. Therefore, supernova explosions driven by binary interaction can be a very common channel to explode stars (see Lajalec et al., 2021, for a recent discussion). In Section 4, we showed how the enhancement of the temperature of the plasma in region B might indicate that G189.6+03.3 is interacting with the HII region S249, especially looking at Figure 4. Since it is well known that IC443 is also interacting with this region (Fesen, 1984; Ambrocio-Cruz et al., 2017), this suggests the two remnants are interacting with the same HII region, i.e. they are at the same distance, regardless of whether the progenitor was in a single or binary system. This would also be consistent with observing similar optical extinction values at different points of the region, as described in Section 3. To test this scenario, one can assume a common distance (d=1500 pc) for the two remnants and estimate the velocity of a hypothetical compact object associated with G189.6+03.3. As the explosion site, we consider the center of the red circle (RA: 06h18m37.3s, DEC: +22:14:41.3) in Figure 1, shown as a red cross in Figure 3. As the current position of the object, we employ the coordinates of the diffuse emission (RA: 06h19m40.8s, DEC: +21:58:03) described in Section 4.1 and visible in Figure 3. Assuming an explosion 30 kyr ago, the resulting velocity is around 315 km/s. This value could be tested in the future with a dedicated pointed observation of the bright spot. Assuming the same explosion site for J061705.3+22212, the fast-moving neutron star located close to the center of IC443, but adopting an age of 3 kyr and its present-day position, we obtain a velocity of 3200 km/s. Considering that the typical proper-motion values found for other compact objects by Mayer & Becker (2021) are on average much lower than 3000 km/s, we argue that some mechanism other than a simple natal kick from the supernova explosion should be in place. The mechanism we propose to justify this very high proper-motion value is a slingshot effect. Given the evidence provided for the existence of two separate remnants, it is appealing to consider a progenitor hosted in a system with three or more stars. Different works (see for example Thompson, 2011; Naoz, 2016; Hamers et al., 2022) have shown how the trajectories of the stars in such systems can be severely altered by the presence of a third star. The effect is even more chaotic when more stars are considered. Therefore, a slingshot effect might be a suitable explanation for finding one progenitor star very far from the other and a compact object with a very high proper-motion value.
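The velocity estimates above are simple plane-of-sky arithmetic and can be sketched as follows; the coordinates of the putative explosion site and of the faint diffuse emission are those quoted in the text, while the neutron star position is read off its source name, which is an assumption of this sketch.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

d = 1500 * u.pc  # assumed common distance of the two remnants

site = SkyCoord("06h18m37.3s", "+22d14m41.3s")  # putative explosion site (red cross)
blob = SkyCoord("06h19m40.8s", "+21d58m03s")    # faint diffuse emission (Section 4.1)
ns = SkyCoord("06h17m05.3s", "+22d21m27s")      # neutron star position from its name (assumption)

def transverse_velocity(c1, c2, age, dist):
    """Plane-of-sky velocity needed to cover the angular separation c1-c2 in time 'age'."""
    separation = c1.separation(c2).to(u.rad).value * dist  # small-angle physical separation
    return (separation / age).to(u.km / u.s)

print(transverse_velocity(site, blob, 30e3 * u.yr, d))  # ~315 km/s
print(transverse_velocity(site, ns, 3e3 * u.yr, d))     # ~3200 km/s
```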
The slingshot mechanism would also explain why the proper motion of J061705.3+2221 is aligned neither with the center of IC443 (Swartz et al., 2015; Greco et al., 2018) nor with that of G189.6+03.3. Moreover, as shown by Ustamujic et al. (2021), the center of the emission of IC443, determined by the maximum intensity of the X-ray emission, is probably the result of a complex interplay between the expanding shock wave and a molecular cloud. Therefore, it is likely that the direction of the compact object is not aligned with it. Having two supernova explosions in two different regions would explain why the column density in region C differs from the values measured in the other regions. However, it is also possible that two single stars belonging to the same association of stars, Gem OB1, originated IC443 and G189.6+03.3 respectively. In this respect, the simulations of Ustamujic et al. (2021) show how the single-explosion scenario can also explain the observed shape of IC443.
We also want to discuss how several papers (Yamaguchi et al., 2009; Matsumura et al., 2017) have shown the presence of recombining plasma in the region of IC443. We also obtain good fits with such a model superimposed on shocked material. Interestingly, Yamauchi et al. (2020) detect a two-component recombining plasma in the North-East spot of G189.6+03.3 with _Suzaku_, one component of which has a temperature of 0.7 keV. It is remarkable that values close to this temperature are found in all the regions we tested, and even more so in region 'D', which is quite far from the spot observed by Yamauchi et al. (2020). Compared to Yamauchi et al. (2020), in this region we find an ionization timescale of the same order of magnitude with the
Figure 7: Spectrum of the diffuse emission close to the center of G189.6+03.3 (light blue circle in Figure 3) modeled with the VPSHOCK+VRNEI model. The data have been rebinned for display purposes. The two red lines indicate the two model components, while the magenta line represents the instrumental background, which we modeled.
VPSHOCK+VRNEI model, but with poorer fit statistics compared to our double VPSHOCK model. To explain this difference, we note that Tanaka et al. (2022) recently underlined the contribution that charge exchange can have in supernova remnant X-ray spectra. To highlight this effect, those authors employ high-resolution spectra, with a resolving power that our instrument cannot reach. Therefore, differences in the composition and density of the clouds in regions B and D might actually result in different charge-exchange contributions that cannot be resolved in our spectra. Since charge exchange is likely to occur at the contact point between the shock front and neutral material, this might explain the difference between our fit and those presented by Yamauchi et al. (2020). Nevertheless, considering that eROSITA is more sensitive in soft X-rays while _Suzaku_ collects more photons in hard X-rays, the results can be considered quite consistent with each other.
Two different compact objects lie in the same area of the sky, and we tried to understand whether they can be associated with IC443 and G189.6+03.3. Starting with J061705.3+22212, the column density of \(0.7\cdot 10^{21}\) cm\({}^{-2}\) reported in Greco et al. (2018) is similar to what we obtain in region 'C' (Section 4). Therefore, the most straightforward conclusion is that J061705.3+22212 is probably associated with IC443, as shown by many previous studies (Keohane et al., 1997; Swartz et al., 2015; Greco et al., 2018). Regarding the pulsar PSR B0611+22, the proper-motion direction completely rules out the possibility that this object is related to G189.6+03.3 or to IC443. Despite that, the column density derived in Section 3 is consistent with the column density measured for G189.6+03.3 (\(\sim 4.0\cdot 10^{21}\) cm\({}^{-2}\)). This would imply that the source is at least as distant as the remnants, which does not exclude the possibility that it is located much further away, as indicated by some models of the dispersion measure. Therefore, given the proper-motion direction and the large uncertainties in the dispersion-measure models, it is less speculative to assume that this pulsar is not associated with either of the two supernova remnants.
In addition to these two compact objects, Bykov et al. (2008) and Zhang et al. (2018) showed the presence of several point sources at the E boundary of our region C, thanks to deep _Chandra_, _XMM-Newton_ and _NuSTAR_ observations. The nature of these objects is still quite unclear, especially since at least two of them are found to be variable. Moreover, we observe that one of these objects (dubbed 'src1a' in Bykov et al. (2008)) looks like a neutron star embedded in a pulsar wind nebula. Further studies are needed to assess whether it can be associated with IC443 or G189.6+03.3.
## 6 Conclusions
In this work, we finally confirmed that G189.6+03.3 is a supernova remnant. We find that its emission can be described by a two-thermal-component plasma, one component of which is a 0.7 keV gas in equilibrium, found in all the regions analyzed. Considering also that we detect a uniform absorption over the entire remnant, close to \(\sim 4.0\cdot 10^{21}\) cm\({}^{-2}\), we argue that a single diffuse emission covers the whole system. The high surface brightness of region C (Figure 1) complicated the detection of the covering 0.7 keV component, which we managed to recover by adding an APEC component to our models. Given the ubiquitous presence of this plasma at equilibrium, we conclude that G189.6+03.3 completely overlaps with IC443.
Figure 8: Logarithmic ratio of the abundance of the element X (O, Ne, Mg, Si, S) to Si. The abundances are derived from a two-component fit employing two shock models (VPSHOCK).
We obtain high abundance ratios of [O/Si], [Ne/Si] and [Mg/Si] in most of the regions and an elongated structure, all indications in favor of a faint supernova explosion. From the enhancement of the temperature in the second plasma component of region B, we consider the possibility that G189.6+03.3 is interacting with the HII region S249, as IC443 is. In this case, the two remnants should be placed at the same distance, leaving two possibilities: in one scenario, two isolated massive stars belonging to the group Gem OB1 generated the two remnants; an alternative and intriguing scenario is given instead by two objects belonging to a multiple system.
Given these two hypotheses, we discuss the association of the two nearby compact objects with the remnants, confirming that CXOU J061705.3+22212 can be associated with IC443, while the pulsar PSR B0611+22 is unrelated to either of the two remnants. However, we suggest a third compact object could be in the field, seen as unresolved faint emission near the center of G189.6+03.3 (Section 4.1).
Nevertheless, we also report that several unidentified point sources were observed in the past in the East part of IC443, and that one of them may be a compact object associated with one of the two remnants.
We conclude by underlining the need for new pointed observations of G189.6+03.3, given the large number of new features shown in this work. A pointed observation toward the center would be particularly valuable to assess the existence of a compact object, helping to shed light on whether the progenitor was indeed a faint supernova or whether the explosion happened via a different channel.
###### Acknowledgements.
We thank the anonymous referee for the useful comments and suggestions that helped to improve the quality of the manuscript. We would like to thank all the eROSITA team for the helpful discussions and suggestions provided during the realization of the paper. The image in Figure 4 has been extracted with the Aladin Desktop (Bonnarel et al. 2000) tool and exploited using astropy. FC acknowledges support from the Deutsche Forschungsgemeinschaft through the grant BE 1649/11- and from the International Max-Planck Research School on Astrophysics at the Ludwig-Maximilians University (IMPRS). FC thanks Hans-Thomas Janka for the useful discussion about supernova explosion models and abundance yields. WB thanks James Turner for pointing out the observation of J061705.3+22212 in the FAST GPPS. This work is based on data from eROSITA, the soft X-ray instrument aboard _SRG_, a joint Russian-German science mission supported by the Russian Space Agency (Roscosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The _SRG_ spacecraft was built by Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE). The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nürnberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP) and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig-Maximilians-Universität München also participated in the science preparation for eROSITA. The eROSITA data shown here were processed using the eSASS software system developed by the German eROSITA consortium. This work makes use of the Astropy Python package7 (Astropy Collaboration et al. 2013, 2018). A particular mention goes to the in-development coordinated package of Astropy for region handling called Regions8. We acknowledge also the use of the Python packages Matplotlib (Hunter 2007), PyLaTeX and NumPy (Harris et al. 2020).
Footnote 7: [https://www.astropy.org/](https://www.astropy.org/)
Footnote 8: [https://github.com/astropy/regions](https://github.com/astropy/regions)
Figure 9: Logarithmic ratio of the abundance of the element X (O, Ne, Mg, Si, S) to Si. The abundances are derived from a two-component fit employing one single-temperature parallel shock model (VPSHOCK) and a recombining additive model (VRNEI). We do not show the plot for region C since the 0.7 keV component is absent when VRNEI is employed.
2310.15244 | Regulating star formation in a magnetized disk galaxy | We use high-resolution MHD simulations of isolated disk galaxies to
investigate the co-evolution of magnetic fields with a self-regulated,
star-forming interstellar medium (ISM). The simulations are conducted using the
Ramses AMR code on the standard Agora initial condition, with gas cooling, star
formation and feedback. We run galaxies with a variety of initial magnetic
field strengths. The fields grow rapidly and achieve approximate saturation
within 500 Myr, but at different levels. The galaxies reach a quasi-steady
state, with slowly declining star formation due to both gas consumption and
increases in the field strength at intermediate ISM densities. We connect this
behaviour to differences in the gas properties and overall structure of the
galaxies. In particular, strong fields limit feedback bubbles. Different cases
support the ISM using varying combinations of magnetic pressure, turbulence and
thermal energy. Magnetic support is closely linked to stellar feedback in the
case of initially weak fields but not for initially strong fields. The spatial
distribution of these supports is also different in each case, and this is
reflected in the stability of the gas disk. We relate this back to the overall
distribution of star formation in each case. We conclude that a weak initial
field can grow to produce a realistic model of a local disk galaxy, but
starting with typical field strengths will not. | Hector Robinson, James Wadsley | 2023-10-23T18:01:04Z | http://arxiv.org/abs/2310.15244v2 | # Regulating star formation in a magnetized disk galaxy
###### Abstract
We use high-resolution MHD simulations of isolated disk galaxies to investigate the co-evolution of magnetic fields with a self-regulated, star-forming interstellar medium (ISM). The simulations are conducted using the Ramses AMR code on the standard Agora initial condition, with gas cooling, star formation and feedback. We run galaxies with a variety of initial magnetic field strengths. The fields grow rapidly and achieve approximate saturation within 500 Myr, but at different levels. The galaxies reach a quasi-steady state, with slowly declining star formation due to both gas consumption and increases in the field strength at intermediate ISM densities. We connect this behaviour to differences in the gas properties and overall structure of the galaxies. In particular, strong fields limit feedback bubbles. Different cases support the ISM using varying combinations of magnetic pressure, turbulence and thermal energy. Magnetic support is closely linked to stellar feedback in the case of initially weak fields but not for initially strong fields. The spatial distribution of these supports is also different in each case, and this is reflected in the stability of the gas disk. We relate this back to the overall distribution of star formation in each case. We conclude that a weak initial field can grow to produce a realistic model of a local disk galaxy, but starting with typical field strengths will not.
keywords: Methods: numerical - MHD - ISM: magnetic fields - Galaxies: star formation
## 1 Introduction
Magnetic fields have been detected at all scales in astrophysics (Han, 2017) and are predicted to play important roles in galaxy evolution. A key question is what role magnetic fields play in regulating galactic star formation. The most straightforward consideration is that magnetic fields provide an additional pressure which can support gas against gravitational collapse on the scale of a galactic disk, but they can also affect galaxy-scale dynamics, the properties of turbulence, the effectiveness of stellar feedback, and the formation of molecular clouds. Importantly, turbulence acts to amplify and reshape the magnetic field, so in practice we must study the joint evolution of the magnetic field and the ISM together.
Detecting magnetic fields in galaxies is challenging. A diverse set of methods allows us to infer differing aspects of the field. These include synchrotron emission, the Zeeman effect, Faraday rotation, polarised thermal emission of magnetically aligned dust grains (Pattle et al., 2023), and polarised emission of starlight due to extinction by aligned dust grains.
Of primary interest is the field strength. Synchrotron observations can be used to estimate total magnetic field strengths by assuming energy equipartition between magnetic fields and cosmic ray particles. Field strengths measured in spiral galaxies with this method tend to be around \(\sim\)10 \(\mu\)G and decrease slowly with galactocentric radius (Fletcher et al., 2011; Basu & Roy, 2013; Beck & Wielebinski, 2013). The dependence on the poorly constrained cosmic ray distribution means both the field strength and gradient are highly uncertain. Due to the diffusive nature and resulting large scale height of cosmic rays, this tracer is expected to probe a thick volume around the galactic disk (Zweibel, 2017).
In the Milky Way and a few nearby galaxies, line-of-sight magnetic field strengths have also been measured via the Zeeman effect (Crutcher et al., 2010), which causes emission lines from molecules to split in the presence of line-of-sight magnetic fields. Zeeman observations in the Milky Way have found field strengths of \(\sim\) 10 \(\mu\)G at number densities of 10-100 cm\({}^{-3}\). In gas at number densities \(\gtrsim\) 1000 cm\({}^{-3}\), the upper envelope of the field strength scales with number density as \(B\propto n^{0.5-0.7}\) (Crutcher et al., 2010). There is considerable spread in these measurements for 10-1000 cm\({}^{-3}\). It is commonly assumed that typical field strengths remain flat at lower densities.
The remaining techniques mostly indicate the field morphology. Many observations find large-scale spiral patterns in galactic magnetic fields, even in galaxies that do not have optical spiral structure (Chyzy & Buta, 2008; Beck et al., 2019; Lopez-Rodriguez et al., 2023). Down to \(\sim\) 100 pc scales, the fields tend to be aligned parallel to the structure, such as low-density filaments (Goldsmith et al., 2008; Sugiemi et al., 2011). These results have been confirmed more recently by synchrotron maps of molecular clouds in the Milky Way (Planck Collaboration et al., 2016).
There are many theoretical predictions for how magnetic fields should evolve and affect their host galaxies. Tiny cosmic seed fields are amplified exponentially to detectable levels within a few Gyr (Geach et al., 2023). On small galactic scales, the turbulent dynamo is expected to be ubiquitous. It can exponentially amplify field strengths over timescales of \(\lesssim 10\) Myr, saturating at a level that is a fraction of the turbulent energy (Federrath et al., 2011; Rieder & Teyssier, 2016). The \(\alpha-\Omega\) dynamo is expected to order the small-scale turbulent fields into disk-scale regular fields over Gyr timescales (Brandenburg & Subramanian, 2005). At intermediate scales there is also the gravitational-instability dynamo associated with spiral structures (Riols & Latter, 2019). Amplification rates in simulations are still highly dependent on numerical resolution and feedback methods (Rieder
& Teyssier, 2016), which can make comparisons between different simulations difficult; however, the level at which the fields saturate appears to be independent of those effects. Thus saturated fields present an appealing target for study that is less dependent on numerical method differences.
After saturation, the energy density of magnetic fields relative to other sources can vary depending on the phase of gas. The diffuse medium can have significant magnetic support, meaning that the magnetic pressure is comparable to thermal pressure (plasma \(\beta~{}\sim\) 1). At higher densities, where gas is colder, the primary support is turbulent and the clouds are typically magnetically supercritical (E\({}_{\rm mag}<\) E\({}_{\rm grav}\sim\) E\({}_{\rm turb}\)) (see review by Beck & Wielebinski, 2013).
Magnetic fields are expected to play a key role in how gas transitions between phases within the ISM (Krumholz & Federrath, 2019). This manifests as a difference in the distribution of gas densities in the ISM, which is created by turbulent compression and expansion. Turbulence predicts a lognormal probability density function (PDF), which is also seen in observations (Kainulainen et al., 2009). At high densities, when the gas becomes gravitationally unstable, it diverges from the lognormal (Burkhart, 2018). Magnetic pressure narrows the width of the PDF by resisting the ability of turbulence to compress gas. Simulations on cloud and kpc scales have shown that this effect can reduce star formation rates by a factor of 2-3 (Federrath & Klessen, 2012; Padoan et al., 2012; Girichidis et al., 2018; Krumholz & Federrath, 2019; Kim et al., 2023). Turbulence also plays a role in the magnetic field strength vs. gas density scaling relations. The 0.5-0.7 power law was originally thought to come from gravitational contraction, but cloud-scale simulations by Cao & Li (2023) retrieve the same scaling without self-gravity and the authors argue it comes from turbulent compression instead. A key takeaway from this discussion is that the role of magnetic fields on smaller scales is complex and under intense study. Working on slightly larger, galactic scales allows for a simpler treatment of star formation.
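For reference, a commonly used parameterisation of this behaviour (quoted here only as an illustrative form taken from isothermal turbulent-box studies, not as a result of the works cited above) writes the density PDF as a lognormal in \(s\equiv\ln(\rho/\rho_{0})\) whose width shrinks as the plasma \(\beta\) decreases:

\[p(s)\,ds=\frac{1}{\sqrt{2\pi\sigma_{s}^{2}}}\exp\left[-\frac{(s-s_{0})^{2}}{2\sigma_{s}^{2}}\right]ds,\qquad s_{0}=-\frac{\sigma_{s}^{2}}{2},\qquad\sigma_{s}^{2}=\ln\left(1+b^{2}\mathcal{M}^{2}\,\frac{\beta}{\beta+1}\right),\]

where \(\mathcal{M}\) is the sonic Mach number and \(b\) encodes the turbulent forcing mix; in the hydrodynamic limit \(\beta\rightarrow\infty\) the familiar \(\sigma_{s}^{2}=\ln(1+b^{2}\mathcal{M}^{2})\) is recovered, so stronger fields (lower \(\beta\)) narrow the PDF and reduce the mass of dense, star-forming gas.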
MHD simulations are a powerful tool for studying magnetic fields on many scales. On cosmological scales, magnetic fields are generally not dynamically dominant but some simulations have begun to include them (Grand et al., 2017; Hopkins et al., 2018; Steinwandel et al., 2022; Martin-Alvarez et al., 2018). Cosmological seed fields are the origin of all galactic fields; thus cosmological simulations can be used to test models of magnetic field amplification and produce toroidally dominated fields similar to those seen in observations (Rieder & Teyssier, 2017). One difficulty with cosmological simulations is that magnetic fields amplify strongly during the poorly resolved and highly chaotic infall and merging phases. These processes apply ongoing, significant perturbations to the state of the galaxy. There is also the computational expense of simulating a large cosmological environment for the full history of the universe.
An alternative approach is high resolution simulations of individual galaxies, including the effects of magnetic fields. Körtgen et al. (2019) simulated an isolated disk galaxy and showed that magnetic fields can speed up disk fragmentation and drive outflows hundreds of parsecs above the disk even without stellar feedback. Galaxies that have both star formation and feedback tend to have lower star formation rates when MHD is also included and can magnetize their CGM with magnetized outflows (Steinwandel et al., 2019, 2020; Pakmor et al., 2020; Wissing & Shen, 2023), suggesting that MHD can help regulate star formation on disk scales. Recent simulations also suggest that the magnetic field strength continuously declines with density, effectively as a power law, extending to densities below the point where observationally inspired models suggest it should become constant (e.g. Ponnada et al., 2022).
Prior work has tended to focus on amplification rates and the final magnetic configuration (e.g. Rieder & Teyssier, 2017; Su et al., 2018). However, it is also important to consider how magnetic fields affect the state of the gas in the multiphase interstellar medium and whether star formation is regulated in the same way as we introduce progressively stronger fields.
In this paper we present a controlled study, simulating an isolated galaxy with a well-known initial condition, with and without magnetic fields, and with several different initial field strengths. Thus we can focus on the development and co-evolution of magnetic fields due to self-regulated star formation and feedback in a galactic disk. By running such cases for several dynamical times, we expect to produce a steady-state, self-regulated star-forming ISM to study. In addition, by using a standard setup and simple, well-tested star formation and feedback models, we aim to make the interpretation more straightforward.
The remainder of the paper is organized as follows: In section 2, we describe the simulation method and magnetized galaxy setup. In section 3.1, we analyse the evolution of the magnetic field in each galaxy, including the approach to a saturated state and comparing the strengths to observations. We examine the resulting visual appearance of each galaxy in section 3.2. In section 3.3, we examine the overall star formation and its radial distribution in each case. We then explore how this is reflected in their ISM. Section 3.4 examines the gas properties, seeking to determine the underlying drivers of the differences in star formation rates and their connection to the magnetic fields and other forms of support for the gas. In section 3.5, we study the combined effect of the different support mechanisms on the gravitational stability of the galactic disks. In section 4 we discuss our results and future work. Finally we summarize our conclusions in section 5.
## 2 Simulation method
We conduct magnetohydrodynamic (MHD) simulations of isolated galaxies using the adaptive mesh refinement (AMR) code Ramses (Teyssier, 2002) to solve the ideal MHD equations using an HLLD approximate Riemann solver (Miyoshi & Kusano, 2005). The solenoidal constraint (\(\nabla\cdot B=0\)) is enforced with the constrained transport method (Evans & Hawley, 1988). The dynamics of stars and dark matter are solved using the particle-mesh technique (Hockney & Eastwood, 1981). Gas cooling and heating is included via the Grackle chemistry and cooling library (Smith et al., 2017). Grackle uses metal cooling rates tabulated from output from the photo-ionization code Cloudy(Ferland et al., 2017). We also include a photoelectric heating rate of \(\zeta=4\times 10^{-26}\) erg cm\({}^{-3}\) s\({}^{-1}\), which allows for a two-phase ISM similar to that proposed by Wolfire et al. (2003).
### Initial Conditions and Refinement
The galaxies we simulate are all based on the medium-resolution isolated disk galaxy from the Agora Project (Kim et al., 2016), but with an initial magnetic field added. It has an active dark matter halo that follows an NFW profile, a stellar bulge that follows a Hernquist profile, and a disk that is 80% stars and 20% gas by mass with a density profile given by
\[\rho_{\rm gas}(r,z)=\rho_{0}e^{(-r/r_{d})}e^{(-|z|/z_{d})} \tag{1}\]
with \(\rho_{0}=M_{\rm gas}/(4\pi r_{d}^{2}z_{d})\), where \(r_{d}=3.432\) kpc and \(z_{d}=0.1r_{d}\). The stellar and dark matter components are modelled with collisionless particles, and the gas is initialized on a Ramses AMR grid. Kim et al. (2016) contains full details of the disk setup.
This initial condition has been described as similar to a 'Milky-Way-like' spiral galaxy at a redshift of z\(\sim\)1. However, the dark matter halo and rotation curve are quite similar to those of a \(z\sim 0\) large disk galaxy, such as NGC 5055 (the Sunflower galaxy). The initial surface densities of gas and stars are also quite similar to those of NGC 5055 (as noted by Benincasa et al., 2020). In this sense it is actually a reasonable proxy for a nearby spiral galaxy.
The gas disk is initialized inside a domain 600 kpc on a side with a base grid of 64\({}^{3}\) cells, which is allowed to refine by an additional 10 levels, giving a spatial resolution of 9.15 pc at the highest level. A cell will refine if it contains more than 10,000 M\({}_{\odot}\) of gas or if it contains more than 8 collisionless particles. The disk starts off with \(M_{\rm gas}=8.59\times 10^{9}\) M\({}_{\odot}\), which decreases throughout the simulation as gas is converted into stars. The gas is initialized to a temperature of 10,000 K and solar metallicity.
Inside the gas disk, we initialize a magnetic field with a morphology that is purely toroidal, containing no vertical or radial components. The field strength scales with gas density as
\[B=B_{0}\left(\frac{\rho}{\rho_{0}}\right)^{2/3} \tag{2}\]
B\({}_{0}\) is the magnetic field strength at the midplane at the center of the galaxy. The field strength decreases further out in the disk, falling by a factor of 10 by a radius of 12 kpc. Due to the grid initialization process, some initial B values are changed by \(\pm\) 10%. Figure 1 includes radial profiles of the initial magnetic field strength (straight red lines).
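The initial condition defined by equations 1 and 2 is easy to reproduce; the sketch below evaluates both profiles (the constant \(B_{0}=0.85\,\mu\)G corresponds to the MHD Weak case in Table 1) and verifies the quoted factor-of-10 decline in field strength by 12 kpc.

```python
import numpy as np

KPC = 3.086e21                     # cm
MSUN = 1.989e33                    # g
r_d, z_d = 3.432, 0.3432           # disk scale lengths [kpc], z_d = 0.1 r_d
M_gas = 8.59e9                     # initial gas mass [Msun]
rho_0 = M_gas * MSUN / (4 * np.pi * (r_d * KPC)**2 * (z_d * KPC))  # central density [g/cm^3]

def rho_gas(r, z):
    """Initial gas density (equation 1) at cylindrical radius r and height z [kpc]."""
    return rho_0 * np.exp(-r / r_d) * np.exp(-np.abs(z) / z_d)

def B_init(r, z, B0=0.85e-6):
    """Initial toroidal field strength [G] from equation 2, B = B0 (rho/rho_0)^(2/3)."""
    return B0 * (rho_gas(r, z) / rho_0)**(2.0 / 3.0)

print(B_init(0.0, 0.0) / B_init(12.0, 0.0))  # ~10: field drops by a factor of 10 by 12 kpc
```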
We simulate four galaxies which are identical except for the initial magnetic field, with values for each case summarized in Table 1. The first case has zero initial magnetic field, equivalent to being simulated with regular hydrodynamics. The remaining three galaxies are referred to as MHD Weak, MHD Medium, and MHD Strong. In each of them, the field strength initially scales with gas density according to equation 2, but the constant \(B_{0}\) is modified so that the MHD Medium has fields 10 times stronger than the MHD Weak, and MHD Strong has fields 10 times stronger than MHD Medium. An increase of 10 times in magnetic field strength corresponds to an increase of 100 times in magnetic energy.
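The plasma \(\beta\) values quoted in Table 1 follow directly from the listed field strengths; a quick check, assuming the initial gas temperature of 10,000 K and the \(n\sim 0.25\) cm\({}^{-3}\) density of the table caption, reproduces their order of magnitude.

```python
import numpy as np

K_B = 1.381e-16  # Boltzmann constant [erg/K]

def plasma_beta(B_gauss, n_cm3=0.25, T=1.0e4):
    """beta = P_thermal / P_magnetic, taking P_th = n k_B T and P_mag = B^2 / 8 pi."""
    return (n_cm3 * K_B * T) / (B_gauss**2 / (8.0 * np.pi))

for name, B in [("MHD Weak", 0.1e-6), ("MHD Medium", 1.0e-6), ("MHD Strong", 10.0e-6)]:
    print(f"{name}: beta ~ {plasma_beta(B):.2g}")  # ~8e2, ~8, ~0.08, as in Table 1
```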
### Star Formation and Feedback
Star particles are formed stochastically via a Schmidt law of the form
\[\frac{d\rho_{\rm s}}{dt}=\frac{\epsilon_{\rm ff}\,\rho}{t_{\rm ff}}\quad{\rm if}\quad\rho>\rho_{\rm crit} \tag{3}\]
where \(\rho_{\rm s}\) is the stellar density, \(\epsilon_{\rm ff}\) is the star formation efficiency per free-fall time, which we set to \(\epsilon_{\rm ff}=0.1\), \(t_{\rm ff}\) is the local gravitational free-fall time, and \(\rho_{\rm crit}\) is a threshold density corresponding to a number density of 100 cm\({}^{-3}\).
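A minimal sketch of how such a stochastic prescription can be sampled per cell and timestep is given below; the Poisson draw and the fixed star particle mass are illustrative choices, not necessarily the exact Ramses implementation, and the threshold is converted to a mass density assuming 100 hydrogen atoms per cm\({}^{3}\).

```python
import numpy as np

G = 6.674e-8     # gravitational constant [cgs]
M_H = 1.67e-24   # hydrogen mass [g]
rng = np.random.default_rng(42)

def n_new_star_particles(rho, cell_vol, dt, m_star, eps_ff=0.1, rho_crit=100.0 * M_H):
    """Poisson draw of the number of star particles formed in one cell.

    rho [g/cm^3], cell_vol [cm^3], dt [s], m_star [g].
    The expected stellar mass follows d(rho_s)/dt = eps_ff * rho / t_ff above rho_crit.
    """
    if rho < rho_crit:
        return 0
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho))        # local free-fall time
    expected_mass = eps_ff * rho / t_ff * cell_vol * dt   # mass expected to turn into stars
    return int(rng.poisson(expected_mass / m_star))
```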
Stellar feedback is entirely from supernovae (SN), injected as thermal energy 5 Myr after stars first form, with 10\({}^{51}\) erg per 91 M\({}_{\odot}\) of young stars. This energy is treated via the delayed cooling model of Agertz et al. (2011), which allows unresolved superbubbles to grow correctly by initially treating hot supernova ejecta as unresolved, non-cooling bubbles whose energy is converted to regular thermal energy with an e-folding time of 5 Myr. At the chosen resolution, other forms of feedback are largely unresolved, as are the dense structures in which they would chiefly operate.
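The bookkeeping implied by these numbers is simple; the sketch below tracks the injected energy and the decay of the delayed (non-cooling) energy reservoir, and is only an illustration of the scheme rather than the actual Ramses source.

```python
import numpy as np

MSUN = 1.989e33                       # g
MYR = 3.156e13                        # s
E_SN_PER_MASS = 1e51 / (91.0 * MSUN)  # erg of SN energy per gram of young stars
T_DELAY = 5.0 * MYR                   # delay between star formation and injection
T_DECAY = 5.0 * MYR                   # e-folding time of the delayed-cooling reservoir

def injected_energy(m_particle, age):
    """Thermal energy [erg] injected once a star particle of mass m_particle [g] reaches 5 Myr."""
    return E_SN_PER_MASS * m_particle if age >= T_DELAY else 0.0

def delayed_energy(E0, t_since_injection):
    """Remaining non-cooling energy a time t_since_injection [s] after injection."""
    return E0 * np.exp(-t_since_injection / T_DECAY)
```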
We note that these choices differ from the original Agora simulations (Kim et al., 2016). In particular, Agora used a low efficiency (\(\epsilon_{\rm ff}=0.01\)) in dense gas. This pushes the characteristic time for star formation to \(\sim\) 1 Gyr, effectively forcing a typical overall galactic star formation rate (SFR) that was the same with or without feedback. With \(\epsilon_{\rm ff}=0.1\), stellar feedback is necessary to regulate star formation to the level expected for typical disk galaxies (Robinson, 2021). Feedback is expected to couple in such a way as to reproduce large-scale ISM properties, including the scale height (Ostriker et al., 2010; Benincasa et al., 2016). Thus we have adopted a robust and easy-to-reproduce approach to star formation and feedback for this work.
Each galaxy was evolved for 1 Gyr, corresponding to a handful of dynamical times at the outer radius. The inner regions were expected to evolve rapidly and then settle into a quasi-steady state of ongoing star formation. Thus the goal was to produce an interval of several 100 Myr in which to study this relatively quiet, self-regulated state in each case.
## 3 Simulation Results
At the start of the simulation, all four galaxies begin their evolution by settling from the slightly unstable initial state by compressing vertically. While this happens, cold-phase gas condenses and fragments along spiral arms into individual clouds. In the Hydro and MHD Weak cases, this collapse is violent and results in a starburst, mostly localized to the galactic center (r \(<\) 2 kpc). In the cases with stronger fields, the fields resist the compression, preventing a starburst from occurring, and the onset of star formation is delayed. As a result, the Hydro and MHD Weak galaxies have a smaller fraction of gas remaining in the galactic center for the remainder of the simulation; for this reason we exclude the central 2 kpc from some of our analysis to ensure a fair comparison between the galaxies.
### Magnetic Field Evolution
The primary focus of this work is how magnetic fields influence the structure of the ISM and the regulation of star formation at later stages. Before examining that we give a brief overview of the magnetic field evolution leading to the final state.
Figure 1 shows the average magnetic field strength vs. radius in each galaxy over time, and compares them to field strengths from synchrotron observations by Basu & Roy (2013). Firstly, we note that this observational sample includes NGC 5055, for which the simulation set-up is a reasonable proxy. Secondly, the variation among the observational sample is also quite small, with estimated field strengths of \(\sim\) 10-20 \(\mu\)G in the radii of interest, 2-10 kpc.
The figure includes two methods of calculating the average field strength. The first is a mass-weighted average (top panels) and the
\begin{table}
\begin{tabular}{l r r r} \hline Name & B\({}_{\rm ISM}\) & \(\beta_{\rm ISM}\) & B\({}_{0}\) \\ \hline Hydro & 0 & \(\infty\) & 0 \\ MHD Weak & 0.1 \(\mu\)G & 800 & 0.85 \(\mu\)G \\ MHD Medium & 1 \(\mu\)G & 8 & 8.5 \(\mu\)G \\ MHD Strong & 10 \(\mu\)G & 0.08 & 85 \(\mu\)G \\ \hline \end{tabular}
\end{table}
Table 1: Summary of initial magnetic properties of each simulation. B\({}_{\rm ISM}\) and \(\beta_{\rm ISM}\) are magnetic field and plasma \(\beta\) values, respectively, in the typical ISM (gas density \(n\sim 0.25\) cm\({}^{-3}\)). B\({}_{0}\) is the corresponding magnetic field strength in equation 2 (at the geometric center of the galaxy). Otherwise initial magnetic field strengths scale as B \(\propto\rho^{2/3}\). As a result of this scaling, the plasma \(\beta\) (\(\nabla_{\rm thermal}\)/\(\nabla_{\rm mag}\)) increases with radius.
second is a volume-weighted average (lower panels). The method of averaging does not particularly affect the field estimate in the MHD Strong galaxy because the gas scale height is quite large (see Section 3.4 for a quantitative comparison). For weaker fields, the galaxies are thinner and the differences more pronounced.
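The two averages are computed in the standard way from the AMR cell data (e.g. after flattening the cells with an analysis package such as yt); a minimal sketch of the binning is given below.

```python
import numpy as np

def radial_B_profiles(r, B, cell_mass, cell_vol, r_edges):
    """Mass- and volume-weighted mean |B| in radial bins.

    r, B: cylindrical radius and field strength of each cell (1D arrays).
    cell_mass, cell_vol: gas mass and volume of each cell, used as weights.
    """
    idx = np.digitize(r, r_edges) - 1
    nbins = len(r_edges) - 1
    B_mass = np.full(nbins, np.nan)
    B_vol = np.full(nbins, np.nan)
    for i in range(nbins):
        sel = idx == i
        if sel.any():
            B_mass[i] = np.average(B[sel], weights=cell_mass[sel])
            B_vol[i] = np.average(B[sel], weights=cell_vol[sel])
    return B_mass, B_vol
```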
The volume-weighted average in the MHD Weak galaxy is the only one that saturates with a flat profile, as seen in the synchrotron estimates, albeit with a lower field strength. Synchrotron emission arises from cosmic rays, which are typically assumed to have a large scale height (Zweibel, 2017), making the volume-weighted average the closest match. Precise cosmic ray energy distributions are hard to pin down. Regardless of the assumed cosmic ray to magnetic energy ratio, the observed flat radial distribution should be approximately preserved. This makes the MHD Weak case the most compelling. In particular, we infer that initial fields must be well below equipartition in order to naturally evolve to a realistic saturated state.
Similar amounts of magnetic field amplification are visible in both the MHD Weak and MHD Medium galaxies regardless of the choice of weighting. The MHD Strong galaxy experiences a net loss of field strength over time, confirming that it was oversaturated from the beginning. Both the MHD Weak and Medium galaxies appear close to saturation after about 500 Myr, but they do not saturate at the same value. Field strengths may decrease over time due to magnetic flux leaving the disk. In the inner regions of the MHD Weak galaxy, field strengths peak at around t=500 Myr and then decrease slightly due to flux leaving in vertical outflows. In the MHD Strong galaxy, net flux loss is expected due to magnetic braking.
In all cases, the amplification rate is lower in the outer disk, where there is less star formation (see section 3.3). This behaviour is most consistent with a turbulent style dynamo, both in terms of the rate of growth and the strong association with feedback from star formation.
Our field strengths can also be compared to those inferred from Zeeman measurements by placing them on a field strength vs. density (B vs. n) plot, shown in figure 2. Here we plot the median field strength in each density bin, but variations of an order of magnitude are common at a given density. In this figure, field amplification due to dynamo action is seen as a vertical translation. When field strengths change due to gas compression (or expansion), the points also move to the right (or left) as the density increases (or decreases). This largely explains the rapid expansion of the plot upward in density and field strength after the initial state. In our galaxies, it is clear that most amplification is happening in the diffuse medium with number densities of \(0.1-10\) cm\({}^{-3}\), while the dense gas strengths remain constant or even slightly decrease over time. The result is a single power law across the full density range, with a slope close to 0.5 in the MHD Weak case. There is a trend to shallower power laws as we progress from the MHD Weak to the MHD Strong case. Thus, only the MHD Weak case displays the steep power law at higher densities inferred by Crutcher et al. (2010).
In the MHD Strong galaxy, any gas that became denser than in the initial condition has typically experienced field increases of at most a factor of two. This can be explained if the gas flows are directed mostly along field lines, which is expected in regions that are magnetically dominated. This also explains why the mass-weighted average gives similar field strengths in figure 1, because there is hardly any amplification in the high-density gas.
At low densities, the simulated field strengths continue to decrease with density, showing no hint of the constant field extrapolation
Figure 1: Magnetic field strength vs. galactocentric radius in each MHD galaxy. Top row shows a mass-weighted average, and bottom row shows a volume-weighted average. Color shows the time of the snapshot from 0 (red) to 1 Gyr (pink). Black lines are data from nearby galaxies from Basu & Roy (2013).
below number densities of \(\sim 10\) cm\({}^{-3}\) suggested by, e.g. Crutcher et al. (2010).
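The median relation and its slope in figure 2 can be extracted with a simple binning of the cell data; a sketch is below (the bin count and the minimum cell number per bin are arbitrary choices).

```python
import numpy as np

def median_B_vs_n(n, B, nbins=40, min_cells=10):
    """Median |B| in logarithmic density bins plus a global power-law slope, B ~ n^slope.

    n: number density per cell [cm^-3] (assumed positive), B: field strength per cell.
    """
    edges = np.logspace(np.log10(n.min()), np.log10(n.max()), nbins + 1)
    idx = np.digitize(n, edges) - 1
    centers, medians = [], []
    for i in range(nbins):
        sel = idx == i
        if sel.sum() >= min_cells:
            centers.append(np.sqrt(edges[i] * edges[i + 1]))
            medians.append(np.median(B[sel]))
    centers, medians = np.array(centers), np.array(medians)
    slope = np.polyfit(np.log10(centers), np.log10(medians), 1)[0]
    return centers, medians, slope
```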
### Overall structure
We now proceed with an examination of the state of the galaxies after 1 Gyr of evolution. Figure 3 shows face-on visualisations of each galaxy. The top row shows the surface densities of gas in each galaxy. The excess of gas in the Medium and Strong galaxies is clearly visible. Those galaxies also appear less fragmented, with more flocculent spiral arms and fewer superbubble holes. The bottom row shows slices of the magnetic field strength in the midplane of the galactic disk. Although there is some amplification of the fields during the galaxies' evolution, the fields do not saturate at the same level, and the case with the strongest initial fields still has the strongest magnetic fields at the end of the run. The stronger magnetic fields have less structure in them, mostly because those galaxies have lower star formation rates (see section 3.3), but also because weaker fields are less dynamically important and have less resistance to being pushed around by motions of the gas. Large scale spiral structure is reflected in the magnetic fields, and the stronger field cases result in spiral morphology that is less disrupted by superbubbles. To understand this better, we need to examine the distribution of star formation.
### Star Formation
Figure 4 summarizes the star formation history of each galaxy. To ensure a fair comparison between the galaxies we restrict this analysis to gas outside of the central 2 kpc, which minimizes the differences arising from the large difference in central gas depletion between the cases. The Hydro and MHD Weak galaxies undergo starbursts with SFRs reaching up to 20 M\({}_{\odot}\) yr\({}^{-1}\). After the initial burst, the Hydro galaxy settles into a roughly constant SFR of \(\sim 2\) M\({}_{\odot}\) yr\({}^{-1}\) that declines slightly by the end due to decreasing gas content. The MHD Weak galaxy has an initially elevated SFR of 5 M\({}_{\odot}\) yr\({}^{-1}\), which continually decreases and reaches 1 M\({}_{\odot}\) yr\({}^{-1}\) by the end of the run, lower than the Hydro galaxy. The MHD Medium galaxy does initially undergo a slight starburst, but it is delayed until \(\sim\)75 Myr and is much smaller. Once it does begin to form stars it quickly reaches an SFR of 4 M\({}_{\odot}\) yr\({}^{-1}\) before decreasing even more quickly than the MHD Weak galaxy, ending up with an SFR of less than 1 M\({}_{\odot}\) yr\({}^{-1}\). The decreasing SFR in both the MHD Weak and Medium galaxies is due to the amplification of the magnetic fields: as they become stronger and more dynamically important, they limit star formation more effectively. The MHD Strong galaxy has very limited star formation, remaining around 1 M\({}_{\odot}\) yr\({}^{-1}\) or less for its entire evolution.
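The SFR history in Figure 4 is obtained directly from the star particle birth times and masses (summing the mass formed in each time bin and dividing by the bin width, excluding the central 2 kpc); a short sketch follows.

```python
import numpy as np

def sfr_history(t_birth, m_birth, r_birth, dt_bin=10.0, r_min=2.0, t_max=1000.0):
    """SFR vs. time [Msun/yr] from star particle birth times [Myr], masses [Msun]
    and birth radii [kpc], excluding particles born inside r_min."""
    keep = r_birth > r_min
    edges = np.arange(0.0, t_max + dt_bin, dt_bin)
    mass_formed, _ = np.histogram(t_birth[keep], bins=edges, weights=m_birth[keep])
    t_mid = 0.5 * (edges[1:] + edges[:-1])
    return t_mid, mass_formed / (dt_bin * 1.0e6)
```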
The differences in the remaining gas content can be seen in the top row of figure 5, which plots the surface density of gas versus galactocentric radius. The stronger the fields in the galaxy, the more gas remains, due to the different star formation history. The differences are most prominent in the galactic center but exist out to around 10 kpc, beyond which all 4 galaxies have similar gas surface densities. The second row of figure 5 shows star formation surface density as a function of radius, averaged over the last 100 Myr of the simulation. The stronger field galaxies have enhanced star formation in the central 2 kpc due to having more gas remaining at this point in time. In the outer regions, star formation only occurs out to a limited distance. It is truncated at 14 kpc in the Hydro and MHD Weak cases, at 10 kpc in the MHD Medium case, and at 8 kpc in the MHD Strong case. The truncation of star formation in the stronger field cases happens where they have higher surface densities, which are typically associated with higher star formation rates, indicating that the magnetic fields can completely shut down star formation if strong enough.
The final row in figure 5 combines the above data to make the well-known Kennicutt-Schmidt plot. The surface densities were measured at 1 kpc resolution, and then binned by radius. Star formation rate surface density is calculated using stars that formed within the last 100 Myr in each pixel to make a fair comparison to observational tracers of star formation rate. The dashed lines on the plot show lines of constant consumption times of \(10^{8}\), \(10^{9}\), and \(10^{10}\) years from top to bottom. Bigiel et al. (2008) label these using efficiencies per \(10^{8}\) yr of 100%, 10% and 1%, respectively. The Hydro and MHD Weak galaxies have most of their gas at surface densities of 1-10 M\({}_{\odot}\)/pc\({}^{2}\). They approach a consumption time of around \(2\times 10^{9}\) years at 10 M\({}_{\odot}\)/pc\({}^{2}\), with reduced star formation at lower surface densities. This behaviour is typical of the Bigiel et al. (2008) data. The stronger magnetic fields push the turn-down further to the right, with star formation being drastically reduced at low surface densities. They extend to higher surface densities towards the center of the galaxies. We note that the open points represent points with radii less than 2 kpc. The MHD Medium and Strong galaxies both have magnetic fields that are strong enough to support the gas with levels of star formation
Figure 2: Median magnetic field strength vs. gas number density over time. Black points are measured values from clouds inside the Milky Way (Crutcher et al., 2010). Amplification is visible in diffuse gas in galaxies with weaker initial fields.
that are below the observations. This result is reinforced when we explicitly examine gas support in the next section.
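The pixel-based Kennicutt-Schmidt measurement described above (1 kpc pixels, star particles younger than 100 Myr as the SFR tracer) reduces to two weighted 2D histograms; a minimal sketch is below.

```python
import numpy as np

def ks_points(x_gas, y_gas, m_gas, x_star, y_star, m_star_young, pix=1.0, extent=20.0):
    """Sigma_gas [Msun/pc^2] and Sigma_SFR [Msun/yr/kpc^2] in pix x pix kpc pixels.

    Positions are in kpc; m_star_young holds only stars formed in the last 100 Myr.
    """
    edges = np.arange(-extent, extent + pix, pix)
    gas_map, _, _ = np.histogram2d(x_gas, y_gas, bins=[edges, edges], weights=m_gas)
    sfr_map, _, _ = np.histogram2d(x_star, y_star, bins=[edges, edges], weights=m_star_young)
    sigma_gas = gas_map / (pix * 1.0e3)**2  # pixel area in pc^2
    sigma_sfr = sfr_map / 1.0e8 / pix**2    # mass formed / 100 Myr / area in kpc^2
    return sigma_gas.ravel(), sigma_sfr.ravel()
```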
### Gas Properties and Distribution
The Hydro and MHD Weak galaxies end up with higher star formation rates than the other two galaxies despite having less gas content remaining at the end. Our star formation model depends on the amount of dense gas; thus we expect to see less dense gas when there is reduced star formation. Figure 6 shows a histogram of the total gas mass vs. number density, excluding the central 2 kpc, at the final snapshot. We see progressively less star-forming gas (above the threshold number density of 100 cm\({}^{-3}\)) in the stronger field cases, as expected. The left side of the distribution is also more extended with the weaker fields. This is because of the stronger feedback and larger bubbles. The regular small bumps in the plot are a result of the refinement strategy of Ramses.
As individual superbubbles form due to supernova feedback, the magnetic field lines are dragged by the gas in the explosions, wrapping themselves around the bubbles. Figure 7 shows a typical example of such an event. This particular bubble has stopped expanding and does not end up escaping out of the disk. These explosions are a major source of turbulence in the gas and play a major role in the evolution of both the galaxy and its magnetic fields. Because magnetic fields resist being bent and compressed, they counteract the expansion of the bubbles as the field lines are dragged. Thus the field strength also affects the visual morphology by limiting the number and size of holes in the gas distribution, as seen in figure 3.
The gas volume distribution is affected more dramatically by the magnetic fields than the mass distribution. In figure 8, we plot the volume fraction of each phase of gas over time. We define the three phases as cold gas (T < 5000 K), warm gas (5000 K < T < 50000 K), and hot gas (T > 50000 K). To first order, the volume fraction is explained by the star formation; high star formation rates lead to more supernovae which create more hot gas in superbubbles. As the star formation decreases in the MHD Weak and Medium galaxies, the volume fraction of hot gas decreases correspondingly. But that is not the whole picture; the cases with magnetic fields have a lower hot volume fraction for a given star formation rate. Between 500 and 700 Myr, the MHD Weak galaxy has roughly the same star formation rate as the Hydro galaxy, but a systematically lower hot volume fraction. Similarly, when the MHD Medium galaxy peaks in star formation around t=150 Myr, which is higher than the Hydro galaxy's peak, it only achieves a maximum hot volume fraction of just over 20 percent.
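For readers wanting to reproduce this style of phase decomposition, a minimal sketch (our illustration, with hypothetical per-cell arrays of temperature and volume; only the temperature cuts come from the text) is:

```python
import numpy as np

def phase_volume_fractions(T_K, cell_volume):
    """Volume fractions of the three ISM phases defined in the text:
    cold (T < 5e3 K), warm (5e3 K <= T < 5e4 K), hot (T >= 5e4 K)."""
    v_tot = cell_volume.sum()
    cold = cell_volume[T_K < 5e3].sum() / v_tot
    warm = cell_volume[(T_K >= 5e3) & (T_K < 5e4)].sum() / v_tot
    hot = cell_volume[T_K >= 5e4].sum() / v_tot
    return cold, warm, hot

# Hypothetical cells, already restricted to a disk of radius 15 kpc and 1 kpc height
rng = np.random.default_rng(1)
T = 10.0 ** rng.uniform(2.0, 7.0, size=100_000)   # temperature [K]
V = rng.uniform(0.5, 1.5, size=100_000)           # cell volume (arbitrary units)
print(phase_volume_fractions(T, V))
```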
Figure 4: Star formation rate vs. time for each galaxy, excluding the central 2 kpc. Star formation rate is calculated by summing the mass of all stars formed within each time bin and dividing by the size of the bin. For the MHD Weak and Medium galaxies, star formation rates decline over time due to increasing field strengths.
Figure 3: Visualisations of each galaxy at 1 Gyr. The top row shows surface density projections, and the bottom row shows slices of magnetic field strength in the midplane. The Hydro case is black because the magnetic field is zero everywhere.
The stars that do form in the MHD Strong galaxy are hardly able to make any bubbles at all, with the entire galaxy's volume being dominated by warm phase gas. The MHD Medium galaxy ends up in a similar state by the end of the simulation.
This strongly suggests that the magnetic fields are limiting the growth of superbubbles, as seen in figure 7. Another possible cause of the difference in bubble volume is the clustering of stars: if the star formation is more clustered, the supernovae combine more efficiently and the bubbles grow even larger (Nath & Shchekinov, 2013; Keller et al., 2014). We have confirmed that the masses of the star clusters do not change between the four galaxies, all having similar distributions. It is commonly estimated that hot gas occupies \(\sim\) 50 % of the ISM by volume (Tielens, 2010). Only the MHD Weak and Hydro cases reflect this. The MHD Medium and Strong filling factors in figure 8 are extremely low at late times.
Figure 9 shows the mass weighted average of the height of the gas vs. radius. All of the galaxies have a value less than 100 pc in the center, increasing as the disk flares outwards. The MHD Weak galaxy has the thinnest disk, due to its reduced star formation rate. Its thickness is fairly similar to the Hydro galaxy, and they both reach 250 pc by a radius of 15 kpc. The MHD Medium and Strong galaxies are both systematically thicker, despite their drastically reduced star formation rates. The MHD Strong galaxy is the thickest, reaching a height of 400 pc. Because of the mass weighting this measure of thickness is mostly set by the high density gas, but in the galaxies with stronger fields there is less cold gas, so the increased height is largely due to warm diffuse gas which is magnetically supported. To quantify these results, we need to examine the different contributions to the supporting pressure.
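Before moving on, a sketch of the thickness measurement just described may be helpful (our illustration; the array names and binning choices are hypothetical, while the 4 kpc vertical cut and the mass weighting follow figure 9):

```python
import numpy as np

def disk_thickness_profile(r_kpc, z_kpc, mass, r_edges=np.arange(0.0, 15.5, 0.5)):
    """Mass-weighted mean |z| in radial bins, keeping gas within 4 kpc of the
    midplane (the thickness measure of figure 9). Inputs are hypothetical
    per-cell arrays of cylindrical radius, height and gas mass."""
    keep = np.abs(z_kpc) < 4.0
    r, z, m = r_kpc[keep], np.abs(z_kpc[keep]), mass[keep]
    idx = np.digitize(r, r_edges)
    r_mid, height = [], []
    for i in range(1, len(r_edges)):
        sel = idx == i
        if m[sel].sum() > 0:
            r_mid.append(0.5 * (r_edges[i - 1] + r_edges[i]))
            height.append(np.average(z[sel], weights=m[sel]))
    return np.array(r_mid), np.array(height)
```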
The differences in gas properties and star formation rate ultimately arise due to the different forces (or pressures) acting on the gas,
Figure 5: Top row: gas surface density as a function of galactocentric radius at 1 Gyr. Middle row: star formation rate surface density as a function of radius, averaged over the last 100 Myr. Bottom row: star formation rate surface density vs. gas surface density, otherwise known as the Kennicutt-Schmidt relation. Open circles indicate points within 2 kpc of the center. Diagonal lines indicate constant gas depletion times of \(10^{8}\), \(10^{9}\), and \(10^{10}\) years, as in Bigiel et al. (2008).
Figure 6: Histogram of gas number densities excluding central 2 kpc. Each bin represents the total mass at that density inside of a disk with radius 15 kpc, and 1 kpc high, normalized by the total gas mass of that disk. Magnetic fields limit the amount of star forming gas that is created by narrowing the distribution.
Figure 7: Example of a typical supernova bubble forming in the simulation. Color shows the number density of gas, with the magnetic field lines visualized by a line-integral convolution. Overplotted are the velocity of the gas (white), and newly formed star particles (blue). This particular bubble occurs at a radius of 7 kpc, at t=500 Myr in the MHD Medium galaxy. As the bubble expands, magnetic field lines are dragged with the gas, and resist expansion.
including thermal pressure, magnetic forces, turbulent motions and gravity (Benincasa et al., 2016). Galaxies that have stronger magnetic fields have reduced turbulence due to the reduced stellar feedback.
Figure 10 shows mass-weighted pressures providing vertical support within 500 pc of the disk midplane versus radius, averaged over the final 100 Myr. The curves show thermal pressure, \(P_{\rm thermal}=n\;k\;T\), magnetic pressure \(P_{\rm B}=B^{2}/8\pi\), and a turbulent pressure, estimated using \(P_{\rm turb}=\rho v_{z}^{2}\). The top row is cold gas, the middle row is warm gas, and the bottom row is hot gas, using the same definitions as above (i.e. in figure 8). In the cold gas, the MHD Weak, Medium, and Hydro galaxies have roughly the same total pressure, however with increasing magnetic field strength there is a smaller contribution from turbulence. The central regions of the MHD Medium and Strong galaxies have higher support to hold up the extra gas remaining there, and the radial extent of the cold phase is reduced. We draw attention to the difference between the Hydro and MHD Weak cases. The turbulent support is roughly half for the MHD Weak case (with the difference being made up for by magnetic support), mirroring the halving of the star formation rate at this time relative to Hydro.
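A schematic implementation of these three pressure estimates (our own sketch in cgs units, with hypothetical per-cell inputs; only the formulas \(P_{\rm thermal}=nkT\), \(P_{\rm B}=B^{2}/8\pi\) and \(P_{\rm turb}=\rho v_{z}^{2}\) come from the text) is:

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def midplane_pressures(n_cm3, T_K, rho_gcm3, B_gauss, vz_cms, mass):
    """Mass-weighted thermal, magnetic and turbulent pressures in cgs,
    following P_th = n k T, P_B = B^2 / 8 pi and P_turb = rho v_z^2.
    The inputs are hypothetical per-cell arrays, already restricted to the
    midplane region and to the temperature range of the phase of interest."""
    p_th = n_cm3 * K_B * T_K
    p_b = B_gauss**2 / (8.0 * np.pi)
    p_turb = rho_gcm3 * vz_cms**2
    w = mass / mass.sum()
    return (p_th * w).sum(), (p_b * w).sum(), (p_turb * w).sum()
```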
In the warm gas, the difference in magnetic pressure is much larger between the MHD Weak and Medium cases. The increase in magnetic pressure in warm gas is largely responsible for the increased scale height in the Medium and Strong galaxies. All pressures are in rough equipartition in the MHD Weak galaxy, but the warm phase becomes dominated by magnetic pressure in the Medium and Strong galaxies.
The hot gas is localized in high pressure superbubbles which are dominated by thermal pressure in all galaxies. The magnetic fields expand with the hot gas, resulting in the lower magnetic pressure in the hot gas. There is also a high turbulent pressure, which is likely due to high-velocity flows as opposed to small scale turbulence.
### Disk stability
We have shown that the formation of a cold phase and subsequent star formation is much more limited in the stronger field cases. Gravitational instability drives the formation of the cold phase. We can quantify this using the Toomre Q parameter for the gas,
\[Q=\frac{\kappa\sqrt{c_{s}^{2}+v_{a}^{2}+\sigma_{v}^{2}}}{\pi G\Sigma}, \tag{4}\]
which accounts for the thermal, magnetic and turbulent support discussed in the previous section. Here \(\kappa\) is the epicyclic frequency, \(c_{s}\) is the sound speed, \(v_{a}\) is the Alfven velocity, \(\sigma_{v}\) is the velocity dispersion of the gas, and \(\Sigma\) is the surface density of the gas. The literature contains several extended versions of the Toomre Q parameter (Romeo & Falstad, 2013; Kortgen et al., 2019; Nipoti, 2023), which include adjustments for the 3D structure of the disk and multiple components like the stellar disk. We note that the Agora stellar disk has quite a high velocity dispersion and thus the stellar component is quite stable and does not contribute much to the effective Q locally. The regions of low gas Q closely match up with the locations where stars form, confirming that this choice of Q is a reasonable approximation. Figure 11 shows the Toomre Q parameter for each galaxy at 1 Gyr, along with each of the individual support terms. The Alfven velocity and sound speed were calculated by taking a mass weighted average of gas within 250 pc of the midplane, and the velocity dispersion in the midplane is estimated by summing over the differences between neighbouring cells.
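As an illustration, Eq. (4) can be evaluated as follows (a sketch with hypothetical, roughly disk-like input numbers; only the formula itself is taken from the text):

```python
import numpy as np

G_CGS = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def toomre_q(kappa_epicyclic, c_s, v_a, sigma_v, surface_density):
    """Gas Toomre Q of equation (4): Q = kappa sqrt(c_s^2 + v_a^2 + sigma_v^2) / (pi G Sigma).
    All inputs in cgs: epicyclic frequency [1/s], velocities [cm/s], Sigma [g/cm^2]."""
    return (kappa_epicyclic * np.sqrt(c_s**2 + v_a**2 + sigma_v**2)
            / (np.pi * G_CGS * surface_density))

# Hypothetical numbers (not taken from the simulations)
kpc, km, Msun, pc = 3.086e21, 1e5, 1.989e33, 3.086e18
print(toomre_q(50.0 * km / kpc,        # kappa ~ 50 km/s/kpc
               8.0 * km,               # c_s ~ 8 km/s
               5.0 * km,               # v_a ~ 5 km/s
               7.0 * km,               # sigma_v ~ 7 km/s
               10.0 * Msun / pc**2))   # Sigma ~ 10 Msun/pc^2 -> Q of a few
```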
The hydro galaxy again has the highest turbulent support, which is localized around star forming regions. Conversely, velocity dispersion provides the least support in the magnetic galaxies. Those same regions also contain high temperature gas which yields high sound speeds. The prevalence of hot bubbles drops rapidly as the field strength increases from left to right in the third row of the figure. The thermal support is lowest in dense gas seen along spiral arms, where the magnetic support is highest.
The fourth row of figure 11 shows the Toomre Q parameter, with red regions indicating Q\(<1\). Much of the material with lower Q values has already collapsed, so these values below 1 are not
Figure 8: Volume fraction of each phase of gas in the ISM over time. Volume is calculated from a disk of radius 15 kpc, and 1 kpc high. The majority of the volume is split between the warm and hot phase gas. With increasing field strength the volume fraction of the warm phase gas increases, and hot phase gas decreases.
Figure 9: Mass weighted average \(|z|\) (which represents the thickness of the disk), as a function of galactocentric radius. In each radial bin, gas within 4 kpc vertically of the midplane was included. The MHD Weak galaxy has a thinner disk than the hydro case, but the more strongly magnetized galaxies are thicker.
informative. Each galaxy is unstable only in regions of high surface density along spiral arms. The galaxies with stronger magnetic fields have dramatically fewer regions that are unstable and at large radii become totally stable against collapse, explaining the locations of star formation in the fifth row. Our MHD Weak galaxy has similarly placed but smoother unstable regions when compared to the hydro galaxy. This is directly related to the addition of magnetic support shown in the top panel. The spiral features are thus smoother and more continuous, which is a general feature in the magnetized cases.
As discussed in Kortgen et al. (2019), magnetized galaxies may also be able to fragment via the Parker instability rather than gravitational instability, potentially allowing collapse in regions where \(Q>1\). This would be difficult to observe in our galaxies because stellar feedback is constantly stirring the gas and may disrupt the wavelike structure required. However, we see relatively little star formation outside of Toomre unstable regions. In addition, the timescale for these large-scale Parker instabilities tends to be long compared to turbulent crossing times.
In the outer regions of the disks, much of the gas is marginally stable (\(1\leq Q\leq 2\)). In this regime, the development of spirals is still possible due to swing amplification (Binney & Tremaine, 2008). These spirals form but do not necessarily fragment into clouds. In the MHD strong case no dense structure forms in the outer regions (see figure 3) due to the enhanced magnetic support, and the star formation is heavily centrally concentrated, as shown in the bottom row.
## 4 Discussion
We simulated four galaxies with varying magnetic field strengths in order to understand the impact on their ISM and star formation regulation. We saw clear differences between the galaxies in their star formation rates. This result arises due to the consumption of gas fuel and the growth of the magnetic field strength over time. Star formation rates can be understood via the gas support in each case. This is clearly reflected in the stability of the gas. Stronger fields are able to support the gas without feedback, particularly at large radii. Weak or absent fields must rely on turbulent support associated with star formation. Every case is internally self-consistent. However, the combination of the star formation rates, field strengths and gas morphology heavily favours the steady state generated by the MHD Weak case as the best match to observed disk galaxies.
A lack of star formation also means a lack of cold, dense gas
Figure 10: Comparison of different pressures in each galaxy. Included pressures are thermal (red), magnetic B\({}^{2}\)/8 \(\pi\) (blue), and turbulent \(\rho v_{z}^{2}\) (yellow). Each pressure is calculated by taking a mass weighted average of gas within 1 kpc vertically in each radial bin. The pressures are calculated for three separate phases, using only the gas that falls within each temperature range: cold (T < 5000 K), warm (5000 K < T < 50000 K), and hot (T > 50000 K).
precursors for star formation, which is reflected in the ISM gas phase results. This is tied to the Schmidt-law star formation model, which always forms stars at a fixed efficiency per free-fall time once gas is above a density threshold. While this is a standard model choice, it is important to recognize that the actual efficiency of star formation in dense gas is the subject of vigorous theoretical and observational study. In particular, the presence of unresolved magnetic fields and turbulence could affect this efficiency in real galaxies, whereas the efficiency is fixed here.
It is difficult to compare simulated field strengths to observed values. We argue that volume weighted fields are similar to observed synchrotron estimates. In particular, the MHD Weak case creates a flat radial profile. While this is compelling relative to the other cases, mock observations would provide more detailed insights (Ponnada et al., 2022, 2023).
Figure 11: Toomre Q parameter and the three support terms included in it as shown in Equation 4. Top row: Alfvén velocity. Second row: velocity dispersion. Third row: sound speed. Fourth row: Toomre Q parameter. Red regions show Q\(<\)1, which are unstable to collapse. Q values of 1-2 are unstable to forming spirals. Fifth row: Surface density of star formation using stars formed within the last 100 Myr.
The MHD Weak galaxy was the only one that reproduced the steep power law increase in field strength at high densities inferred from Zeeman observations (Crutcher et al., 2010). Our ideal MHD simulated field strengths should be upper limits, as turbulent ambipolar diffusion will reduce the field at high densities (Heitsch et al., 2004). Our simulations also show no evidence for the commonly inferred constant field strength in lower density gas. This behaviour has been observed by several independent simulation groups (Kortgen et al., 2019; Rieder and Teyssier, 2017; Ponnada et al., 2022). Thus there is a tension between observations and theory on this point.
It is tempting to ask which of the four galaxies are the most realistic. Magnetic fields certainly exist in galaxies, ruling out the hydro galaxy. Similarly, the MHD Strong galaxy had initial fields above what we infer from observational constraints. Both it and the MHD medium case failed to match most observed properties for the field itself, gas morphologies and star formation.
The MHD Weak case did well on all these measures. We expect that any case starting with a sufficiently high plasma beta (\(\gtrsim 100\)) would evolve to a saturated state similar to the MHD Weak case. The main difference would be the time it takes to do so. The key requirement is that the fields are allowed to amplify naturally to the point of saturation.
We note that the MHD Weak galaxy achieves similar star formation rates and gas distributions to the Hydro galaxy at intermediate times. Eventually, the gas support includes a magnetic component similar to the turbulence and the star formation rate dips below the hydro case. In future work, we will explore the morphology of the field and how it is linked to its ability to support the gas in place of stellar feedback. This may be related to the split between turbulent and mean fields. It would also be worth testing different morphologies in the initial condition. We started with a purely toroidal field which could possibly bias the final field configuration.
This work does not account for cosmic rays, which could contribute significantly to gas support in principle (e.g. Semenov et al., 2021). We note that there is a lot of uncertainty regarding cosmic rays coupling to the gas and how best to model them numerically. While the current work uses simple approaches to isolate key effects, cosmic rays would be an interesting future direction.
We also do not account for non-ideal MHD effects. Ideal MHD is a good approximation for most of the ISM. Effects such as ambipolar diffusion may become important in high density star forming regions. We anticipate that these effects are comparable to the impact of the magnetic field on small-scale star formation generally, which is not addressed in our simple star formation prescription. We also avoided more elaborate feedback models. Complex feedback models would also be affected by unresolved turbulence and magnetic fields. Though we chose simple feedback by design to keep interpretation simple, it is worth keeping in mind that factors like star formation efficiency could be different in different cases (e.g. with stronger magnetic fields).
Studying the properties of star forming regions and molecular cloud analogues is a natural extension of this work to smaller scales. Magnetic fields may affect the shapes and sizes of star forming clouds, which could also affect small-scale star formation. To do this, we need higher resolution simulations. We have performed zoom-in simulations, beginning with the galactic setup in this work, seeking to characterize the effect of galactic-scale dynamics on star forming regions (Zhao et al., 2023, in prep).
Another population of interest within galaxies is that of superbubbles. JWST observations have made it possible to perform a detailed census of superbubbles in nearby galaxies (Watkins et al., 2023). Our results show that magnetic fields dramatically affect the evolution of superbubbles. It would be interesting to characterize their populations in these and higher resolution simulations as a further constraint on how well the simulated magnetic fields match those in real galaxies.
## 5 Conclusions
We summarize our conclusions as follows:
* We have demonstrated that evolving a simulated galaxy towards a realistic self-regulating ISM and correspondingly realistic magnetic fields is achievable with an initially weak toroidal field and simple models for star formation and feedback. Conversely, starting with field strengths typical for the ISM does not produce realistic end states.
* Stronger magnetic fields generally reduce a galaxy's star formation. As magnetic fields amplify over time and their strengths increase, star formation rates will decrease. The field strengths are also linked to the star formation and feedback and provide an additional form of self-regulation in galaxies.
* In our isolated galaxies, dynamo amplification mainly occurs in the warm diffuse medium of the ISM, from number densities of 0.1 to 10 cm\({}^{-3}\). The galactic magnetic fields first saturate at high densities but continue to grow at intermediate densities over longer timescales. Starting with high initial fields may lock the galaxy into unrealistic field configurations.
* Stronger magnetic fields generally result in reduced turbulence. This is due to both the reduced supernovae feedback, but also from magnetic fields limiting superbubble growth.
* Magnetic fields play an important role in the vertical pressure support in galaxies, reaching equipartition levels with turbulence in the cold phase, and with both thermal pressure and turbulence in the warm phase. This adds an extra component to the established pressure regulation picture (Ostriker et al., 2010; Benincasa et al., 2016).
* Magnetic fields change the distribution of unstable regions in the disk, which are well characterized with a modified gas Toomre Q. Strongly magnetized galaxies can be completely stabilized against collapse for much of the disk.
* Strong magnetic fields can dramatically reduce the size of supernova bubbles. This can be seen visually in the simulated galaxies, and by the reduced volume fraction of hot gas for a given star formation rate.
## Acknowledgements
The authors would like to thank Ralph Pudritz for many useful discussions. Hector Robinson is supported by an NSERC postgraduate scholarship, and James Wadsley is supported by Discovery Grants from NSERC of Canada. Computational resources for this project were enabled by a grant to James Wadsley from Compute Canada/Digital Alliance Canada and carried out on the Niagara Supercomputer.
## Data Availability
The data used in this article will be shared upon reasonable request to the corresponding author.
2303.02600 | Stopping to Reflect: Asymptotic Static Moving Mirrors as Quantum Analogs
of Classical Radiation | Radiation from an accelerating charge is a basic process that can serve as an
intersection between classical and quantum physics. We present two exactly
soluble electron trajectories that permit analysis of the radiation emitted,
exploring its time evolution and spectrum by analogy with the moving mirror
model of the dynamic Casimir effect. These classical solutions are finite
energy, rectilinear (nonperiodic), asymptotically zero velocity worldlines with
corresponding quantum analog beta Bogolyubov coefficients. One of them has an
interesting connection to uniform acceleration and Leonardo da Vinci's water
pitcher experiment. | Michael R. R. Good, Eric V. Linder | 2023-03-05T07:58:18Z | http://arxiv.org/abs/2303.02600v1 | # Stopping to Reflect: Asymptotic Static Moving Mirrors as Quantum Analogs of Classical Radiation
###### Abstract
Radiation from an accelerating charge is a basic process that can serve as an intersection between classical and quantum physics. We present two exactly soluble electron trajectories that permit analysis of the radiation emitted, exploring its time evolution and spectrum by analogy with the moving mirror model of the dynamic Casimir effect. These classical solutions are finite energy, rectilinear (nonperiodic), asymptotically zero velocity worldlines with corresponding quantum analog beta Bogolyubov coefficients. One of them has an interesting connection to uniform acceleration and Leonardo da Vinci's water pitcher experiment.
**Keywords:** moving mirrors, black hole evaporation, acceleration radiation, Larmor power, point charge. **PACS:** 41.60.-m (Radiation by moving charges), 04.70.Dy (Quantum aspects of black holes)
## I Introduction
The mechanism of particle creation proposed by Hawking [1], whereby the gravitational field of a collapsing star in curved spacetime amplifies vacuum fluctuations into particle emission, bears striking resemblance to the radiation of particles from a perfect mirror in flat spacetime accelerated through the vacuum [2; 3; 4]. Particles of a massless quantum scalar field in \(1+1\) dimensions [5; 6] are created due to the acceleration of the mirror, which is an ideal point and boundary condition on the field [7; 8; 9; 10; 11; 12; 13], essentially a dynamical Casimir effect [14]. In this study, we demonstrate a functional duality and analog to an accelerated point charge in ordinary 3+1 spacetime and its non-thermal radiation spectrum, revealing the particle creation correspondence.
Accelerating point charge radiation has been a subject of interest in physics for over a century [15], and it is of particular interest as a simple example of non-thermal radiation. Nonthermal radiation is ubiquitous in astrophysical phenomena, for example, and the particle number and angular spectral distribution may not be apparent. Furthermore, even evaporating black holes might emit non-thermal radiation, e.g. the recent [16]. Therefore a concrete relation between accelerated particle non-thermal radiation and the moving mirror "slicing" of the vacuum [17; 18; 19; 20], especially in light of the well-established correspondence between moving mirrors and black hole horizons, is of interest.
The discovery of a clear association (generalized to non-thermal emissions) between the radiation from an electron and from a moving mirror became apparent via radiation reaction derived by Ford and Vilenkin in 1982 [8]. In 1995, Nikishov and Ritus [21] established a formal link through particle count, which further strengthened this connection. Ritus [22; 23; 24; 25] later provided additional development on the Bogolyubov-current association. The relationship was next confirmed via Larmor power in Zhakenuly et al [26]. One of the present authors has exploited the electron-mirror connection using explicit solutions; for instance, the connection between radiation power loss and kinetic power loss for an electron approaching the speed of light was demonstrated in [27], and in [28] an electron was treated as a mirror for a trajectory that asymptotically approaches a constant velocity. This article focuses on the interesting results for the electron-mirror relation for trajectories that come to a complete stop, giving finite energy, finite particle creation, and unitary evolution.
In Sec. II, we review some elements of acceleration radiation for relativistic moving point charges, including Larmor power, Feynman power, and their connection to total energy emitted. We present the spectra for two different motions of point charges and the quantum analogs that have desirable properties and analytic Bogolyubov coefficients in Sec. III and Sec. IV. In Sec. V we show the general correspondence between the classical bremsstrahlung and dynamical Casimir effect in energy, particle count, and spectral distribution. We summarize and discuss further areas for study in Sec. VI.
## II Acceleration Radiation Elements
In this section, we set up the various elements needed to compute the radiated power, energy, and spectral distribution of both an accelerating charge and from a moving mirror dynamical Casimir effect. Throughout we use natural units, \(\hbar=c=\mu_{0}=\epsilon_{0}=1\), thus \(e^{2}=4\pi\alpha_{\rm fs}\) where \(\alpha_{\rm fs}\) is the fine structure constant. However, for simplicity, when exclusively in the context of classical
electrodynamics we switch units and employ unit charge \(e=1\) (\(\hbar=1/4\pi\alpha_{\rm fs}\)).
### Power and Force
In classical electrodynamics [29], the power radiated and the radiation reaction force,
\[P=\frac{\alpha^{2}}{6\pi}\,\qquad F=\frac{\alpha^{\prime}(\tau)}{6\pi}\, \tag{1}\]
are given by the relativistically covariant Larmor formula and the (magnitude of the) Lorentz-Abraham-Dirac (LAD) force. Here \(\alpha\) is the proper acceleration, and the prime is a derivative with respect to the argument, in this case proper time \(\tau\).
### Energy Integrals
When the charged particle accelerates, energy is radiated, with the total energy found by integrating over coordinate time. That is, for particle velocity \(v(t)\) the integrals
\[E=\int_{-\infty}^{\infty}P\,{\rm d}t=-\int_{-\infty}^{\infty}F\cdot v\,{\rm d}t, \tag{2}\]
demonstrate that the Larmor power, \(P=\alpha^{2}/6\pi\), and what we call the 'Feynman power' [30], \(F\cdot v\), associated with the self-force (radiation reaction force), directly tell an observer the total energy emitted by a point charge along its time-like worldline. The total energy is finite as long as the proper acceleration is asymptotically zero; that is, the worldline must possess asymptotic inertia. We restrict ourselves to this case.
The negative sign demonstrates that the total work against the LAD force represents the total energy loss. That is, the total energy loss from radiation resistance due to Feynman power must equal the total energy radiated by Larmor power. We will demonstrate that the Larmor and Feynman powers themselves - the integrands - are not the same. Separately, it is a subtle matter that these powers are not applicable for asymptotically _non-inertial_ rectilinear trajectories (which we do not consider here); see e.g. [31; 27].
A third expression for the total energy can be employed to establish a link to quantum physics and verify consistency. This spectral consistency integrates over spectral modes,
\[E=\int_{0}^{\infty}\int_{0}^{\infty}p\,|\beta_{pq}|^{2}\,{\rm d}p\,{\rm d}q\, \tag{3}\]
using the quantum analog moving mirror model, generalized to 3+1 dimensions using both sides of the 1+1 dimensional moving mirror, see e.g. [26; 21].
The quantity \(\beta_{pq}\) is the beta Bogolyubov coefficient related to the creation/annihilation operators and \(p\) and \(q\) are the out-going and in-going frequencies, respectively, that describe the modes used to expand the field subject to the accelerating boundary.
### Spectral Distribution
The spectral distribution [32] of the total radiation energy \(E\) with respect to frequency \(\omega\) and solid angle \(\Omega\) is
\[\frac{{\rm d}I(\omega)}{{\rm d}\Omega}\coloneqq\frac{{\rm d}^{2}E}{{\rm d} \omega\,{\rm d}\Omega}\, \tag{4}\]
see also [29]. For the radiation of a moving point charge (in natural units with unit charge - see e.g. Eq. 23.89 on page 911 of Zangwill [33] in SI units or Eq. 14.67 on page 701 of Jackson [29] in Gaussian units) this is given by the motion as
\[\frac{{\rm d}I(\omega)}{{\rm d}\Omega}=\frac{\omega^{2}}{16\pi^{3}}\ \Bigg{|}\ \mathbf{\hat{n}}\times\int_{-\infty}^{\infty}dt\,\dot{\mathbf{r}}(t)e^{i\phi}\ \Bigg{|}^{2}. \tag{5}\]
Here \(\omega\) is the frequency, \(\mathbf{k}=\omega\mathbf{\hat{n}}\) the wave vector, \({\rm d}\Omega\) the solid angle, \(\mathbf{r}\) the charge trajectory with velocity vector \(\dot{\mathbf{r}}\), and \(\phi=\omega t-\mathbf{k}\cdot\mathbf{r}(t)\). Defining \(\mathbf{\hat{n}}\cdot\mathbf{\hat{r}}=\cos\theta\) and assuming straight line motion, we have
\[\frac{{\rm d}I(\omega)}{{\rm d}\Omega}=\frac{\omega^{2}}{16\pi^{3}}\,\sin^{2} \theta\ \Bigg{|}\ \int_{-\infty}^{\infty}dt\,\dot{x}(t)e^{i\phi}\ \Bigg{|}^{2}. \tag{6}\]
Integrating this over solid angle \(d\Omega=\sin\theta d\theta d\varphi\) and frequency \(\omega\) will yield the total energy emitted.
We can also interpret the trajectory as not that of a point charge but an accelerating mirror (boundary) and compare the horizon radiation from this dynamical Casimir effect. Thus we can test that the classical energy emitted agrees with the quantum result from the Bogolyubov creation/annihilation coefficients, and also, contrast the Larmor and Feynman powers. This further provides a way to derive the spectrum angular distribution for particle production from a moving mirror trajectory.
### Asymptotic Rest
To pursue an understanding of the spectrum angular dependence for the quantum analog, we consider moving mirror trajectories that deliver finite total energy and particle count (ensuring all integrals are convergent). Asymptotically inertial mirrors have finite total energy, while mirrors that also are asymptotically static (eventually coming to rest with zero velocity) have finite particle count, entropy, and have unitary evolution (seen geometrically since all light rays reflect off the mirror and none
are lost). Therefore we consider only cases with asymptotic rest.
The following list summarizes the only known trajectories possessing asymptotic rest with solved Bogolyubov coefficients.
* Walker-Davies [34]: but noninvertible \(t(x)\).
* Arctx [35]: but nonfunctional particle count.
* **Self-Dual**[36]: time symmetric.
* **betaK**[37]: time antisymmetric.
* Schwarzschild-Planck [38; 39] (also see [40]): fully evaporating black hole with unitarity.
None of these have previously had published solutions for the beta Bogolyubov coefficients using both mirror sides to obtain the 3+1 D analog (and hence classical particle motion). In the next two sections we present solutions for the two boldface trajectories - in particular as examples of time-symmetric vs antisymmetric motion, and the associated spectral distributions.
## III Self-dual trajectory
The self-dual mirror trajectory [36]
\[x(t)=\frac{-v}{\kappa}\,\ln(\kappa^{2}t^{2}+1)\, \tag{7}\]
is even in time, and the self-dual nature means that the particle emission spectrum is equal on both sides of the mirror. The quantity \(v\) is the maximum speed of the mirror, occurring at \(\kappa t=1\). The quantity \(\kappa\) sets the scale of the acceleration (and the surface gravity of the black hole analog in the accelerating boundary correspondence).
The analog Larmor power radiated is
\[P_{L}=\frac{2\kappa^{2}v^{2}\left(\kappa^{4}t^{4}-1\right)^{2}}{3\pi\left[ \left(\kappa^{2}t^{2}+1\right)^{2}-4\kappa^{2}t^{2}v^{2}\right]^{3}}. \tag{8}\]
As expected, no power is radiated by a stationary particle, \(v=0\), and none at the moment of maximum velocity when the acceleration is zero (i.e. when \(\kappa t=1\), as well as at asymptotically early and late times).
The Feynman force can be similarly calculated analytically but the expression is long. Figure 1 plots the Larmor and Feynman powers vs time. The Larmor power is of course always positive, while the Feynman power from the radiation reaction force can be both positive and negative. The Feynman power crosses zero at maxima of the Larmor power. Both types of power asymptotically vanish rapidly.
Integrating over all time, Eq. (2), the total energy emitted is
\[E=\frac{\kappa}{24}\gamma v^{2}\left(\gamma^{2}+3\right)\, \tag{9}\]
where \(\gamma=(1-v^{2})^{-1/2}\) is the Lorentz factor. Figure 2 plots the total energy as a function of the maximum velocity. As the velocity approaches the speed of light, the Lorentz factor greatly increases the energy emitted.
For the Bogolyubov spectrum as found from the double-sided moving mirror, the result (see e.g. [35] for the details of the steps) is
\[|\beta_{pq}|^{2}=\frac{16vpq}{\pi^{2}\kappa^{2}\sigma\omega}\,\sinh\left( \frac{\pi v\sigma}{\kappa}\right)\,\left|K_{\frac{1}{2}+iv\frac{\sigma}{\kappa}}\left(\frac{\omega}{\kappa}\right)\right|^{2}\, \tag{10}\]
where \(\sigma=p-q\) and \(\omega=p+q\). The particle spectrum \(N_{p}=\int dq\,|\beta_{pq}|^{2}\) is non-thermal, and has finite particle production, as seen in Figure 3.
For the spectral (angular) distribution, we use the self
Figure 1: The Larmor and Feynman powers for the self-dual trajectory are plotted vs time, with \(v=0.9\). A higher maximum velocity squeezes and heightens the peaks for both powers. The Feynman power plotted is \(P_{F}=-F\cdot v\) so that the total area under the curve is positive, \(E=\int P_{F}\,\mathrm{d}t\), see Eq. (2). Note the integrals under the curves are equal, giving the total energy radiated, Eq. (9).
Figure 2: The total energy as a function of maximum velocity parameter is plotted for the self-dual trajectory (Eq. 9) and the betaK trajectory (Eq. 19).
dual trajectory in Eq. (6), giving
\[\frac{\mathrm{d}I}{\mathrm{d}\Omega}=\frac{v\omega^{2}}{\kappa^{2}\pi^{3}}\frac{1- T^{2}}{2T}\sinh\left(\frac{\pi vT\omega}{\kappa}\right)\left|K_{\frac{1}{2}+ \frac{ivT\omega}{\kappa}}\left(\frac{\omega}{\kappa}\right)\right|^{2}\, \tag{11}\]
where \(T\equiv\cos\theta\). Some details of the derivation are given in Appendix A. Note the similarity to the form of the beta Bogolyubov coefficients, but with added angular dependence (see the next subsection for further discussion).
Figures 4 and 5 plot the spectral distribution in a 3D view. Notice there is no radiation in the forward or backward \(T\to\pm 1\) (\(\theta\to[0,\pi]\)) directions. This is expected of straight-line bremsstrahlung [41]. The spectral distribution in the \(T\to 0\) (\(\theta\to\pi/2\)) limit is:
\[\lim_{T\to 0}\,\frac{\mathrm{d}I}{\mathrm{d}\Omega}=\frac{v^{2}\omega^{2}}{4 \pi\kappa^{2}}e^{-2\omega/\kappa}\,, \tag{12}\]
which demonstrates a radiation allotment in directions perpendicular to the motion that is exponentially suppressed at high frequencies. The spectrum, \(I(\omega)\), can be numerically found by integrating the spectral distribution, Eq. (11), over solid angle. See Figure 6 for an illustration.
The spectral distribution can be directly integrated over solid angle and frequency to obtain the total energy
\[E =\int_{0}^{\infty}\mathrm{d}\omega\int_{-1}^{1}\mathrm{d}T\int_{0 }^{2\pi}\mathrm{d}\varphi\,\,\frac{\mathrm{d}I}{\mathrm{d}\Omega} \tag{13}\] \[=\frac{\kappa}{24}\,\gamma v^{2}\left(\gamma^{2}+3\right). \tag{14}\]
This indeed agrees with Eq. (9).
## IV Betak trajectory
The betaK trajectory [37]
\[x(t)=\frac{-v_{0}}{\kappa}\,\sinh^{-1}\kappa t\, \tag{15}\]
Figure 4: 3D view of the radiated spectrum angular distribution \(\mathrm{d}I/\,\mathrm{d}\Omega\) from motion corresponding to the self dual trajectory. Here we use unit charge, natural units, and \(\omega=\kappa=1\). The maximum speed of the charge is \(v=0.95\). Note the expected property of zero radiation directly in the forward direction.
Figure 5: As Figure 4 but for \(\omega=4\), \(\kappa=1\), showing the high-frequency exponential suppression.
Figure 3: A plot of particle spectrum \(N(p)\) from the mirrors. This is the particle count as a function of the outgoing mirror mode frequency, \(p\). Here the maximum velocity of each mirror is \(v=v_{0}=0.9\).
Figure 6: A plot of energy spectrum \(I(\omega)\), which numerically integrates the spectral distributions for the self-dual, Eq. (11), and betaK, Eq. (21), cases over solid angle \(\Omega\). The vertical axis has been multiplied by \(10^{3}\) for readability. Here the maximum velocity of each case is \(v=v_{0}=0.9\).
by contrast is odd in time, and gives more tractable solutions than the Walker-Davies or Arctx models. Furthermore it has an interesting relation to uniform acceleration in 3+1 D (though not in the 1+1 D mirror case)1. Its name arises because this trajectory has exactly solvable beta Bogolyubov coefficients involving a modified Bessel function \(K\) in the moving mirror model, giving finite energy and finite particle production.
Footnote 1: We thank Ahmad Shariati for pointing this out.
This trajectory equation arises as well for a particle shot horizontally from the origin with an initial velocity \(v_{0}\) (which is also the maximum velocity) encountering a constant vertical acceleration. Indeed, this is similar to the recently rediscovered "Leonardo da Vinci's water pitcher" that moves horizontally at constant speed \(v\) spilling water in a uniform gravitational field [42] - but here we consider relativistic speeds. The derivation appears in Appendix B.
Note that in the relativistic case, despite no horizontal force the particles (water drops) do not have constant horizontal velocity: due to the coupling of horizontal and vertical motions through the Lorentz factor a horizontal acceleration is induced as made clear in Appendix B.
The Larmor power radiated by a charge with the betaK trajectory is
\[P_{L}=\frac{\alpha^{2}}{6\pi}=\frac{\kappa^{2}}{6\pi}\gamma^{6}\left(v_{0}^{2} -V^{2}\right)\frac{V^{4}}{v_{0}^{4}}\, \tag{16}\]
where the velocity is
\[V(t)\equiv\dot{x}(t)=\frac{-v_{0}}{\sqrt{\kappa^{2}t^{2}+1}}. \tag{17}\]
The speed \(|V|\leq|v_{0}|\) so the power always remains non-negative. For this time antisymmetric trajectory, the power has only one maximum on each side of \(t=0\) and no zeros for finite \(t\neq 0\). The Feynman power is
\[P_{F}=\frac{\alpha^{2}}{6\pi}\,\left[2-\frac{V^{2}(1-v_{0}^{2})}{v_{0}^{2}-V^{ 2}}\right]. \tag{18}\]
The total energy, using Eq. (2), is
\[E=\frac{\kappa}{48}\gamma_{0}^{3}v_{0}^{2}. \tag{19}\]
See Figure 2 for the energy and Figure 7 for the Larmor and Feynman powers.
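A quick numerical cross-check (ours) of Eq. (19) against the time integral of the Larmor power of Eqs. (16)-(17):

```python
import numpy as np
from scipy.integrate import quad

kappa, v0 = 1.0, 0.9
gamma0 = 1.0 / np.sqrt(1.0 - v0**2)

def larmor_power(t):
    """Eq. (16) evaluated with the velocity V(t) of Eq. (17) (natural units, e = 1)."""
    V = -v0 / np.sqrt(1.0 + (kappa * t)**2)
    gamma = 1.0 / np.sqrt(1.0 - V**2)
    return kappa**2 / (6.0 * np.pi) * gamma**6 * (v0**2 - V**2) * V**4 / v0**4

E_numeric, _ = quad(larmor_power, -np.inf, np.inf)
E_closed = kappa * gamma0**3 * v0**2 / 48.0   # Eq. (19)
print(E_numeric, E_closed)   # the two values agree
```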
The Bogolyubov spectrum as found from the double-sided moving mirror is
\[|\beta_{pq}|^{2}=\frac{8v_{0}^{2}pq}{\pi^{2}\kappa^{2}\omega^{2}}\cosh\left( \pi v_{0}\frac{\sigma}{\kappa}\right)\left|K_{iv_{0}\frac{\sigma}{\kappa}} \left(\frac{\omega}{\kappa}\right)\right|^{2}\, \tag{20}\]
where \(\sigma=p-q\) and \(\omega=p+q\). This spectrum is not thermal. Note the similarities, but also subtle differences with the self-dual case, Eq. (10). The energy is confirmed by associating a quantum \(\hbar p\) (where \(p\) is the outgoing frequency mode) and integrating using Eq. (3), which yields Eq. (19). The particle spectrum \(N_{p}=\int dq\,|\beta_{pq}|^{2}\) is shown in Figure 3.
Using the betaK trajectory within classical electrodynamics [29], we find the spectral distribution,
\[\frac{\mathrm{d}I}{\mathrm{d}\Omega}=\frac{v_{0}^{2}\omega^{2}}{4\kappa^{2} \pi^{3}}(1-T^{2})\cosh\left(\pi v_{0}T\frac{\omega}{\kappa}\right)\left|K_{iv _{0}T\frac{\omega}{\kappa}}\left(\frac{\omega}{\kappa}\right)\right|^{2}. \tag{21}\]
where \(T=\cos\theta\). Again a relation between the classical spectral distribution and quantum beta Bogolyubov coefficient is apparent; we address this in Section V.
The energy spectrum \(I(\omega)\) is shown in Figure 6. Integration of Eq. (21) over \(\mathrm{d}\omega\,\mathrm{d}\Omega\) agrees with the total energy of Eq. (19). Like the self-dual case, there is no radiation in the forward or backward \(T\rightarrow\pm 1\) (\(\theta\rightarrow[0,\pi]\)) directions, as expected. See Figure 8 for a 3D view of the spectral distribution. The spectral distribution in the \(T\to 0\) (\(\theta\rightarrow\pi/2\)) limit is:
\[\lim_{T\to 0}\,\frac{\mathrm{d}I}{\mathrm{d}\Omega} = \frac{v_{0}^{2}\omega^{2}}{4\pi^{3}\kappa^{2}}\,\left[K_{0}\left( \frac{\omega}{\kappa}\right)\right]^{2} \tag{22}\] \[\approx \frac{v_{0}^{2}\omega}{8\pi^{2}\kappa}\,e^{-\frac{2\omega}{ \kappa}}\, \tag{23}\]
again showing the high-frequency exponential suppression, where in the second line we have expanded around large \(\omega/\kappa\).
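The asymptotic form of Eq. (23) can be checked against Eq. (22) numerically; a small sketch (ours) using scipy's real-order modified Bessel function:

```python
import numpy as np
from scipy.special import k0

v0, kappa = 0.9, 1.0
omega = np.array([2.0, 4.0, 8.0])
exact = v0**2 * omega**2 / (4 * np.pi**3 * kappa**2) * k0(omega / kappa)**2   # Eq. (22)
asym = v0**2 * omega / (8 * np.pi**2 * kappa) * np.exp(-2 * omega / kappa)    # Eq. (23)
print(exact / asym)   # ratio approaches 1 as omega/kappa grows
```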
The betaK trajectory is well-motivated, physically intuitive, and potentially realizable in the laboratory as it is straightforwardly the horizontal component of an electron's motion subject to an initial horizontal velocity and constant vertical force. In the following section, we use betaK's analytic tractability to help confirm the duality between the classical point charge and the quantum moving mirror.
Figure 7: The Larmor and Feynman powers for the betaK trajectory are plotted vs time, with \(v_{0}=0.9\). Like the self-dual trajectory, a higher \(v_{0}\) narrows and heightens the peaks for both powers. For illustration, the Feynman power plotted is \(P_{F}=-F\cdot v\) so that the total area under the curve is positive. The areas under the curves are equal, giving the total energy radiated, Eq. (19).
## V Classical-quantum correspondence
We have seen that at the level of total energy there is agreement between the charge radiation approach and the moving mirror Bogolyubov coefficient approach,
\[E=\int_{0}^{\infty}\mathrm{d}\omega\int_{-1}^{1}\mathrm{d}T\int_{0}^{2\pi} \mathrm{d}\varphi\ \frac{\mathrm{d}I}{\mathrm{d}\Omega}\Leftrightarrow\int_{0}^{\infty}\int_{0}^{ \infty}p\,|\beta_{pq}|^{2}\,\mathrm{d}p\,\mathrm{d}q\,. \tag{24}\]
We can further see that the agreement extends to the particle count,
\[N=\int\frac{1}{\omega}\frac{\mathrm{d}I}{\mathrm{d}\Omega}\,\mathrm{d}\Omega \,\mathrm{d}\omega\Leftrightarrow\frac{1}{2}\int\int|\beta_{pq}|^{2}\,\mathrm{ d}p\,\mathrm{d}q\,. \tag{25}\]
The factor \(1/\omega\) converts particle energy to particle number, and the factor \(1/2\) arises because while both sides of the mirror are employed in the correspondence, an observer could only see one side. See Figure 9 for an illustration of particle count.
As mentioned in Section III and Section IV, the connection persists directly at the level of the integrands, i.e. between the spectral distribution and the beta Bogolyubov coefficient. The steps to obtain the exact relation are as follows. First, the Jacobian going from \(\{p,q\}\) coordinates to \(\{\omega,T\}\) coordinates is \(\omega/2\). Recall that \(\mathrm{d}\Omega=\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi\) and that \(\mathrm{d}T\equiv\mathrm{d}(\cos\theta)=\sin\theta\,\mathrm{d}\theta\) and the \(\mathrm{d}\varphi\) integral simply contributes \(2\pi\). Finally, the parity is reversed on opposite sides of the mirror so that one side is related to the other by \(T\leftrightarrow-T\), so we write
\[\int_{-1}^{+1}dT\,\frac{dI}{d\Omega}=\frac{1}{2}\left[\int_{-1}^{+1}dT\,\frac {dI(T)}{d\Omega}+\int_{-1}^{+1}dT\,\frac{dI(-T)}{d\Omega}\right]\,\,. \tag{26}\]
Putting all the elements together delivers the correspondence
\[|\beta_{pq}|^{2}\ \leftarrow\ \frac{4\pi}{\omega^{2}}\left[\frac{\mathrm{d}I}{ \mathrm{d}\Omega}(\omega,\cos\theta)+\frac{\mathrm{d}I}{\mathrm{d}\Omega}( \omega,-\cos\theta)\right]\,\,. \tag{27}\]
This can be verified directly for the solutions given for the two trajectories. Note that the correspondence formally goes in only one direction, from charge radiation to moving mirror, as the beta Bogolyubov coefficient has no angular information on the in-going and out-going modes. Only once we introduce an angle \(\theta\) such that \(p=\omega(1+\cos\theta)/2\) and \(q=\omega(1-\cos\theta)/2\), hence \(p+q=\omega\) and \(\sigma\equiv p-q=\omega\cos\theta\), can we go the other way.
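The correspondence can indeed be checked numerically; the sketch below (ours) evaluates both sides of Eq. (27) for the betaK case at an arbitrarily chosen pair of mode frequencies, using mpmath for the complex-order modified Bessel function appearing in Eqs. (20)-(21):

```python
import numpy as np
from mpmath import besselk

kappa, v0 = 1.0, 0.9

def K2(nu_imag, x):
    """|K_{i nu}(x)|^2 evaluated with mpmath (complex order, real argument)."""
    return float(abs(besselk(1j * nu_imag, x))) ** 2

def beta2(p, q):
    """|beta_pq|^2 of Eq. (20) for the betaK mirror (both sides)."""
    sigma, omega = p - q, p + q
    return (8 * v0**2 * p * q / (np.pi**2 * kappa**2 * omega**2)
            * np.cosh(np.pi * v0 * sigma / kappa) * K2(v0 * sigma / kappa, omega / kappa))

def dI_dOmega(omega, T):
    """Classical spectral distribution of Eq. (21) for the betaK point charge."""
    return (v0**2 * omega**2 / (4 * kappa**2 * np.pi**3) * (1 - T**2)
            * np.cosh(np.pi * v0 * T * omega / kappa) * K2(v0 * T * omega / kappa, omega / kappa))

# Check the correspondence of Eq. (27) at one (arbitrarily chosen) pair of frequencies
p, q = 1.3, 0.4
omega, T = p + q, (p - q) / (p + q)
lhs = beta2(p, q)
rhs = 4 * np.pi / omega**2 * (dI_dOmega(omega, T) + dI_dOmega(omega, -T))
print(lhs, rhs)   # the two numbers agree
```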
Such a classical-quantum correspondence is very useful, but we emphasize that it does not capture all quantum effects. While the particle production can be computed classically, this neglects quantum effects when the radiation (photon) energy becomes comparable to the particle (electron) energy, e.g. the radiation wavelength is smaller than the charge de Broglie wavelength.
## VI Conclusions
We have solved for the accelerating point charge radiation - its energy, particle count, and spectral angular distribution - of two trajectories that asymptotically come to complete stop, compatible with finite total particle emission. As Feynman [30] has emphasized,
_Larmor's power is only valid for cyclic motions, or at least motions which do not grow forever in time._
The betaK and Self-Dual trajectories fulfill that condition, and these two solutions inspired by the accelerating boundary (moving mirror) analog are the only known rectilinear solutions with exactly soluble spectra, finite energy, and finite particle count. This allows comparison of classical and quantum systems directly.
The main results presented include:
* We have found the time dependence of radiative solutions. One utility of an exact solution for moving point charge radiation is that in QED, time-dependent computations are notoriously difficult.
Figure 8: 3D view of the radiated spectrum angular distribution \(\mathrm{d}I/\,\mathrm{d}\Omega\) from motion corresponding to the betaK trajectory. Here we use unit charge, natural units, and \(\omega=\kappa=1\). The maximum speed of the charge is \(v_{0}=0.95\).
Figure 9: A plot of total finite particle count of the radiation particles created by the two mirrors, using Eq. (25), for maximum velocity ranging from \(0.05\) to \(0.99\).
Here the dynamics are explicit in the applicable Larmor and Feynman powers.
* We have demonstrated consistency between the total energy derived in terms of the Larmor power, the Feynman power, and the quantum Bogolyubov coefficients.
* We have derived the spectral distributions of these two accelerating, but asymptotically static, motions analytically, and further shown consistency with the total energy emission and total particle count. In addition to 3D plots of the radiation angular distribution we discussed the angular limits (e.g. the forward and transverse emission) and high frequency limits.
* We have laid out explicitly a quantum-classical correspondence to the moving mirror model, mapping between the classical spectral distribution and the quantum Bogolyubov coefficients.
The demonstrated consistency and explicit correspondence enhances the utility of the moving mirror model by showing its role as a point charge analog. Thus the accelerated boundary correspondence of the moving mirror to black hole radiation may potentially point to a connection to accelerating charge radiation via a Hawking-Feynman-Larmor correspondence.
This is an exciting prospect for future directions. It may be tractable to link directly these electron trajectories to curved spacetime counterparts, revealing spacetime metrics that radiate with similar nonthermal spectra (or reveal charge motions that could show a period of thermal emission). Further, given that a connection for beta decay to a moving mirror analog has been made [43; 44; 45], other well-known QED scattering processes might correspond at lowest order to one of the solutions given. Asymptotic rest, with its finite particles and unitarity, could be a powerful tool, and it would be interesting to develop further solutions, such as the Schwarzschild-Planck radiation [38; 39; 40] to compare accelerating electron and black hole radiation in the thermal limit.
###### Acknowledgements.
Funding comes in part from the FY2021-SGP-1-STMM Faculty Development Competitive Research Grant No. 021220FD3951 at Nazarbayev University. This work is supported in part by the Energetic Cosmos Laboratory, and in part by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under contract no. DE-AC02-05CH11231.
## Appendix A Spectral Distribution Calculation
To show how one can go from the formula for the spectral distribution, Eq. (6), to the modified Bessel function result we illustrate the steps for the self dual case. The integral has the form
\[A\equiv\int_{-\infty}^{+\infty}dt\,\dot{x}\,e^{i\omega(t-x\cos\theta)}. \tag{10}\]
Substituting in the self dual expressions for \(x(t)\) from Eq. (7), and \(\dot{x}\), and writing \(T\equiv\cos\theta\) we have
\[A = \int_{-\infty}^{+\infty}dt\,\frac{-2v\kappa t}{\kappa^{2}t^{2}+1 }\,e^{i\omega[t+(vT/\kappa)\ln(\kappa^{2}t^{2}+1)]} \tag{11}\] \[= \frac{-2v}{\kappa}\int_{-\infty}^{+\infty}ds\,s(s^{2}+1)^{-1+i \omega vT/\kappa}\,e^{i\omega s/\kappa}\] (12) \[= \frac{-4iv}{\kappa}\int_{0}^{\infty}ds\,s(s^{2}+1)^{-1+i\omega vT /\kappa}\,\sin\frac{\omega s}{\kappa}. \tag{13}\]
In the second line we have taken the exponential of the log term, and defined \(s=\kappa t\), while in the third line we have used that we must take the odd part of the remaining exponential to give an even integrand over the symmetric range of integration.
This integral can be evaluated through Gradshteyn & Ryzhik 3.771.5 [46], resulting in
\[A = \frac{4v}{\kappa\sqrt{\pi}}\,\left(\frac{\omega}{2\kappa}\right) ^{1/2-i\omega vT/\kappa}\,\sinh(\pi\omega vT/\kappa)\,\Gamma\left(\frac{i \omega vT}{\kappa}\right) \tag{14}\] \[\times K_{1/2+i\omega vT/\kappa}\left(\frac{\omega}{\kappa} \right)\.\]
The modulus squared, using that \(|\Gamma(ix)|^{2}=\pi/(x\sinh\pi x)\), is
\[|A|^{2}=\frac{8v}{\kappa^{2}T}\,\sinh(\pi\omega vT/\kappa)\,\left|K_{1/2+i \omega vT/\kappa}\left(\frac{\omega}{\kappa}\right)\right|^{2}. \tag{15}\]
For the betaK case we proceed similarly, noting that since \(\dot{x}\) is even in time in that case we must take the even part of the exponential (i.e. cosine).
## Appendix B Leonardo's Pitcher: From Electron to betaK
The motion of a relativistic particle with unit mass subject to an external force comes from the action2
Footnote 2: This is a first prototypical system of a relativistic Lagrangian (see e.g. page 323 of [47]).
\[S=-\int dt\,\left(\sqrt{1-v^{2}}+Fx\right). \tag{16}\]
For a force dependent only on position the equations of motion are simply
\[\alpha = \frac{d}{dt}\frac{v}{\sqrt{1-v^{2}}}\equiv\frac{d(\gamma v)}{dt} \tag{17}\] \[= (0,\alpha_{y},0)\, \tag{18}\]
where the last line holds for purely vertical force, and we will take \(\alpha_{y}=\,\)const (e.g. gravity in Leonardo's water pitcher experiment). Finally, we take the initial velocity to be purely horizontal, \(v=(v_{0},0,0)\).
The results are simple - nonuniform motion in the horizontal direction due to the relativistic boost factor \(\gamma\), and hyperbolic motion under constant acceleration in the vertical direction - but worth quickly going through to reveal the form of nonuniformity.
The \(z\) direction is trivial: as there is no initial velocity, nor subsequent acceleration, in this direction, the equation of motion guarantees that \(z(t)=z(0)\) and we can ignore this dimension. In the \(x\) (horizontal) direction, the equation of motion gives
\[\gamma(t)v_{x}(t)=\gamma_{0}v_{0}\, \tag{26}\]
and the key point is that while nonrelativistically one would simply have \(v_{x}(t)=v_{0}\), i.e. uniform motion, the Lorentz factor \(\gamma\) couples in the \(y\) motion (recall \(\gamma=1/\sqrt{1-v_{x}^{2}-v_{y}^{2}}\)), which is accelerated. This results in nonuniform motion horizontally.
We can relate \(v_{x}\) and \(v_{y}\), and solve for both motions by squaring Eq. (26) to get
\[v_{x}^{2}=(1-v_{y}^{2})v_{0}^{2}. \tag{27}\]
This immediately tells us that \(v_{x}\) has its maximum value at the initial time, so \(v_{x}(t)<v_{0}=v_{x}(0)\). That is, the vertical acceleration effectively causes a horizontal deceleration!
In the \(y\) (vertical) direction, the equation of motion gives \(\gamma v_{y}=\alpha_{y}t\) so
\[v_{y}=\frac{\kappa t}{\sqrt{1+(\kappa t)^{2}}}. \tag{28}\]
At late times this approaches the speed of light. To presage the betaK mirror analogy we have written \(\kappa\equiv\alpha_{y}/\gamma_{0}\). Finally, with Eq. (27) we obtain the horizontal velocity
\[v_{x}=\frac{v_{0}}{\sqrt{1+(\kappa t)^{2}}}\, \tag{29}\]
which indeed decelerates from its initial value to zero. Again presaging the mirror analog, we will end up with an asymptotically static mirror defined by the 1D horizontal motion.
Integrating the velocities gives the trajectories, with
\[y(t)=\kappa^{-1}\sqrt{1+\kappa^{2}t^{2}}-\kappa^{-1}\, \tag{30}\]
revealing hyperbolic motion in the vertical direction. In the horizontal direction,
\[x(t)=\frac{v_{0}}{\kappa}\,\sinh^{-1}\kappa t\, \tag{31}\]
exactly (after a trivial sign flip on initial velocity) the betaK trajectory, Eq. (15).
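As a numerical cross-check (ours), one can integrate the relativistic equations of motion directly and compare with the closed forms of Eqs. (30)-(31); the parameter values below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

v0, alpha_y = 0.9, 0.3                 # initial horizontal speed and vertical force (c = m = 1)
gamma0 = 1.0 / np.sqrt(1.0 - v0**2)
kappa = alpha_y / gamma0

def rhs(t, state):
    """d(gamma v)/dt = (0, alpha_y), written for the momentum p = gamma v;
    the velocity is recovered as v = p / sqrt(1 + |p|^2)."""
    x, y, px, py = state
    gamma = np.sqrt(1.0 + px**2 + py**2)
    return [px / gamma, py / gamma, 0.0, alpha_y]

t_eval = np.linspace(0.0, 20.0, 200)
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, gamma0 * v0, 0.0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

x_exact = (v0 / kappa) * np.arcsinh(kappa * t_eval)                  # Eq. (31)
y_exact = (np.sqrt(1.0 + (kappa * t_eval)**2) - 1.0) / kappa         # Eq. (30)
print(np.max(np.abs(sol.y[0] - x_exact)), np.max(np.abs(sol.y[1] - y_exact)))  # both tiny
```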
|
2308.00365 | Vertical structure of buoyancy transport by ocean baroclinic turbulence | Ocean mesoscale eddies enhance meridional buoyancy transport, notably in the
Antarctic Circumpolar Current where they contribute to setting the deep
stratification of the neighboring ocean basins. The much-needed
parameterization of this buoyancy transport in global climate models requires a
theory for the overall flux, but also for its vertical structure inside the
fluid column. Based on the quasi-geostrophic dynamics of an idealized patch of
ocean hosting an arbitrary vertically sheared zonal flow, we provide a
quantitative prediction for the vertical structure of the buoyancy flux without
adjustable parameters. The prediction agrees quantitatively with meridional
flux profiles obtained through numerical simulations of an idealized patch of
ocean with realistic parameter values. This work empowers modelers with an
explicit and physically based expression for the vertical profile of buoyancy
transport by ocean baroclinic turbulence, as opposed to the common practice of
using arbitrary prescriptions for the depth-dependence of the transport
coefficients. | Julie Meunier, Benjamin Miquel, Basile Gallet | 2023-08-01T08:13:30Z | http://arxiv.org/abs/2308.00365v1 | # Vertical structure of buoyancy transport by ocean baroclinic turbulence
###### Abstract
We derive a prediction for the depth-dependence of the buoyancy flux associated with ocean baroclinic turbulence in an idealized setup.
The prediction is validated quantitatively by simulations of an idealized patch of ocean meant to resemble Southern Ocean conditions.
The prediction can be readily implemented into global models as a vertical profile for the Gent-McWilliams coefficient.
###### Abstract
Ocean mesoscale eddies enhance meridional buoyancy transport, notably in the Antarctic Circumpolar Current where they contribute to setting the deep stratification of the neighboring ocean basins. The much-needed parameterization of this buoyancy transport in global climate models requires a theory for the overall flux, but also for its vertical structure inside the fluid column. Based on the quasi-geostrophic dynamics of an idealized patch of ocean hosting an arbitrary vertically sheared zonal flow, we provide a quantitative prediction for the vertical structure of the buoyancy flux without adjustable parameters. The prediction agrees quantitatively with meridional flux profiles obtained through numerical simulations of an idealized patch of ocean with realistic parameter values. This work empowers modelers with an explicit and physically based expression for the vertical profile of buoyancy transport by ocean baroclinic turbulence, as opposed to the common practice of using arbitrary prescriptions for the depth-dependence of the transport coefficients.
## Plain Language Summary
Ocean mesoscale vortices are turbulent structures tens of kilometers wide that play a central role in transporting tracers such as heat, salt and carbon. In the Southern Ocean, the associated buoyancy transport crucially sets the deep stratification of neighboring ocean basins. Because mesoscale vortices are not resolved by most state-of-the-art climate models, modelers resort to rather crude parameterizations where - in the absence of a better theory - the transport properties of the eddies are often assumed to be depth-invariant in the ocean interior. In this contribution we derive a quantitative and parameter-free prediction for the vertical structure of the turbulent buoyancy flux, which can be readily implemented in global models at little computational cost.
## 1 Introduction
The baroclinic instability of large-scale ocean currents generates mesoscale eddies that strongly enhance heat and tracer transport. In the Antarctic Circumpolar Current, the resulting turbulent buoyancy transport contributes to setting the slope of the Southern Ocean density surfaces and therefore the deep stratification of the neighboring ocean basins (Wolfe & Cessi, 2010; Nikurashin & Vallis, 2011, 2012). Mesoscale eddies have a core size comparable to the Rossby deformation radius, a length scale of the order of \(60\) km at midlatitudes and \(15\) km in the Southern Ocean, smaller than the coarse resolution of most global climate models. Parameterizing the transport induced by mesoscale eddies in such global models is thus crucial to obtain realistic ocean states that quantitatively reproduce the sloping density surfaces of the Southern ocean and the deep stratification of ocean basins. Physically-based parameterizations are inferred from the study of an isolated patch of ocean, where baroclinic turbulence has homogeneous statistics in the horizontal directions. The parameterization problem then consists in determining the scaling behavior of the overall diffusivity in terms of the various control parameters (shear flow magnitude, background stratification, bottom friction coefficient, etc.) but also the vertical structure of the various fluxes within the water column. Far more studies have addressed the former task (Phillips, 1954; Salmon, 1978, 1980; Larichev & Held, 1995; Held & Larichev, 1996; Arbic & Scott, 2007; Arbic & Flierl, 2004a, 2004b; Thompson & Young, 2006, 2007; Chang & Held, 2019; Gallet & Ferrari, 2020, 2021) than the latter (Stanley et al., 2020; Zhang & Wolfe, 2022; Yankovsky et al., 2022) in the ocean context. In the absence of a better theory many global models assume that the transport coefficients are depth-invariant in the ocean interior (see, e.g. S. Griffies et al. (2005)), while other models consider surface-intensified coefficients with arbitrary prescriptions for their vertical structure (such as, e.g., assuming that the coefficients are proportional to the local squared buoyancy frequency (Ferreira et al., 2005; Danabasoglu & Marshall, 2007; P. R. Gent, 2011)). The latter assumption of surface-intensified transport coefficients is at odds with idealized eddy-resolving channel simulations, which point to a bottom-enhanced buoyancy transport coefficient instead (Abernathey et al., 2013) (the so-called Gent-McWilliams coefficient, see below).
To improve upon this unsatisfactory state of the art, in this Letter we derive a parameter-free prediction for the vertical structure of the turbulent buoyancy flux within the water column. We consider an idealized patch of ocean with arbitrary background zonal shear flow and stratification, and \(\beta\neq 0\), see Figure 1. Water occupies a volume \((x,y,z)\in[0,L]^{2}\times[-H,0]\) with a stress-free boundary at \(z=0\) and a linear-friction boundary condition at \(z=-H\), in a frame rotating around the vertical axis with a local Coriolis parameter \(f_{0}+\beta y\), where \(y\) denotes the meridional (North-South) coordinate. The fluid layer is density-stratified with an arbitrary profile \(N(z)\) for the buoyancy frequency, and we restrict attention to a single stratifying agent. We focus on the quasi-geostrophic (QG) regime arising for fast rotation and strong stratification (Venaille et al., 2011; Salmon, 1998; Vallis, 2017). The base flow consists of an arbitrary zonal velocity profile \(U(z)\) in thermal wind balance with a \(z\)-dependent meridional buoyancy gradient \(\partial_{y}B=-f_{0}U^{\prime}(z)\), where the prime symbol denotes a vertical derivative. We consider arbitrary departures from this base state with periodic boundary conditions in the horizontal directions. We denote as \(p(x,y,z,t)\) the departure from the base pressure field, with \(u=-p_{y}\) the departure zonal velocity, \(v\,=\,p_{x}\) the departure meridional velocity, \(b\,=\,f_{0}\,p_{z}\) the departure buoyancy and \(w\) the subdominant (geostrophic) vertical velocity. Non-dimensionalizing time and space using \(|f_{0}|^{-1}\) and \(H\), the dimensionless base flow is written as \(U/|f_{0}|H\,\,=\,\,Ro\,\mathcal{U}(z)\), where \(Ro\,=\,|U(0)/f_{0}H|\) is the Rossby number associated with the surface speed of the base flow and \(\mathcal{U}(z)\) denotes the base-flow profile normalized at the surface (\(|\mathcal{U}(0)|=1\)). For brevity we use the same symbols for the dimensionless variables.
Consider a tracer \(\tau\) stirred by the 3D flow and subject to horizontally uniform gradients (at lowest order in \(Ro\)) \(G_{y}^{(\tau)}(z)\) and \(G_{z}^{(\tau)}(z)\) in the meridional and vertical directions, respectively. The QG evolution equation for \(\tau\) reads:
\[\partial_{t}\tau+\mathit{Ro}\,\mathcal{U}(z)\,\tau_{x}+J(p,\tau)=-p_{x}G_{y}^{ (\tau)}(z)-wG_{z}^{(\tau)}(z)+\mathcal{D}_{\tau}\,, \tag{1}\]
where the Jacobian is \(J(g,h)=g_{x}h_{y}-g_{y}h_{x}\) and \(\mathcal{D}_{\tau}\) denotes small-scale diffusion.
Denoting with an overbar a time average together with a horizontal area average, the eddy-induced meridional and vertical fluxes of \(\tau\) are related to the background gradients by a Gent-McWilliams/Redi (GM/R) diffusion tensor (Redi, 1982; P. Gent & McWilliams, 1990; S. M. Griffies, 1998; McDougall & McIntosh, 2001; P. R. Gent, 2011):
\[\begin{pmatrix}\overline{v\tau}\\ \overline{w\tau}\end{pmatrix}=\begin{bmatrix}-K_{R}&(K_{GM}-K_{R})\mathcal{S} \\ -(K_{GM}+K_{R})\mathcal{S}&-K_{R}\mathcal{S}^{2}\end{bmatrix}\begin{pmatrix}G_{ y}^{(\tau)}\\ G_{z}^{(\tau)}\end{pmatrix} \tag{2}\]
Figure 1: **An idealized patch of ocean.** A layer of fluid is subject to global rotation at a rate that varies linearly with the meridional coordinate \(y\). The fluid is density stratified with an arbitrary profile \(N(z)\) for the buoyancy frequency. The background zonal shear flow has an arbitrary profile \(U(z)\). This flow coexists with a background meridional buoyancy gradient. Friction damps kinetic energy on the ocean floor.
where the Redi diffusivity \(K_{R}(z)\) encodes diffusion along the mean isopycnal direction, the GM coefficient \(K_{GM}(z)\) encodes the advective (or skew-diffusive) transport, and we denote the isopycnal slope of the base state as \(\mathcal{S}(z)=Ro\mathcal{U}^{\prime}/N^{2}\). While (2) is often introduced based on physical intuition and educated guesses, we have recently proposed a direct derivation of this diffusion tensor from the quasi-geostrophic dynamics of the present system (Meunier et al., 2023). For completeness we briefly recall a few results from this recent study.
The quasi-geostrophic potential vorticity (QGPV) \(q\ =\ \Delta_{\perp}p+\partial_{z}\left[p_{z}/N^{2}(z)\right]\) is governed by equation (1) with \(\tau\ =\ q\), \(G_{z}^{(q)}\ =\ 0\) and \(G_{y}^{(q)}\ =\ \tilde{\beta}\ -\ \mathcal{S}^{\prime}(z)\), while buoyancy is governed by (1) with \(\tau\ =\ b\), \(G_{z}^{(b)}\ =\ N^{2}\) and \(G_{y}^{(b)}\ =\ -Ro\mathcal{U}^{\prime}\). Substitution of these background gradients into the flux-gradient relation (2) indicates that \(K_{GM}(z)\) and \(K_{R}(z)\) can alternatively be thought of as the effective diffusivities associated with the meridional transport of \(b\) and \(q\), respectively:
\[K_{GM}=-\frac{\overline{vb}}{G_{y}^{(b)}}=\frac{\overline{vb}}{Ro\ \mathcal{U}^{ \prime}}\,,\qquad K_{R}=-\frac{\overline{vq}}{G_{y}^{(q)}}=\frac{\overline{vq }}{\mathcal{S}^{\prime}(z)-\tilde{\beta}}\,. \tag{3}\]
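As a minimal numerical illustration of the flux-gradient relation (2) and of the diffusivity definitions (3), the sketch below builds the GM/R tensor from illustrative (hypothetical) values of \(K_{GM}\), \(K_{R}\) and \(\mathcal{S}\) at a single depth, applies it to the buoyancy gradients \(G_{y}^{(b)}=-Ro\,\mathcal{U}^{\prime}\) and \(G_{z}^{(b)}=N^{2}\), and checks that \(K_{GM}\) is recovered as \(-\overline{vb}/G_{y}^{(b)}\).

```python
import numpy as np

def gm_redi_fluxes(K_GM, K_R, S, G_y, G_z):
    """Apply the GM/Redi flux-gradient relation, Eq. (2), at a single depth."""
    tensor = np.array([[-K_R,               (K_GM - K_R) * S],
                       [-(K_GM + K_R) * S,  -K_R * S ** 2   ]])
    return tensor @ np.array([G_y, G_z])

# Illustrative (hypothetical) dimensionless values at one depth.
Ro, dUdz, N2 = 0.3, 1.0, 400.0
K_GM, K_R = 2.0e-3, 2.5e-3
S = Ro * dUdz / N2                     # isopycnal slope of the base state

vb, wb = gm_redi_fluxes(K_GM, K_R, S, G_y=-Ro * dUdz, G_z=N2)
print(f"meridional buoyancy flux vb = {vb:.3e}")
print(f"vertical buoyancy flux   wb = {wb:.3e}")
print(f"K_GM recovered from -vb/G_y = {-vb / (-Ro * dUdz):.3e}")   # equals K_GM, as in Eq. (3)
```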
Because the vertical velocity vanishes at the surface, the governing equations for \(q\) and \(b\) admit the same limiting form as one approaches the top boundary. Both tracers are advected by the surface horizontal flow, fluctuations being induced by distortions of a horizontally homogeneous background meridional gradient. The associated meridional diffusivity is thus equal for \(b\) and \(q\) at the surface: \(\overline{vb}(0)/G_{y}^{(b)}(0)=\overline{vq}(0)/G_{y}^{(q)}(0)\). Provided the friction coefficient is small, the same holds near the bottom boundary, at a depth \(z\ =\ -1^{+}\) located just above the bottom Ekman layer: \(\overline{vb}(-1^{+})/G_{y}^{(b)}(-1^{+})\simeq\overline{vq}(-1^{+})/G_{y}^{ (q)}(-1^{+})\) (while the friction-induced vertical pumping velocity is crucial for damping kinetic energy through the stretching of planetary vorticity, it has a negligible direct contribution to buoyancy transport for low drag coefficient). Using (3) we recast these equalities as:
\[K_{GM}(0)=K_{R}(0)\,,\qquad K_{GM}(-1^{+})\simeq K_{R}(-1^{+})\,. \tag{4}\]
The two equalities in (4) are illustrated numerically in Meunier et al. (2023). An additional constraint on \(K_{GM}\) and \(K_{R}\) is obtained by substituting the definition of \(q\) into the meridional QGPV flux \(\overline{vq}\). After a few integrations by parts one obtains the Taylor-Bretherton relation \(\overline{vq}=\mathrm{d}(\overline{vb}/N^{2})/\mathrm{d}z\) (Taylor, 1915; Bretherton, 1966; Smith & Marshall, 2009; Dritschel & McIntyre, 2008; Young, 2012), and expressing the meridional fluxes using (3):
\[K_{R}(\mathcal{S}^{\prime}-\tilde{\beta})=\frac{\mathrm{d}}{\mathrm{d}z}(K_{ GM}\,\mathcal{S})\,. \tag{5}\]
In the following we show that the constraints (4-5) allow for a perturbative derivation of the vertical structure of the eddy-induced buoyancy flux within the water column in two situations of interest.
## 2 Case I: The impact of weak \(\beta\) on Eady turbulence
The QG Eady model corresponds to depth-independent stratification \(N^{2}\) and shear \(\mathcal{U}^{\prime}\) (linear zonal velocity profile, \(\mathcal{U}(z)\,=\,z+1\)), together with \(\beta\,=\,0\). As discussed in Gallet et al. (2022), there is no background PV gradient in this setup and therefore a solution can be obtained by assuming \(q\ =\ 0\) in the bulk of the domain. The meridional QGPV flux then vanishes, and from relation (5) we conclude that the meridional buoyancy flux, and thus \(K_{GM}\), are independent of \(z\).
As established in Meunier et al. (2023), \(K_{R}(z)\) is given by the Taylor-Kubo eddy diffusivity coefficient associated with the horizontal geostrophic flow. That is, at every depth \(z\) the coefficient \(K_{R}(z)\) is given by the integral of the Lagrangian correlation function of the horizontal geostrophic flow. Because in the low-drag limit the Eady flow barotropizes, we expect the horizontal geostrophic flow to be depth-invariant, which leads to \(K_{R}\) being independent of \(z\). Using the boundary relation (4) we conclude that the GM and Redi coefficients are depth-invariant
and equal to one another. The low-drag Eady model thus represents one limiting situation for which the depth-invariance and equality of the GM and Redi coefficients can be established. We stress the fact that the equality of the GM and Redi coefficients has been established based on the properties of the low-drag equilibrated state, namely barotropization, and the theory presented below is really a theory for such a low-drag equilibrated - or 'turbulent' - state. By contrast, the theory would not hold to predict the vertical structure of an eigenmode obtained using linear stability analysis, whose transport properties typically display strong depth-dependence (see Supplementary Information).
Consider now the impact of a weak planetary vorticity gradient on Eady turbulence, that is, a Charney model with weak \(\beta\). Within QG, the impact of \(\beta\) is characterized by the product of \(\beta\) with the squared deformation radius over the typical velocity of the background shear flow (Charney, 1947; Thompson & Young, 2007; Gallet & Ferrari, 2021; Chang & Held, 2021). We thus define the (\(z\)-invariant) parameter \(\beta_{*}=\tilde{\beta}N^{2}/Ro\). In the perturbative regime \(\beta_{*}\ll 1\) the correction to the \(z\)-invariant \(\beta_{*}=0\) situation is small, and a standard expansion leads to \(K_{R}(z)=K_{GM}(z)[1+\mathcal{O}(\beta_{*})]\), where the \(\mathcal{O}(\beta_{*})\) correction vanishes both at the top and at the bottom boundary in the low-drag weakly diffusive regime, see equation (4). Substitution into (5) yields \(K_{GM}^{\prime}(z)=-\beta_{*}K_{GM}(z)+\mathcal{O}(\beta_{*}^{2})\) and, neglecting the \(\mathcal{O}(\beta_{*}^{2})\) correction, \(K_{GM}(z)~{}=~{}\text{const.}~{}\times~{}e^{-\beta_{*}z}\). Substituting this expression for \(K_{GM}(z)\) into (3) and denoting the overall buoyancy flux as \(\left\langle vb\right\rangle=\int_{-1}^{0}\overline{vb}(z)\mathrm{d}\tilde{z}\), we obtain a parameter-free prediction for the vertical structure \(\overline{vb}(z)/\left\langle vb\right\rangle\) of the meridional buoyancy flux:
\[\frac{\overline{vb}(z)}{\left\langle vb\right\rangle}=\frac{\beta_{*}}{e^{ \beta_{*}}-1}\,e^{-\beta_{*}z}\,. \tag{6}\]
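The prediction (6) is straightforward to evaluate numerically; the minimal sketch below (with arbitrarily chosen \(\beta_{*}\) values) tabulates the surface-to-bottom flux ratio and checks that the normalized profile integrates to one over the water column.

```python
import numpy as np

def vb_profile_charney(z, beta_star):
    """Normalized meridional buoyancy flux of Eq. (6) for the weak-beta Charney model."""
    return beta_star / (np.exp(beta_star) - 1.0) * np.exp(-beta_star * z)

z = np.linspace(-1.0, 0.0, 201)       # dimensionless depth, z in [-1, 0]
for beta_star in (0.1, 0.5, 1.0):     # illustrative values
    profile = vb_profile_charney(z, beta_star)
    print(f"beta_* = {beta_star:3.1f}: surface/bottom flux ratio = {profile[-1] / profile[0]:.3f}, "
          f"depth integral = {np.trapz(profile, z):.4f}")
```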
To test this perturbative prediction, we have performed numerical simulations of this setup in the QG regime with periodic boundary conditions in the horizontal directions. As detailed in the Supporting Information (see also Meunier et al. (2023)), our numerical approach consists in time-stepping a set of primitive-like equations with tailored \(\beta\) terms that are compatible with the horizontal periodic boundary conditions. Importantly, these tailored terms reduce to the standard \(\beta\) terms in the QG limit. Because we focus on parameter values that are strongly QG, this approach is equivalent to (but more convenient than) directly solving the QG system. In Figure 2 we plot
Figure 2: **An illustrative example: the Charney model with weak \(\beta\).** **a.** Snapshot of the departure buoyancy field \(b\) for \(\beta_{*}=0.5\) (large values in red, low values in blue). **b.** Vertical structure of the meridional buoyancy flux in the equilibrated state for increasing \(\beta_{*}\) (solid line: DNS, dashed line: perturbative prediction (6)). The agreement with the prediction is excellent in the perturbative regime \(\beta_{*}\ll 1\) and deteriorates somewhat as \(\beta_{*}\) reaches \(\mathcal{O}(1)\) values. The perturbative prediction always performs better than the common practice of parameterizing turbulent transport using a depth-invariant \(K_{GM}\), which corresponds to a depth-invariant \(\overline{vb}\) for the present setup. The prediction (6) also performs better than using the meridional flux associated with the most unstable eigenmode, computed perturbatively for weak \(\beta_{*}\) and represented for \(\beta_{*}=0.1\) as a green dash-dotted line.
the vertical structure of the meridional buoyancy flux, \(\overline{vb}(z)/\left\langle vb\right\rangle\), for increasing \(\beta_{*}\). The numerical profiles are in excellent agreement with the parameter-free prediction (6) for low \(\beta_{*}\). As expected, the perturbative prediction deteriorates somewhat as \(\beta_{*}\) increases up to \(\beta_{*}=1\). In the next section we show that the perturbative regime accurately captures the typical oceanic situation, characterized by surface-intensified baroclinic turbulence.
## 3 Case II: Surface-intensified shear and stratification
The perturbative approach developed in the preceding section is based on the small value of the background meridional PV gradient: when low-drag baroclinic turbulence is subjected to a weak meridional PV gradient, the profile of \(K_{GM}(z)\) can be inferred by inserting \(K_{R}(z)\simeq K_{GM}(z)\) into the Taylor-Bretherton relation (5).
The meridional PV gradient associated with \(\beta\) is modest in a typical oceanic setting, which may suggest that one can again use the approximate relation \(K_{R}(z)\simeq K_{GM}(z)\) in the bulk of the domain. However, the PV gradient associated with the \(z\)-dependent shear profile is much greater (see figure 6 of Smith and Marshall (2009)). Fortunately, ocean baroclinic turbulence is surface-intensified and the largest shear-induced meridional PV gradient arises in the upper region of the fluid column, where the approximate equality \(K_{R}\simeq K_{GM}\) holds by virtue of the near-surface relation (4). Once again, one can thus substitute the approximate relation \(K_{R}(z)\simeq K_{GM}(z)\) into the Taylor-Bretherton relation (5) to compute the profile of \(K_{GM}\), this time perturbatively in distance from the upper boundary. We conclude that a useful approximation to the vertical dependence of \(K_{GM}\) should be obtained by substituting \(K_{R}(z)\simeq K_{GM}(z)\) into (5) throughout the entire water column. As for Case I, one way to derail this procedure would be to have a surprisingly large meridional PV flux arise in the interior of the domain despite the very weak PV gradient. This typically happens for a nearly marginal eigenmode, but not for the present low-drag equilibrated - or 'turbulent' - states. In that respect the theory below is really a theory for such equilibrated baroclinic turbulence.
Figure 3: **Surface-intensified baroclinic turbulence** from the base run, meant to resemble a patch of the Antarctic Circumpolar Current (positive values in red and negative values in blue, background profiles in the upper-right panel).
After re-arranging, the substitution of \(K_{R}(z)\simeq K_{GM}(z)\) into (5) leads to the following ODE for the vertical structure of the GM coefficient:
\[\frac{\mathrm{d}}{\mathrm{d}z}\ln K_{GM}=-\frac{\tilde{\beta}}{\mathcal{S}(z)}\,. \tag{7}\]
This relation points to the crucial role of \(\beta\) in setting the vertical structure of the eddy-induced buoyancy flux: according to (7) the common assumption of a depth-invariant GM coefficient is valid for \(\beta=0\) only. For arbitrary \(\beta\) equation (7) can be integrated into:
\[K_{GM}(z)=\text{const.}\times\exp\left[-\int_{0}^{z}\frac{\tilde{\beta}}{ \mathcal{S}(\tilde{z})}\mathrm{d}\tilde{z}\right]\,. \tag{8}\]
We have obtained an explicit expression for the vertical structure of the GM coefficient in terms of the vertical profiles of background stratification and shear. Using equation (3), the expression (8) can be recast into a parameter-free prediction for the vertical structure of the meridional buoyancy flux:
\[\frac{\overline{vb}(z)}{\langle vb\rangle} = \frac{\mathcal{U}^{\prime}(z)\exp\left[-\int_{0}^{z}\frac{\tilde{\beta}}{\mathcal{S}(\tilde{z})}\mathrm{d}\tilde{z}\right]}{\int_{-1}^{0}\mathcal{U}^{\prime}(z)\exp\left[-\int_{0}^{z}\frac{\tilde{\beta}}{\mathcal{S}(\tilde{z})}\mathrm{d}\tilde{z}\right]\mathrm{d}z}\,. \tag{9}\]
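For background profiles that do not lead to a closed-form integral, Eqs. (8)-(9) are easily evaluated by numerical quadrature. The sketch below is a minimal implementation (the profiles fed to it are illustrative assumptions; only the structure of Eqs. (3), (8) and (9) is taken from the text).

```python
import numpy as np

def vb_profile(z, U, N2, Ro, beta_tilde):
    """Normalized meridional buoyancy flux from Eqs. (8)-(9), by quadrature.
    z runs from -1 (bottom) to 0 (surface); U and N2 are arrays sampled on z."""
    dUdz = np.gradient(U, z)
    S = Ro * dUdz / N2                      # isopycnal slope of the base state
    f = beta_tilde / S
    # cumulative trapezoidal integral of f from the bottom up to each depth
    I = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(z))))
    I = I - I[-1]                           # now I(z) = integral of beta/S from 0 down to z
    K_GM = np.exp(-I)                       # Eq. (8), up to a multiplicative constant
    vb = dUdz * K_GM                        # Eq. (3): vb = Ro * U' * K_GM (constant dropped)
    return vb / np.trapz(vb, z)             # Eq. (9): normalize by the depth integral

# Illustrative inputs (assumed values, chosen to mimic a surface-intensified flow).
z = np.linspace(-1.0, 0.0, 2001)
profile = vb_profile(z, U=np.exp(z / 0.5), N2=400.0 + 350.0 * z, Ro=0.3, beta_tilde=4.0e-4)
print(f"bottom-to-top flux ratio: {profile[0] / profile[-1]:.2f}")
```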
To test the prediction (9) we have performed numerical simulations of surface-intensified baroclinic turbulence with parameter values typical of the Antarctic Circumpolar Current (ACC). The dimensionless stratification profile is linear in \(z\) and surface intensified, \(N^{2}(z)\,=\,a_{0}\,+\,a_{1}\,z\), with constant coefficients \(a_{0}\) and \(a_{1}\). The shear flow has an exponential profile \(\mathcal{U}(z)=s\,e^{z/\ell}\) with an e-folding scale \(\ell\) (in units of \(H\)) and a sign prefactor \(s=+1\) for an eastward flow and \(s=-1\) for a westward one. We perform a base run with dimensionless QG parameter values similar to the situation addressed by Smith and Marshall (2009): dimensional magnitude \(|f_{0}|=1.23\times 10^{-4}\) s\({}^{-1}\) for the Coriolis parameter and \(\beta=1.23\times 10^{-11}\) m\({}^{-1}\).s\({}^{-1}\) for the planetary vorticity gradient (corresponding to a latitude of \(57.5^{o}\)S), depth of fluid equal to \(H=4000\) m, eastward shear flow with surface speed \(U(0)=0.15\) m.s\({}^{-1}\) and e-folding scale of \(2000\) m. The dimensional buoyancy frequency ranges from \(8.7\times 10^{-4}\) s\({}^{-1}\) at the bottom to \(2.46\times 10^{-3}\) s\({}^{-1}\) at the surface, which corresponds to a Rossby deformation radius \(\lambda\simeq 19\) km based on the rough WKB estimate \(\lambda/H=\int_{-1}^{0}N(z)\mathrm{d}z/\pi\) (recalling that \(N(z)\) is non-dimensionalized with \(|f_{0}|\)). In terms of dimensionless parameters, these values translate into a Rossby number \(Ro=0.3\), a vertical scale \(\ell=0.5\) for the shear flow, a dimensionless planetary vorticity gradient \(\tilde{\beta}=4.0\times 10^{-4}\) and stratification coefficients \(a_{0}\,=\,400\) and \(a_{1}\,=\,350\). To ensure that the base numerical run indeed corresponds to the fully QG regime, we have used values for \(Ro\) and \(\tilde{\beta}\) that are smaller by a factor of \(10\) (that is, we use \(Ro\,=\,0.03\) and \(\tilde{\beta}\,=\,4.0\,\times 10^{-5}\)), which leaves invariant the dissipation-free QG dynamics.
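As a small consistency check on the quoted numbers (a sketch that only uses values stated in this paragraph), the rough WKB estimate \(\lambda/H=\int_{-1}^{0}N(z)\mathrm{d}z/\pi\) can be evaluated directly for the linear \(N^{2}(z)\) profile of the base run:

```python
import numpy as np

f0, H = 1.23e-4, 4000.0          # Coriolis parameter (s^-1) and depth (m) of the base run
a0, a1 = 400.0, 350.0            # dimensionless N^2(z) = a0 + a1*z, with N in units of |f0|

z = np.linspace(-1.0, 0.0, 10001)
N = np.sqrt(a0 + a1 * z)         # dimensionless buoyancy frequency

print(f"surface N = {N[-1] * f0:.2e} s^-1")   # ~2.46e-3 s^-1
print(f"bottom  N = {N[0] * f0:.2e} s^-1")    # ~8.7e-4 s^-1

lam = H * np.trapz(N, z) / np.pi              # rough WKB deformation radius
print(f"WKB deformation radius ~ {lam / 1e3:.1f} km")  # close to the ~19 km quoted above
```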
Together with this base run we have performed a run without \(\beta\) and a run with \(\beta>0\) and a westward base flow. These additional runs are performed with slightly larger stratification (\(a_{0}=800\) and \(a_{1}\,=\,700\)) using the inferred values \(Ro\,=\,0.3\) and \(\tilde{\beta}\,=\,4.0\,\times 10^{-4}\) for the dimensionless buoyancy and planetary vorticity gradients. Finally, we have repeated similar runs using the larger value \(\ell=1\) for the e-folding scale of the shear (see Supporting Information for the values of the other parameters).
In Figure 3 we provide snapshots of the buoyancy and velocity fields in the equilibrated state of the base run. As expected the turbulence is surface intensified and so is the meridional buoyancy flux \(\overline{vb}\), provided in the upper-right panel of Figure 4. Substituting the linear profile for \(N^{2}(z)\) and the exponential profile for \(\mathcal{U}(z)\) into expression (9), the theoretical prediction for the vertical structure of the meridional buoyancy flux becomes:
\[\frac{\overline{vb}(z)}{\langle vb\rangle}=\frac{\exp\left\{\frac{z}{\ell}+ \frac{s}{Ro}[a_{0}+a_{1}(z+\ell)]e^{-\frac{z}{\ell}}\right\}}{\int_{-1}^{0} \exp\left\{\frac{z}{\ell}+\frac{s}{Ro}[a_{0}+a_{1}(z+\ell)]e^{-\frac{z}{\ell} }\right\}\mathrm{d}z}\,. \tag{10}\]
We compare this prediction to the numerically determined meridional flux profiles in Figure 4. The agreement is very good for both values of \(\ell\), both with and without \(\beta\), and for both eastward and westward flows. For \(\beta=0\) the theoretical prediction is that of a depth-invariant GM coefficient, and thus a meridional buoyancy flux that inherits the vertical structure \(\mathcal{U}^{\prime}(z)\) of the background shear. The good agreement with the numerical profiles validates this prediction and indicates that a depth-invariant GM coefficient is indeed an excellent parameterization when \(\beta=0\). For \(\beta\neq 0\), however, the prediction (10) departs from the common practice of using a depth-invariant GM coefficient. In all cases the prediction (10) better captures the vertical structure of \(\overline{vb}\), without adjustable parameters (see Figure 4). The difference between the two predictions - equation (10) versus uniform \(K_{GM}\) - is modest for \(\ell=0.5\) and greater for \(\ell=1.0\). In particular, using a uniform \(K_{GM}\) would lead to the same vertical structure for the buoyancy flux regardless of whether the base flow is directed eastward or westward. By contrast, the numerical data indicate that the vertical structure strongly depends on the direction of the base flow: for \(\ell=0.5\) the bottom-to-top meridional flux ratio, evaluated as \(\overline{vb}(-0.95)/\overline{vb}(-0.05)\), is 12% for a westward flow and 24% for an eastward flow. For \(\ell=1\) this ratio is 15% for a westward flow and 73% for an eastward flow. For a visual illustration of these differences, we represent in Figure 4 the uniform-\(K_{GM}\) prediction as an orange dotted line for comparison with the present prediction, demanding that the two predictions be equal at the top surface \(z=0\).
Figure 4: **Vertical structure of the meridional buoyancy flux** for an eastward flow, for a westward flow and for the case \(\beta=0\), using either \(\ell=0.5\) or \(\ell=1\). The solid line is the profile extracted from the numerical runs. The dashed line is the theoretical prediction (10). The orange dotted line corresponds to the uniform-\(K_{GM}\) model that matches the surface value of the full prediction (10) (see text for details). The prediction (10) reduces to a depth-invariant \(K_{GM}\) when \(\beta=0\), which agrees accurately with the \(\beta=0\) numerical profiles. For \(\beta\neq 0\) the prediction (10) departs from a uniform \(K_{GM}\) and agrees well with the numerical profiles.
## 4 Conclusion
The predictions (8) and (9) are based on a perturbative approach that holds in the near-surface and near-bottom regions of the fluid column for arbitrary meridional potential vorticity gradient, and throughout the entire fluid column when the meridional potential vorticity gradient is weak. The present perturbative framework is useful for baroclinic turbulence in the ocean, where the shear flow and meridional PV gradient are boundary-intensified, with weaker PV flux in the interior. One can then combine the Taylor-Bretherton relation between the buoyancy and PV fluxes with the near-equality of the GM and Redi coefficients in the vicinity of the boundaries. This leads to a prediction for the vertical structure of the buoyancy flux that agrees well with the profiles extracted from direct numerical simulations, see Fig. 4. It would be interesting to further investigate the range of validity of the predictions (8) and (9) beyond the present oceanographically relevant situations. For instance, a system with a vanishing meridional buoyancy gradient at the bottom \((G_{y}^{(b)}(-1)=0)\) may emphasize the role of bottom friction and disrupt the relation \(K_{GM}(-1^{+})\simeq K_{R}(-1^{+})\). More generally, while surprisingly successful the present perturbative approach should probably be used with caution whenever the exponential factor in (8) varies by much more than a factor of two within the water column.
For eastward shear flows (positive shear) the right-hand side of equation (7) is negative: \(K_{GM}(z)\) is greater at depth according to both the theory and the numerics, even though the turbulence is surface-intensified. This prediction is fully compatible with the \(K_{GM}\)-profile reported by Abernathey et al. (2013) and challenges models where the \(K_{GM}\)-profile is assumed to be proportional to the profile of \(N^{2}(z)\)(Ferreira et al., 2005). The present results also seem to invalidate the idea that the vertical structure of the flux could be governed by a single baroclinic mode (Stanley et al., 2020). Indeed, the modal decomposition (Flierl, 1978) is the same for all panels of Fig. 4 and yet the buoyancy flux profiles differ strongly between panels.
Another idea put forward in the atmospheric context is that the flux profiles in the equilibrated state resemble those of the most unstable mode inferred from linear stability analysis (Green, 1970; Held & O'Brien, 1992; Chai & Vallis, 2014). An issue with this approach is that only the equilibrated state is governed by the diffusion tensor (2), see the derivation in Meunier et al. (2023). In particular, equation (2) indicates that the ratio of the vertical to the meridional buoyancy flux is given by the mean isopycnal slope \(\mathcal{S}\) (adiabatic transport). As discussed in Eady (1949) and Vallis (2017), this constraint does not hold for an unstable eigenmode because of the non-stationary terms, the associated profiles \(\overline{wb}(z)\) and \(\overline{vb}(z)\) being therefore incompatible with (2) (in other words, one would infer a different profile for \(K_{GM}(z)\) based on \(\overline{vb}(z)\) or \(\overline{wb}(z)\)). We have nevertheless computed the most unstable eigenmode of the present Charney model, perturbatively for weak \(\beta_{*}\) (see Supporting Information). As shown in Figure 2, the associated meridional buoyancy flux overpredicts the variations of \(\overline{vb}(z)\) with depth and compares unfavorably with the present prediction (6). The most-unstable-mode approach may be better suited for weakly nonlinear atmospheric states characterized by a weak supercriticality \(\xi=1/\beta_{*}\), as opposed to the present large-supercriticality oceanic situations (Jansen & Ferrari, 2012).
The success of the perturbative approximation \(K_{R}=K_{GM}\) throughout the entire water column for case II above may come as a surprise to the reader accustomed to channel simulations, where \(K_{R}\) typically exceeds \(K_{GM}\) in the interior (see e.g. Abernathey et al. (2013)). The reason for this success is that the meridional QGPV gradient is small in the interior and around the so-called 'steering levels' (Green, 1970; Treguier, 1999; Smith & Marshall, 2009; Abernathey et al., 2010, 2013), making the QGPV flux \(\overline{vq}\) negligible there (see e.g. figure 6 of Smith and Marshall (2009)). One thus makes a negligible error by inferring the buoyancy and QGPV flux profiles using the approximation \(K_{R}=K_{GM}\) throughout the entire water column.
The perturbative prediction (8) for the vertical structure of the GM coefficient is simple to implement, it is easily extended to a patch of ocean subject both to zonal and meridional large-scale gradients and shear flows, it is free of adjustable parameters - except for the overall magnitude of the transport - and it compares very favorably with the common practice of using a depth-invariant GM coefficient. The implementation of (8) in a global model should lead to a more accurate description of the stratification of the Southern Ocean, and therefore of neighboring ocean basins. Beyond this modeling application, the physically-based vertical structure (9) for the buoyancy flux could be of use to infer the buoyancy flux throughout the entire water column based on near-surface data. Indeed, figure 4 shows that the prediction (9) allows one to propagate the value of the near-surface flux to the interior of the water column in a way that agrees closely with the full DNS profile. By contrast, propagating the near-surface information using a uniform GM coefficient would lead to the orange line in figure 4, which at depth typically departs from the DNS profile by \(40\%\) to \(100\%\) depending on the situation.
## Acknowledgments
This research is supported by the European Research Council under grant agreement FLAVE 757239. The numerical study was performed using HPC resources from GENCI-CINES and TGCC (grants 2021-A0102A10803, 2022-A0122A12489 and 2023-A0142A12489).
|
2302.05246 | Electrical characterization of the azimuthal anisotropy of
$(\mathrm{Ni}_x\mathrm{Co}_{1-x})\mathrm{B}$-based ferromagnetic nanotubes | We report on the structural, electric and magnetic properties of
$(\mathrm{Ni}_x\mathrm{Co}_{1-x})\mathrm{B}$ ferromagnetic nanotubes,
displaying azimuthal magnetization. The tubes are fabricated using electroless
plating in polycarbonate porous templates, with lengths several tens of
micrometers, diameters from 100nm to 500nm and wall thicknesses from 10nm to
80nm. The resistivity is $\sim 1.5\times10^{-6}\mathrm{\Omega\cdot m}$ and the
anisotropic magnetoresistance~(AMR) is 0.2-0.3%, one order of magnitude
larger~(resp. smaller) than in the bulk material, which we attribute to the
resistance at grain boundaries. We determined the azimuthal anisotropy field
from M(H) AMR loops of single tubes contacted electrically. Its magnitude is
around 10mT, and tends to increase with the tube wall thickness, as well as the
Co content. However, surprisingly it does not depend much on the diameter
nor on the curvature. | Dhananjay Tiwari, Martin Christoph Scheuerlein, Mahdi Jaber, Eric Gautier, Laurent Vila, Jean-Philippe Attané, Michael Schöbitz, Aurélien Masseboeuf, Tim Hellmann, Jan P. Hofmann, Wolfgang Ensinger, Olivier Fruchart | 2023-02-10T13:53:16Z | http://arxiv.org/abs/2302.05246v1 | Electrical characterization of the azimuthal anisotropy of (Ni\({}_{x}\)Co\({}_{1\,-\,x}\))B-based ferromagnetic nanotubes
###### Abstract
We report on the structural, electric and magnetic properties of (Ni\({}_{x}\)Co\({}_{1\,-\,x}\))B ferromagnetic nanotubes, displaying azimuthal magnetization. The tubes are fabricated using electroless plating in polycarbonate porous templates, with lengths of several tens of micrometers, diameters from \(100\,\mathrm{nm}\) to \(500\,\mathrm{nm}\) and wall thicknesses from \(10\,\mathrm{nm}\) to \(80\,\mathrm{nm}\). The resistivity is \(\sim 1.5\times 10^{-6}\,\mathrm{\Omega}\cdot\mathrm{m}\) and the anisotropic magnetoresistance (AMR) is \(0.2\) to \(0.3\%\), one order of magnitude larger (resp. smaller) than in the bulk material, which we attribute to the resistance at grain boundaries. We determined the azimuthal anisotropy field from M(H) AMR loops of single tubes contacted electrically. Its magnitude is around \(10\,\mathrm{mT}\), and tends to increase with the tube wall thickness, as well as the Co content. However, surprisingly, it does not depend much on the diameter nor on the curvature.
+
Footnote †: Present address: Advanced Safety and User Experience, Aptiv Services Poland SA, Krakow, Poland
## I Introduction
Nanotubes (NTs) are hollow structures characterized by a sub-micrometer diameter, a wall thickness (outer minus inner diameter), and a length much larger than the diameter. They are part of the wider family of one-dimensional structures (1-D), which in magnetism provide an ideal platform for both the fundamental investigation of domain-wall (DW)[1] or skyrmion[2] motion, spin-wave propagation[3], and the implementation of logic[4; 5] or memory functionalities[6; 7]. While most developments for magnetism in 1-D structures have been based on flat strips fabricated by the combination of physical deposition and nanofabrication so far, cylindrical structures offer specific physics related to curvature and dimensionality[8]. For instance, a magnetic domain wall with a unique topology had been predicted to arise in nanowires and give rise to very high mobilities, the Bloch-point wall[9; 10; 11], whose existence and high mobility were recently confirmed experimentally[12; 13]. NTs provide two additional degrees of freedom compared to nanowires, one being the ratio of outer over inner radius, the second being the ability to fabricate core-shell structures with interfaces. The latter is particularly appealing, as most spintronic effects arise from interfaces. One expects the magnetization to be uniform and parallel to the axis in long NTs made of a soft-magnetic material, because of the dipolar shape effect[14; 15]. Accordingly, theory predicted that the behavior of such magnetic NTs is very similar to that of magnetic nanowires, such as the occurrence of curling at the apex of the tube[16], and vortex-type domain walls with high mobilities[17; 18; 19]. Other theoretical works examined the situation of tubes with azimuthal magnetization, predicting other specific features such as the curvature-induced non-reciprocal propagation of Damon-Eshbach-type spin waves[20].
From the experimental point of view there are now many methods for fabricating long NTs, based on the coating of porous anodized alumina[21] or polymer [22] templates, or wire templates such as resulting from VLS growth[23]. The coating methods include electrochemical deposition [24; 25], atomic layer deposition (ALD) [26], electroless plating [22], chemical vapour deposition (CVD) [27] or physical deposition. Yet another route for the fabrication of NTs is the nano-rolling of free thin films[28], however rather delivering diameters in the micrometer range. While the case of NTs with axial magnetization has been confirmed as expected[29], there have been a number of reports, demonstrating that domains with azimuthal magnetization could be obtained experimentally[28; 30; 31; 32; 33], either by coating non-magnetic wire templates by tilted-incidence physical deposition, rolled thin films or electroless plating of porous templates. While in the former two azimuthal magnetic anisotropy is reminiscent of the one arising in thin films induced by tilted deposition or uniaxial strain, the latter came more unexpectedly, and has been ascribed to the curvature-induced anisotropy of intergranular anisotropy or magneto-elastic energy[31]. It is the purpose of the present work to report extensively on the link between azimuthal anisotropy in electroless-plated NTs and tube diameter, wall thickness and material composition. The motivation is to provide a panorama of static properties that can be obtained, before searching for the magnetization dynamics predicted for NTs with azimuthal magnetization, and possibly to shed light on the microscopic origin of magnetic anisotropy in such NTs.
## II Synthesis and structural analysis
Three batches of (Ni\({}_{x}\)Co\({}_{1\,-\,x}\))B NTs (\(x=30\), 50 and 80) were fabricated using electroless plating in ion track-etched polycarbonate membranes. The synthesis of the NTs is based on a previously-described procedure [31; 34], and is schematically shown in Fig. 1 (a)-(d). First, polycarbonate foils are irradiated with swift heavy ions, creating latent damage tracks that are more vulnerable to chemical etching than the surrounding bulk polymer [Fig. 1 (a)]. Subsequent treatment in a NaOH solution yields cylindrical pores, which are used as templates for the fabrication of NTs [Fig. 1 (b)]. In order to initiate the electroless deposition reaction, catalytically-active Pd nanoparticles are deposited on the membrane surface by alternately submerging the membrane in Sn(II)- and Pd(II)-containing solutions [Fig. 1 (c)]. Subsequently, the surfaces of the membrane are coated with (Ni\({}_{x}\)Co\({}_{1\,-\,x}\))B by electroless plating, including the inside of the pores, yielding the formation of tubes. The Ni-to-Co ratio is tuned by changing the relative concentration of the respective metal precursors in the plating bath, while B is introduced as a byproduct by the reducing agent (dimethyl aminoborane, DMAB). A more detailed description of the NT fabrication and of the underlying mechanisms can be found in the appendix, section E. As indicated by XPS measurements, the B content of the material is in the range of 20 at.%, which is typical for electroless CoB and NiB deposits fabricated using DMAB as a reducer (see Appendix, section F). After synthesis, the polycarbonate membranes were dissolved in dichloromethane. Ideally, this would yield a suspension of purely single, isolated NTs. However, since also the top and bottom surfaces of the membrane are coated during the electroless plating process, some of the tubes remain attached to one another [see Fig. 1 (e)]. Nonetheless, due to the considerable amount of mechanical stress caused by the swelling of the polymeric matrix during dissolution, many single tubes are present in the suspension. Next, a drop of diluted NT suspension is applied onto highly-resistive silicon wafers, to obtain single NTs ready for further analysis, and ultimately electrical contacting, or to a copper grid with a lacey carbon film for TEM analysis. Images obtained in conventional (S)TEM imaging are presented in Fig. 2 (a-c). The nano-granular structure of the grown layer is clearly visible. As transmission images of tubular structures overlap information from the top and bottom layer in the projected image, we focused on a broken tube [Fig. 2 (b-inset)] to conduct high-resolution imaging on a single layer. This delivers sharp images, from which the typical size of grains is inferred to be 8(5) nm with a typical grain boundary as large as 1 nm. The grain boundaries appear black in the HAADF contrast image [Fig. 2 (c)], which points at light elements, compatible with Boron (however, the detection of Boron was not possible at this high magnification in our setup). A higher magnification [Fig. 2 (c-inset)] image showed that the grains display a finer structure, light with HAADF contrast, which we accordingly associate with the Pd seeds used for the electroless growth.
We used Focused Ion Beam to slice NTs and perform a cross-sectional analysis in a TEM. Fig. 2 (d) shows the TEM lamella before the final thinning, whose thickness we estimated to be around 80(10) nm from know-how in such preparation. Observation of the thinned lamella indicates a rather homogeneous wall thickness of 60 nm. Energy-Dispersive X-ray (EDX) analysis of the tubes, which provides a qualitative yet not fully quantitative view, revealed that the NiCo composition is not homogeneous across the tube thickness. Instead, nickel tends to segregate towards both the inner and outer surfaces of the NT [Fig. 2 (e) and (f)]. This surface enrichment in nickel comes with a decrease in cobalt content, an effect slightly more pronounced at the inner surface. We further analyzed the slice with Electron Energy Loss Spectroscopy (EELS). This revealed a variation of composition from Ni\({}_{50}\)Co\({}_{50}\) at the outer surface to Ni\({}_{40}\)Co\({}_{60}\) at the inner surface, separated by a plateau in the core of the material with a Ni\({}_{30}\)Co\({}_{70}\) composition. EELS also confirmed the absence of oxidation at the inner side of the NT[35]. We also performed Electron Holography on the slice using the time-reversal [36] method to separate the electrostatic (Mean inner potential - MIP)
Figure 1: Illustration of the NT growth process using electroless plating. (a) Damaged-track formation by swift heavy-ion irradiation. (b) Cylindrical pores formed inside polycarbonate membranes via chemical etching. (c) Formation of the active seed layer (Pd in our case) as a catalyst for material growth. (d) Reduction of metal ions for the growth of (Ni\({}_{x}\)Co\({}_{1\,-\,x}\))B NTs, synthesized using electroless plating (e) Scanning electron microscopy (SEM) image of NTs released on a Si wafer from a drop of the solution with suspended NTs. (f) Optical image of a NT contacted electrically between two conductive pads.
and the magnetostatic (MAG) parts of the reconstructed phase [Fig. 2 (g) and (h)]. The iso-lines of the MAG-cosine phase are shown on Fig. 2 (g), displaying the magnetic induction flux lines. Fig. 2 (h) shows the MIP and MAG phase profiles. The slope of the latter is estimated at \(0.11(4)\,\mathrm{rad/nm}\), which translates into an estimation of the magnetization of \(\mu_{0}M_{\mathrm{s}}=0.9(1)\,\mathrm{T}\), based on the slice thickness of \(80\,\mathrm{nm}\). Note that the MAG profile may indicate a slight decrease of magnetization (lower slope change) near the inner surface, consistent with the structural indication of lower Co content. However, this has not been seen uniformly on other profiles extracted at other parts of the slice. So, this could result from a local decrease of the thickness of the slice rather than from a composition change. Finally, it is worth noticing that the flux-closure state observed in such a slice cannot be extrapolated to a full tube as a ground state: the slicing strongly promotes azimuthal magnetization due to the short aspect ratio of the resulting tube, which takes the form of a ring here.
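As an arithmetic cross-check of the numbers quoted above, the sketch below converts the measured magnetic phase slope into an induction value, assuming the standard electron-holography relation \(\mathrm{d}\varphi/\mathrm{d}x=(e/\hbar)\,B\,t\) between the phase gradient, the in-plane induction \(B\) and the thickness \(t\) crossed by the beam (this relation is our assumption, not spelled out in the text).

```python
# Convert the measured magnetic phase slope into an induction value.
hbar = 1.0546e-34      # J s
e = 1.602e-19          # C

dphi_dx = 0.11e9       # rad/m (0.11 rad/nm, slope of the MAG phase profile)
t = 80e-9              # m (estimated slice thickness)

B = (hbar / e) * dphi_dx / t
print(f"mu0*Ms ~ {B:.2f} T")   # ~0.9 T, consistent with the value quoted in the text
```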
## III Magnetic and transport properties
### Magnetotransport measurements
To conduct transport measurements, NTs were transferred on highly-resistive silicon (Si) wafers (\(\sim 10^{6}\,\Omega\cdot\mathrm{m}\)), capped with natural oxide and pre-patterned with alignment marks. The surface of the wafer is examined by scanning electron microscopy (SEM) to locate suitable NTs with respect to the alignment marks. Next, one or a few NTs per cm\({}^{2}\) were contacted electrically as follows. First, a two-lead pattern is written in the resist (positive resists, LOR 3A of \(\sim 200\,\mathrm{nm}\) and S1805 of \(\sim 500\,\mathrm{nm}\)) using laser lithography. Second, the surface of the NTs is cleaned through in-situ ion-beam etching, to remove any oxide layer from the surface. A \(\mathrm{Ti}(15\,\mathrm{nm})/\mathrm{Au}(250\,\mathrm{nm})\) layer is then evaporated, followed by lift-off of the resist, which defines the conductive leads. The distance between two leads is \(14\,\mu\mathrm{m}\) in Fig. 1 (f). Details of the contacting process were already provided elsewhere [13].
Fig. 3 shows the geometry of the measuring setup, and the magnetotransport characterization of a NT with composition \(\mathrm{Ni}_{30}\mathrm{Co}_{70}\), tube diameter \(d=470\,\mathrm{nm}\) and wall thickness \(t_{0}=59\,\mathrm{nm}\). Transport measurements were conducted by applying a current (\(I_{\mathrm{DC}}=10\,\mu\mathrm{A}\)), and measuring the voltage across the same leads. This corresponds to a current density of about \(1.3\times 10^{8}\,\mathrm{A}\cdot\mathrm{m}^{-2}\), if assumed to be uniform across the tube.
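As a quick arithmetic check (a sketch using only the geometry quoted above), the current density follows from dividing \(I_{\mathrm{DC}}\) by the annular cross-section of the tube wall:

```python
import numpy as np

I_dc = 10e-6              # A
d, t = 470e-9, 59e-9      # outer diameter and wall thickness, m

r_out = d / 2.0
r_in = r_out - t
area = np.pi * (r_out ** 2 - r_in ** 2)   # annular cross-section of the wall

print(f"cross-section = {area:.2e} m^2")
print(f"current density j = {I_dc / area:.2e} A/m^2")   # ~1.3e8 A/m^2, as stated
```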
The resistance of the device at remanence is \(128.3\,\Omega\) at \(300\,\mathrm{K}\) and \(112.3\,\Omega\) at \(10\,\mathrm{K}\). This translates into resistivities \(\rho_{0}=1.5\times 10^{-6}\,\Omega\cdot\mathrm{m}\) at \(300\,\mathrm{K}\) and \(\rho_{0}=1.3\times 10^{-6}\,\Omega\cdot\mathrm{m}\) at \(10\,\mathrm{K}\), assuming absence of contact resistance and of voltage drop across the leads. These values are one order of magnitude higher compared to bulk NiCo[37]. This likely results from the high boron content and from the nanocrystalline nature of the material [Fig. 2 (a)], liable to give rise to inter-granular resistance at grain boundaries. This will be further supported by magnetoresistive measurements, reported below.
Magnetoresistance properties were investigated by applying an external magnetic field up to \(1\,\mathrm{T}\) along various directions. The geometry is sketched in Fig. 3 (a), with \(\theta_{H}\) (resp. \(\theta_{M}\)) the angle between the applied field (resp. magnetization) and the axis of the NT, which
Figure 2: TEM characterization of a NT. (a) STEM-HAADF view of a NT dispersed onto a grid, revealing a granular structure of the material. (b and c) Zoomed view in bright field and HAADF respectively of a one-wall-only end part, exhibiting the granular and intergranular structure discussed in the text. Insets display (c) the general view of the broken tube used for this single layer analysis and (d) higher magnification image in HAADF mode displaying the fine structure of the grains. (d) Penultimate step of the Focused Ion Beam preparation of a NT single slice. (e) EDX mapping of the slice, highlighting the presence of Ni and Co (green and blue, respectively). The dashed arrow indicates the extracted profiles. (f) EDX profiles of the signals accounting for cobalt and nickel, using the same color code as for (c). (g) Electron holography output, displaying \(\mathrm{MIP}\times\cos\left(5\cdot\mathrm{MAG}\right)\) (see text for details). Same dashed arrow as in (c), highlighting here the location for the phase profile. (h) Phase profiles for the MIP (brown) and MAG (red) components of the phase shift (see text for details).
is also the direction of the flowing current. The measurements [Fig. 3 (b)] are qualitatively similar to those already performed on various types of NTs displaying azimuthal magnetization[23; 30], which we analyze in the following. For magnetic field applied along the tube axis [black in Fig. 3 (b), with zoom in Fig. 3 (c)], saturation is reached at about \(20\,\mathrm{mT}\). The \(R(H)\) loop is qualitatively consistent with the picture of azimuthal magnetization at remanence already proven directly in the same tubes by magnetic imaging[31], and with the existence of a positive anisotropic magnetoresistance in these materials (AMR)[37; 38] as defined by:
\[R=R_{\perp}+(R_{\parallel}-R_{\perp})\cos^{2}\theta_{\mathrm{M}}, \tag{1}\]
\[\mathrm{AMR}=\frac{R_{\parallel}-R_{\perp}}{R_{\perp}}. \tag{2}\]
The magnetoresistance curve is slightly hysteretic at small fields, meaning that the direction of magnetization depends on the magnetic history. This implies that magnetization may not be perfectly azimuthal at remanence. To illustrate the rotation of the magnetization under application of the longitudinal field, it is convenient to display normalized \(M(H)\) loops obtained as \(\cos\theta_{\mathrm{M}}=\sqrt{\Delta R(H)/\Delta R_{\mathrm{AMR}}}\), with \(\Delta R(H)=R(H)-R_{0}\) and \(\Delta R_{\mathrm{AMR}}=R_{\mathrm{sat}}-R_{0}\), with the resistances \(R_{\mathrm{sat}}\) at saturation, and \(R_{0}\) the minimum resistance [Fig. 3 (c)]. The hysteresis now appears in a more usual fashion, with coercivity of about \(2\,\mathrm{mT}\).
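The reconstruction of the normalized loop from the raw \(R(H)\) data amounts to a few array operations. The sketch below is a minimal illustration on synthetic numbers (the resistance values are hypothetical; the relation \(\cos\theta_{\mathrm{M}}=\sqrt{\Delta R/\Delta R_{\mathrm{AMR}}}\) is the one used above). Note that this yields \(|\cos\theta_{\mathrm{M}}|\) only; the sign must be assigned from the field history when building the full loop of Fig. 3 (c).

```python
import numpy as np

def cos_theta_from_R(R, R0=None, R_sat=None):
    """Reconstruct |cos(theta_M)| from an R(H) branch via the AMR relation used in the text."""
    R = np.asarray(R, dtype=float)
    R0 = R.min() if R0 is None else R0           # minimum resistance (magnetization perpendicular to current)
    R_sat = R.max() if R_sat is None else R_sat  # resistance at axial saturation
    dR = np.clip(R - R0, 0.0, None)
    return np.sqrt(dR / (R_sat - R0))

# Hypothetical R(H) branch (ohms), for illustration only.
H = np.array([-30, -20, -10, -2, 0, 2, 10, 20, 30])   # mT
R = np.array([128.5, 128.5, 128.45, 128.32, 128.30, 128.32, 128.45, 128.5, 128.5])

for h, m in zip(H, cos_theta_from_R(R)):
    print(f"mu0*H = {h:4d} mT   |cos(theta_M)| = {m:.2f}")
```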
We now examine a magnetoresistance curve for a magnetic field applied across the tube, _i.e._, \(\theta_{H}=90^{\circ}\) [red line in Fig. 3 (b)]. Starting from remanence the resistance increases until \(\mu_{0}H_{\mathrm{ext}}\approx 65\,\mathrm{mT}\), and then continuously decreases up to \(1\,\mathrm{T}\). This bell-shaped response shares some features with previously-reported magnetoresistive curves of NTs with azimuthal magnetization[23; 30], however here with a much clearer dip at remanence. We understand the bell shape in the following way (Fig. 4). At remanence the value of resistance is very similar to that obtained with a longitudinal field, which points at sharing the same remanent state, with a largely azimuthal magnetization. When the transverse field is increased a transition to an onion state is expected[30]. This process is likely to explain the sizable hysteresis around this field. Indeed, the change of magnetization distribution cannot be achieved reversibly, and requires nucleation and motion of domain walls. At the transition field the head-to-head and tail-to-tail parts are not expected to be aligned along the applied field, which is moderate, but rather rotate along the axial direction to remain parallel to the local surfaces, and thereby keep the magnetostatic energy moderate. Magnetization in these parts is expected to be parallel to the electric current, which is consistent with the increase of resistance. This area with axial magnetization is expected to decrease upon increasing the field, ending in a NT mostly saturated transverse to its axis. Indeed, in this situation magnetization is perpendicular to the current everywhere, bringing resistance to a minimum.
Figure 3: (a) Optical image of a contacted NT with composition (Ni\({}_{30}\)Co\({}_{70}\))B, external diameter \(d=470\,\mathrm{nm}\), wall thickness \(t_{0}=59\,\mathrm{nm}\), and distance between the leads \(14\,\mu\mathrm{m}\). (b) Resistance versus applied field \(\mu_{0}H_{\mathrm{ext}}\) swept up-and-down in a four-quadrant fashion, applied either parallel (black) or perpendicular (red) to the NT axis. (c) Hysteresis loop of (Ni\({}_{30}\)Co\({}_{70}\))B tubes, displaying the longitudinal magnetization \(M(H)=\cos[\theta_{M}(H)]\) reconstructed from \(R(H)\) loops, _i.e._, azimuthal magnetization.
It is the difference between the maximum and minimum values on Fig. 3 (b), considering all directions of applied field, that defines most accurately the magnitude of AMR, based on Eq. (2). This sets the AMR ratio of the single (Ni\({}_{30}\)Co\({}_{70}\))B NT at \(\sim 0.15\,\%\) at 300 K and 0.25 % at 10 K. These figures are one order of magnitude lower than in both bulk CoNi and CoNiB alloys[37]. This is consistent with the high resistivity measured, understood as a resistance dominated by intergranular effects. Indeed, the latter should not give rise to magnetoresistance as long as magnetization is uniform along the current flow, which is largely expected here.
The same measurements have been made on individual NTs with concentration (Ni\({}_{50}\)Co\({}_{50}\))B and (Ni\({}_{20}\)Co\({}_{80}\))B. The behavior is qualitatively similar, with the quantitative analysis reported in the next section.
### Magnetic anisotropy
We now report on the determination of the strength of the azimuthal magnetic anisotropy of the NTs derived from their hysteresis loops as reconstructed in Fig. 3 (c), and then discuss it versus the geometry and composition of the NTs.
We first describe the protocol to determine the various quantities associated with the anisotropy. Owing to the moderate thickness of the NTs considered in the following, we assume that the direction of the magnetization may not vary significantly across the radius, and accordingly consider an effective value of volume density of magnetic anisotropy, \(K_{\rm eff}\). The volume density of magnetic energy of a NT is \(K_{\rm eff}\cos^{2}\theta_{M}\),
Figure 4: Sketch of the expected magnetization state under transverse applied magnetic field: (a) at remanence, fully azimuthal; (b) at intermediate field, in an onion state; (c) at large field, with asymptotically uniform magnetization.
Figure 5: Magnetic anisotropies: effective anisotropy volume density \(K_{\rm eff}\) and anisotropy field \(H_{\rm a}\) determined experimentally, versus: (a) tube thickness, (b) tube diameter, and (c) material composition. The tube thickness measured for each sample is indicated on the top \(x\) axis for (b) and (c) (see text). \(H_{\rm a}\) represents the extracted magnetic anisotropy field.
\(\theta_{M}\) being the polar angle of magnetization versus the tube axis. With this notation, a positive value for \(K_{\rm eff}\) means a longitudinal hard axis. Considering the local shape anisotropy that shall tend to restrict the magnetization direction within the shell, a positive \(K_{\rm eff}\) translates into an azimuthal easy axis. This effective anisotropy is, by definition, the area above the magnetization hysteresis loop considered along a hard-axis direction:
\[K_{\rm eff}=\mu_{0}\int_{0}^{M_{\rm s}}H(M)\,\mathrm{d}M. \tag{3}\]
To avoid a calculation bias induced by the hysteresis, even if moderate, we consider the unhysteretic curve by averaging the up and down H(M) curves.
The strength of the effective magnetic anisotropy may also be expressed in terms of the anisotropy field, defined as:
\[H_{\rm a}=\frac{2K_{\rm eff}}{\mu_{0}M_{\rm s}}. \tag{4}\]
Calculating these values requires knowledge of the magnetization of the material. This was done using magnetometry, with good agreement with the Slater-Pauling curve for CoNi alloys (see appendix).
Finally, for the sake of identifying the role of curvature in the anisotropy, it is important to remember that we expect two contributions to \(K_{\rm eff}\). The first contribution arises from the interaction with the lattice, which we will write \(K_{\rm mc}\) for magnetocrystalline anisotropy, be it a magnetocrystalline, magnetoelastic or interface anisotropy. The second contribution is that of the exchange energy (\(K_{\rm ex}\)) associated with azimuthal curling of the magnetization:
\[K_{\rm ex}=-\frac{A}{R_{0}^{2}}\;, \tag{5}\]
with \(R_{0}\) the average tube radius[39; 15; 8] and \(A\approx 10\,\mathrm{pJ}\cdot\mathrm{m}^{-1}\) the exchange stiffness. The minus sign reflects the fact that exchange favors axial uniform magnetization as the ground state. So, in the end the anisotropy arising from the lattice, _i.e._, the microstructure of the material, is \(K_{\rm mc}=K_{\rm eff}-K_{\rm ex}\).
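In practice, Eqs. (3)-(5) amount to one numerical integration of the (averaged) hard-axis loop followed by two algebraic steps. The sketch below is a minimal illustration on a synthetic loop (all numerical values are hypothetical, chosen only to land in the range of magnitudes discussed in the following).

```python
import numpy as np

mu0 = 4e-7 * np.pi

def anisotropy_from_loop(H, M, Ms, A=10e-12, R0=None):
    """K_eff (Eq. 3), H_a (Eq. 4) and, if R0 is given, K_ex (Eq. 5) and K_mc."""
    K_eff = mu0 * np.trapz(H, M)          # area above the M(H) loop, in J/m^3
    H_a = 2.0 * K_eff / (mu0 * Ms)        # anisotropy field, in A/m
    out = {"K_eff (J/m^3)": K_eff, "mu0*H_a (T)": mu0 * H_a}
    if R0 is not None:
        K_ex = -A / R0 ** 2               # exchange cost of azimuthal curling, Eq. (5)
        out.update({"K_ex (J/m^3)": K_ex, "K_mc (J/m^3)": K_eff - K_ex})
    return out

# Synthetic hard-axis branch: linear approach to saturation (illustrative only).
Ms = 7.2e5                        # A/m (mu0*Ms ~ 0.9 T)
mu0_Ha_true = 12e-3               # T, hypothetical anisotropy field
M = np.linspace(0.0, Ms, 500)
H = (mu0_Ha_true / mu0) * M / Ms  # H(M) of an ideal hard-axis loop

results = anisotropy_from_loop(H, M, Ms, R0=0.5 * (235e-9 + 176e-9))
for name, value in results.items():
    print(f"{name}: {value:.3g}")
```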
We now present and discuss the values of the azimuthal anisotropy. The orders of magnitude of \(K_{\rm eff}\) and \(\mu_{0}H_{\rm a}\) are \(5\,\mathrm{kJ}\cdot\mathrm{m}^{-3}\) and \(10\,\mathrm{mT}\), respectively. Fig. 5 displays the dependence of the volume density of azimuthal magnetic anisotropy, as well as that of the associated anisotropy field, versus the tube thickness, diameter and material composition. Note that at the synthesis stage we aim at reaching a nominal thickness by controlling the deposition time, however in practice the thickness may deviate from the target and needs to be determined by TEM. This explains why we cannot display the diameter and composition dependence fully independently from the thickness, which is indicated on the top \(x\) axis for (b) and (c), for each sample. Fig. 5(a) displays the anisotropy versus tube thickness dependence for fixed diameter \(d_{0}=470\,\mathrm{nm}\) and composition (Ni\({}_{30}\)Co\({}_{70}\))B, for different deposition times. The contribution of the exchange to \(K_{\rm eff}\) is negligible for this large diameter. The anisotropy tends to increase for larger thicknesses. This is understandable as strain, curvature and therefore the anisotropy of grains are expected to increase as the inner diameter of the tube decreases. More surprising is the presence of a plateau between 35 and \(45\,\mathrm{nm}\). Fig. 5(b) displays the anisotropy versus the tube diameter, for a thickness of about \(30\,\mathrm{nm}\) and composition (Ni\({}_{30}\)Co\({}_{70}\))B. The contribution of exchange is weak, except for the smaller diameters investigated here, _i.e._, \(150\,\mathrm{nm}\). Following the subtraction of the contribution of exchange, the part of anisotropy due to the lattice only, \(K_{\rm mc}\), does not show a clear variation with the curvature. This is surprising as curvature is a required ingredient to break the symmetry between the axial and azimuthal directions, and therefore induce azimuthal anisotropy. Last, Fig. 5(c) displays the anisotropy versus composition for a diameter \(470\,\mathrm{nm}\) and thickness of about \(30\,\mathrm{nm}\). Again, the contribution of exchange is negligible for this large diameter. As one possible underlying physical mechanism for anisotropy is strain and inverse magnetostriction, let us examine what is known about CoNi alloys. Both magnetostriction coefficients \(\lambda_{100}\) and \(\lambda_{111}\) of CoNi metallic single crystals increase with the Co concentration in the present range[40]. Regarding boron-containing alloys, data is available for metallic glasses[41], showing a maximum of magnetostriction around the composition Ni\({}_{50}\)Co\({}_{50}\). Both situations would be consistent with the smaller anisotropy of Ni-rich alloys, although one should remain cautious about the interpretation, as the fine details of the electroless material are not known (e.g., the exact amount of boron, and whether in the matrix or at the grain boundaries). The surface segregation of nickel and the enrichment of the core with Co (Fig. 2) may also affect the anisotropy.
## IV Conclusion
We have investigated the magnetoresistive properties and the strength of the magnetic anisotropy favoring the azimuthal direction of magnetization in electroless-plated (Ni\({}_{x}\)Co\({}_{1\,-\,x}\))B nanotubes, versus the nanotube diameter and thickness, and the composition of the alloy. The measured resistivity is one order of magnitude higher compared to that of bulk samples, while the anisotropic magnetoresistance is one order of magnitude lower, which we believe is related to the drop of voltage across grain boundaries in this nanocrystalline material. The strength of the azimuthal magnetic anisotropy is of about \(5\,\mathrm{kJ}/\mathrm{m}^{3}\). The anisotropy tends to increase with the tube thickness, and depends only weakly on its diameter. While no direct proof can be given about its microscopic origin, its variation with the material composition is consistent with the curvature-induced anisotropy of strain, combined with inverse magnetostriction.
## Acknowledgments
This project received support from the ANR-DFG C3DS project (ANR-18-CE92-0045, DFG-406700532). A CC-BY public copyright license has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission, in accordance with the grant's open access conditions[42]. We thank the Nanofab platform at Institut Neel, whose team and especially Bruno Fernandez provided technical support. M. C. S. and T. H. are grateful for valuable discussion with Jona Schuch (Surface Science Laboratory, TU Darmstadt). M. C. S. and W. E. thank Prof. Christina Trautmann and Dr. Maria Eugenia Toimil-Molares (Materials Research Group, GSI Helmholtzzentrum fur Schwerionenforschung) for their support during the ion irradiation experiments.
## Appendix A Resistivity measurements
The evaluation of the resistivity is based on the measured resistance, and requires knowledge of the cross-sectional area of the NT through which the current is flowing. Thus, uncertainties on both affect the inferred value for resistivity. The value of resistance may be affected by a number of effects, notably an interfacial resistance between the tube and the electrical leads. As regards area, in electroless plating, the tube thickness (\(t_{0}\)) depends on the duration of deposition time, on the concentration of the solution, and the diameter of the pore, the latter hindering diffusion. Thus it is important to measure directly the thickness of every batch, and in practice this results in some spread of values for the thickness[31; 34]. Figure 6 (a) provides an overview of the resistivity measurements on NTs with various thicknesses, diameters and composition, showing a spread in results, however not clearly correlated with any of these parameters. Fig. 6 (b) shows that resistivity decreases at low temperature, consistent with a metallic behavior. However, the decrease is moderate compared with a clean metal. This confirms our hypothesis of resistance dominated by inter-granular resistance due to boron-rich grain boundaries, expected to be only weakly dependent on temperature. This phenomenon is phenomenologically similar to the usual situation of grain boundary scattering in metals, which induces an offset in resistivity, however without affecting much its temperature dependence[43; 44].
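For concreteness, a minimal sketch of this conversion is given below (Python). The resistance, lead spacing and tube dimensions are hypothetical placeholders; any interfacial or grain-boundary contribution to the measured resistance is ignored here.

```python
import math

def tube_cross_section(outer_diameter_nm, thickness_nm):
    """Cross-sectional area (m^2) of a hollow cylinder with the given outer diameter and wall thickness."""
    r_out = 0.5 * outer_diameter_nm * 1e-9
    r_in = r_out - thickness_nm * 1e-9
    return math.pi * (r_out ** 2 - r_in ** 2)

def resistivity(resistance_ohm, lead_spacing_m, outer_diameter_nm, thickness_nm):
    """rho = R * A / L for a current flowing along the tube axis between two leads."""
    return resistance_ohm * tube_cross_section(outer_diameter_nm, thickness_nm) / lead_spacing_m

# Hypothetical example: a 200-Ohm segment of a 470-nm-diameter, 30-nm-thick tube, leads 5 um apart
rho = resistivity(resistance_ohm=200.0, lead_spacing_m=5e-6, outer_diameter_nm=470, thickness_nm=30)
print(f"resistivity = {rho * 1e8:.0f} uOhm.cm")
```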
## Appendix B Magnetization measurements
Magnetization has been inferred for all compositions reported here based on hysteresis loops of thin films of a given area, measured by vibrating sample magnetometry (Fig. 7). The variation is very similar to that expected from the linear variation of the Slater-Pauling curve [45] for pure CoNi alloys, _i.e._, with no boron. This hints at a rather low boron concentration in the material.
Figure 6: (a) Electrical resistivity of NTs versus their geometrical features and chemical compositions. (b) Resistivity as a function of temperature.
Figure 7: The experimental values of \(M_{\text{s}}\) as a function of tube composition (Ni\({}_{x}\)Co\({}_{1-x}\))B.
## Appendix C Anisotropy field confirmed from magnetometry
Hysteresis loops were performed with vibrating sample magnetometry on an array of (Ni\({}_{30}\)Co\({}_{70}\))B tubes still in the polycarbonate membrane, with magnetic field applied along the tube axis (_i.e._, perpendicular to the polymer foil). Compared with the situation of single tubes investigated via AMR, magnetizing all tubes along their axis requires paying the cost of the demagnetizing energy of the entire array of tubes, scaling with the magnetic filling factor. This translates into a contribution to the anisotropy field:
\[H_{\mathrm{a,tube-array}}=M_{\mathrm{s}}\times\rho_{\mathrm{d}}\times\pi(R_{0}^ {2}-R_{\mathrm{i}}^{2})\;. \tag{1}\]
\(\rho_{\mathrm{d}}=10^{8}\,\mathrm{cm}^{-2}\) is the areal density of pores, \(\pi(R_{0}^{2}-R_{\mathrm{i}}^{2})\) the cross-sectional area of the tube with parameters \(R_{0}=235\,\mathrm{nm}\) the outer radius and \(R_{\mathrm{i}}=205\,\mathrm{nm}\) the inner radius. The calculated \(H_{\mathrm{a,tube-array}}\) using Eq. (1) is 207 mT and \(H_{\mathrm{a,VSM}}\) = 186 mT. The magnitude of the anisotropy field of a single tube is then derived as: \(H_{\mathrm{a,single-tube}}=H_{\mathrm{a,VSM}}-H_{\mathrm{a,tube-array}}\). Its numerical value is \(\sim 20\,\mathrm{mT}\), which is quantitatively similar to \(H_{\mathrm{a}}=16.3\,\mathrm{mT}\) measured from the reconstructed M-H loop as shown in Fig. 3 (c). The latter is, however, more reliable, not requiring a subtraction between two large figures when tubes are interacting in the array.
## Appendix D Effect of annealing
It was previously shown that annealing tends to decrease the strength of magnetic anisotropy in these NTs, ultimately restoring axial magnetization[31]. Here, a batch of NTs was annealed at 180\({}^{\circ}\)C for 2 hours before contacting the tubes. We observe that \(H_{\mathrm{a}}\) and \(K_{\mathrm{eff}}\) decrease with annealing, as expected [Fig. 8]. This is observed for all concentrations of (Ni\({}_{x}\)Co\({}_{1-x}\))B NTs. Also, the resistivity decreases, which is consistent with grain growth and our hypothesis that resistance is dominated by grain boundaries.
## Appendix E Electroless deposition of (Ni\({}_{x}\)Co\({}_{1-x}\))B nanotubes
The synthesis of (Ni\({}_{x}\)Co\({}_{1-x}\))B NTs was performed according to a previously published procedure, with minor modifications [31; 34]. The sample fabrication is briefly described in the following paragraphs.
### Chemicals and methods
The following chemicals have been used, without further modification or purification: Tin(II) chloride dihydrate (_Sigma-Aldrich_, 98 %), trifluoroacetic acid (_Sigma-Aldrich_, 99 %), methanol (_PanReac AppliChem_, pure), palladium(II) chloride (_Aldrich_, 99 %), potassium chloride (_PanReac Applichem_, USP, Ph. Eur.), nickel(II) sulfate heptahydrate (_Acros Organics_, for analysis), cobalt(II) sulfate heptahydrate (_Sigma-Aldrich_, \(\geq\)98 %), trisodium citrate (_Alfa-Aesar_, 99 %), borane dimethylamine complex (DMAB, _Aldrich_, 97 %).
All aqueous solutions were prepared using purified water (_Milli-Q_, \(>\)18.2 M\(\Omega\)). Prior to use, all glassware was cleaned with boiling _aqua regia_, stored in an alkaline bath for multiple days and rinsed with copious amounts of deionized water.
### Template preparation
Polycarbonate (PC) foils with a thickness of 30 \(\mathrm{\mu m}\) (Pokalon, _Lofo High Tech Film GmbH_) were irradiated with swift heavy ions (Au\({}^{26+}\), 5.9 MeV/u, 1 \(\times\) 10\({}^{8}\) cm\({}^{-2}\)) using the _UNILAC_ linear accelerator facility at _GSI Helmholtzzentrum fur Schwerionenforschung GmbH_, Darmstadt, Germany [Fig. 1(a)]. Cylindrical pores were obtained by subsequent chemical etching in stirred, aqueous 6 M NaOH solution at 50 \({}^{\circ}\)C [Fig. 1(b)]. The duration of the etching process, which determines the diameter of the pores, was varied between 10 min and 30 min.
### Electroless deposition
To initiate the electroless plating reaction, catalytically-active Pd seeds are deposited on the PC template surface by a previously described, two-step sensitization and activation procedure [34]: Firstly, the membranes are submerged in a Sn(II)-containing solution [42 mM SnCl\({}_{2}\cdot\)2 H\({}_{2}\)O and 72 mM trifluoroacetic acid in methanol and water (1:1)] for 45 min. After washing with water, they are transferred into an aqueous Pd(II) solution (11.3 mM PdCl\({}_{2}\), 33.9 mM KCl) for 4 minutes. The two steps are repeated two more times, with the sensitization duration shortened to 15 min. Electroless plating was then conducted from a bath
Figure 8: Effect of annealing on anisotropies for (Ni\({}_{30}\)Co\({}_{70}\))B tubes. (a) \(H_{\mathrm{a}}\) and AMR (b) magnetic anisotropies as a function of annealing.
containing NiSO\({}_{4}\cdot 7\) H\({}_{2}\)O and CoSO\({}_{4}\cdot 7\) H\({}_{2}\)O as the metal-ion source (100 mM in total), disodium citrate as a chelating ligand (100 mM), as well as borane dimethylamine (DMAB) as reducer (100 mM). The Ni/Co ratio of the final (Ni\({}_{x}\)Co\({}_{1-x}\))B deposit was determined by the ratio of the respective metal-ions in the plating solution, while the wall-thickness of the tubes was controlled by the deposition time. Due to the faster plating speed of Ni-rich electrolytes (see Appendix E) the depositions for (Ni\({}_{0.5}\)Co\({}_{0.5}\))B and (Ni\({}_{0.8}\)Co\({}_{0.2}\))B were conducted at 4 \({}^{\circ}\)C, all others at room temperature (\(\sim 25\,^{\circ}\)C).
### Tuning the composition of (Ni\({}_{x}\)Co\({}_{1-x}\))B nanotubes
Due to the similar chemical behavior of Co\({}^{2+}\) and Ni\({}^{2+}\) ions, it is possible to deposit alloys of the respective metals from a single plating bath. In both cases, citrate has proven to be a suitable ligand and stabilizer, and both metals are catalytically active towards DMAB decomposition. This allows tuning the Ni/Co ratio of the deposit by simply adjusting the ratio of ions in solution. Comparing the deposition reactions of Ni-rich and Co-rich electrolytes with the same reactant concentrations, it can be observed that the deposition of the Ni-rich materials is considerably faster than that of their Co-rich counterparts. This can be attributed to the slightly more positive reduction potential of Ni (see Equations (E1) and (E2)) [46], as well as its higher catalytic activity towards DMAB decomposition [47, 48].
\[\text{Ni}^{2+}+2\,\text{e}^{-}\longrightarrow\text{Ni};\qquad-0.257\,\text{V vs. SHE}\] (E1) \[\text{Co}^{2+}+2\,\text{e}^{-}\longrightarrow\text{Co};\qquad-0.28\,\text{V vs. SHE}\] (E2)
In practice, this might cause problems for Ni-rich deposits, as high plating speeds can cause inhomogeneous thickness along the tubes or even lead to a blockage of the pore openings. In order to alleviate this effect, the Ni-rich depositions were conducted at lower temperatures, enabling a more controlled deposition. The inset in Fig. 9 shows the thickness of the deposit (i.e., the NT wall thickness) in relation to the plating time for a (Ni\({}_{0.3}\)Co\({}_{0.7}\))B deposit. Although in the time frame observed in our study the wall thickness appears to change linearly with time, it is expected that the plating reaction slows down after a while, due to the ongoing consumption of both metal salts and reducing agent.
The relation between the Co\({}^{2+}\)-content in the electrolyte and the Co-content in the final tubes is given in Fig. 9. Due to the electrochemical similarities between Co\({}^{2+}\) and Ni\({}^{2+}\), one might expect that the Co-content in the deposit either linearly follows the Co\({}^{2+}\)-content in the electrolyte, or that Ni is deposited predominantly due to its higher reduction potential and catalytic activity. However, it can be observed that Co is deposited preferentially, despite being the less noble of the two metals. This anomalous preferential deposition of Co has been observed before, both in electroless plating [47, 49, 34] as well as electroplating [50, 51]. One possible explanation for this phenomenon is the adsorption of (intermediate) Co-species on the deposited Ni, hindering further Ni-deposition [51].
The use of DMAB as a reducer leads to the incorporation of B into the deposit [47]. In fact, the material likely consists of a complex phase mixture of Ni and Co alloys with different Ni and Co borides featuring a nanocrystalline structure, which appears almost amorphous in X-ray diffraction experiments. [52, 47, 34] Depending on the plating bath composition and reaction parameters such as pH and temperature, deposits with vastly different B-contents can be realized. According to Richardson _et al._[53], the B-content can be tuned in a wide range by adjusting the pH-value of the plating bath, leading up to 45 \(\%\)at. B using a pH 7.5 electrolyte. Compared to our study, however, they use different additives as well as a much higher relative concentration of DMAB in the plating bath, which can lead to an increased B concentration [47]. Other studies that utilize similar bath chemistry to our approach, found lower B-contents in the range of 12 \(\%\) to 30 \(\%\)at., depending on the plating parameters [47, 49, 54]. As we based our synthesis on the recent study by Stano _et al._[31], using the same bath composition and reaction parameters, we expected the B-content to be in the range of 10 \(\%\) to 25 \(\%\)at. To get a better understanding of the B content in our samples, X-ray photo-electron spectroscopy (XPS) was performed on a typical Co-rich deposit (see appendix, section F). In the aforementioned study by Stano _et al._, the deposit also has been investigated structurally by TEM, highlighting grain sizes in the range of 10 nm separated by
Figure 9: Co-content of the final NiCoB NTs in relation to the Co\({}^{2+}\) content in the electrolyte, showing the preferential deposition of Co. Inset shows the variation of tube thickness with respect to deposition time for a (Ni\({}_{0.3}\)Co\({}_{0.7}\))B deposit.
1 nm to 2 nm thick transitional regions, presumably rich in lighter elements, such as O and B [31]. Based on the observed dimensions, it can be roughly estimated that these transitional regions make up between 30 % and 60 % of the total volume of the deposit [55], meaning they likely strongly influence the overall electrical and magnetic properties of the material.
## Appendix F X-ray photo-electron spectroscopy (XPS) analysis of electroless (Ni\({}_{x}\)Co\({}_{1-x}\))B
XPS was performed on a typical electroless Co-rich deposit, in order to investigate the chemical configuration as well as the B content of the material. Due to the surface sensitivity of the technique, three measurements were conducted with intermittent Ar sputtering for 10 s.
### Measurement parameters
All measurements were performed using a monochromatic X-ray source (Al K\(\alpha\)) with an excitation energy of 1486.6 eV at a _Thermo Fisher Scientific_ Escalab 250 spectrometer using a spot size of 650 \(\upmu\)m. Pass energies of 10 eV and step sizes of 0.05 eV with a dwell time of 50 ms per measurement point were used. Ar sputtering was performed inside the XPS measurement chamber using a _Thermo Fisher Scientific_ EX05 ion gun. The acceleration voltage and spot size were set to 3 keV and \(3\times 3\) mm, respectively. All spectra were calibrated to the Fermi level of silver (0 eV), the binding energy of the Au4f\({}_{7/2}\) emission line (84.0 eV), the Ag3d\({}_{5/2}\) emission line (368.26 eV) and the Cu2p\({}_{3/2}\) emission line (932.67 eV). Background subtraction and fitting were performed using _CasaXPS_ Version 2.3.16Dev52. A Shirley background was applied for all emission lines. For Co2p and Ni2p, no peak fitting was performed; the background-subtracted spectra were simply integrated to determine the peak areas. For fitting the B1s emission lines, GL(30) line shapes were used. To normalize the peak areas of the different elements, the areas were divided by the respective Scofield sensitivity factors, the energy-dependent spectrometer transmission function and KE\({}^{0.6}\), with KE being the kinetic energy of photo-electrons, to account for the energy-dependent mean free path.
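The normalization described above can be summarized by the short sketch below (Python). All peak areas, sensitivity factors, transmission values and kinetic energies in the example are purely illustrative placeholders, not the measured quantities of this study.

```python
def normalized_area(peak_area, scofield_factor, transmission, kinetic_energy_ev):
    """Normalize a background-subtracted XPS peak area by the Scofield sensitivity factor,
    the spectrometer transmission at that kinetic energy, and KE**0.6 (mean-free-path correction)."""
    return peak_area / (scofield_factor * transmission * kinetic_energy_ev ** 0.6)

def atomic_percent(peaks):
    """peaks: dict element -> (area, Scofield factor, transmission, KE in eV). Returns at.% per element."""
    norm = {el: normalized_area(*vals) for el, vals in peaks.items()}
    total = sum(norm.values())
    return {el: 100.0 * v / total for el, v in norm.items()}

# Purely illustrative input values (placeholders, not the areas measured in this work):
example = {
    "Co": (12000.0, 19.2, 1.0, 705.0),   # Co2p
    "Ni": (1400.0, 22.2, 1.0, 630.0),    # Ni2p
    "B":  (400.0, 1.75, 1.0, 1298.0),    # B1s
}
print(atomic_percent(example))
```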
### Results of XPS analysis
The Co2p and Ni2p lines suggest the presence of metallic and oxidic species of both elements. As shown in
Table 1: Concentrations of Co, Ni and B in the investigated Co-rich deposit, as determined by XPS.

| Ar sputtering / s | Co / %at. | Ni / %at. | B / %at. |
| --- | --- | --- | --- |
| 0 | 74.08 | 8.32 | 17.60 |
| 10 | 71.71 | 8.75 | 19.54 |
| 20 | 71.67 | 8.69 | 19.64 |
Figure 10: Detailed XPS spectra of the (a) Co2p, (b) Ni2p and (c) B1s regions after 0, 10 and 20 s of Ar sputtering. The data suggest that the deposit consists of a complex phase mixture of superficially oxidized Ni and Co, Ni and Co borides as well as boron oxides. B contents of 17.60 %at., 19.54 %at., and 19.64 %at. can be determined after 0 s, 10 s, and 20 s of Ar sputtering, respectively. The subtracted backgrounds for composition analysis are shown as dashed lines, peak fitting is performed for the B1s line (c), depicted as darker solid lines.
Fig. 10 (a) the Co2p\({}_{3/2}\) line is divided into multiple peaks. Here, the peak at 778.3 eV is attributed to metallic Co, whereas peaks in the range from 780 to 790 eV suggest the presence of Co oxides. In the case of Ni (Fig. 10 (b)), a similar behavior can be observed, with a metallic peak at 852.9 eV and oxidic contributions from 855 to 860 eV. The position of the oxidic peaks suggests that the dominant species in this case is likely Ni(OH)\({}_{2}\)[56]. The 2p\({}_{1/2}\) lines of both elements further corroborate the coexistence of metallic and oxidic species. As the ratio between metallic and oxidic contributions shifts towards the former with increasing sputter time, it can be assumed that the metal oxides form superficially after synthesis due to the reaction with atmospheric oxygen and moisture. This agrees with the previously discussed findings from EELS analysis, showing the absence of metal oxides in the bulk material. It is worth noting, however, that the Ar sputtering could also partially contribute to the reduction of both Co and Ni oxides. The B1s line [Fig. 10 (c)] is separated into two peaks, clearly hinting at the presence of two distinct B species. The peak at higher binding energies (around 192 eV) can be attributed to boron oxides (in particular B\({}_{2}\)O\({}_{3}\)), while the peak at around 188 eV indicates the presence of Co and Ni borides [57; 58; 59]. This dichotomy of B species is commonly observed in this type of material, where the B oxides likely are a product of NiB and CoB oxidation [57; 60]. Since independent fitting of the metallic and oxidic Ni2p and Co2p peaks is challenging, only a Shirley background subtraction was performed and the resulting spectra were then integrated to determine the total peak areas. The B content amounts to 17.60 %at., 19.54 %at., and 19.64 %at. after 0 s, 10 s, and 20 s of Ar sputtering, respectively (see Table. 1). This lies well within the range of B concentrations reported in electroless CoB and NiB alloys fabricated using citrate and DMAB as stabilizer and reducer, respectively [47; 54; 49].
|
2308.10176 | Schrödinger oscillators in a deformed point-like global monopole
spacetime and a Wu-Yang magnetic monopole: position-dependent mass
correspondence and isospectrality | We show that a specific transformation/deformation in a point-like global
monopole (PGM) spacetime background would yield an effective position-dependent
mass (PDM) Schr\"{o}dinger equation (i.e., a von Roos PDM Schr\"{o}dinger
equation). We discuss PDM Schr\"{o}dinger oscillators in a PGM spacetime in the
presence of a Wu-Yang magnetic monopole. Within our transformed/deformed global
monopole spacetime, we show that all PDM Schr\"{o}dinger oscillators admit
isospectrality and invariance with the constant mass Schr\"{o}dinger
oscillators in the regular global monopole spacetime in the presence of a
Wu-Yang magnetic monopole. The exclusive dependence of the thermodynamical
partition function on the energy eigenvalues manifestly suggests that the
Schr\"{o}dinger oscillators and the PDM Schr\"{o}dinger oscillators share the
same thermodynamical properties as mandated by their isospectrality. Moreover,
we discuss the hard-wall effect on the energy levels of the PDM Schr\"{o}dinger
oscillators in a PGM spacetime without and with a Wu-Yang magnetic monopole.
Drastic energy levels' shift-ups are observed as a consequence of such
hard-wall effect. | Omar Mustafa | 2023-08-20T06:51:22Z | http://arxiv.org/abs/2308.10176v1 | Schrodinger oscillators in a deformed point-like global monopole spacetime and a Wu-Yang magnetic monopole: position-dependent mass correspondence and isospectrality.
###### Abstract
**Abstract:** We show that a specific transformation/deformation in a point-like global monopole (PGM) spacetime background would yield an effective position-dependent mass (PDM) Schrodinger equation (i.e., a von Roos PDM Schrodinger equation). We discuss PDM Schrodinger oscillators in a PGM spacetime in the presence of a Wu-Yang magnetic monopole. Within our transformed/deformed global monopole spacetime, we show that all PDM Schrodinger oscillators admit isospectrality and invariance with the constant mass Schrodinger oscillators in the regular global monopole spacetime in the presence of a Wu-Yang magnetic monopole. The exclusive dependence of the thermodynamical partition function on the energy eigenvalues manifestly suggests that the Schrodinger oscillators and the PDM Schrodinger oscillators share the same thermodynamical properties as mandated by their isospectrality. Moreover, we discuss the hard-wall effect on the energy levels of the PDM Schrodinger oscillators in a PGM spacetime without and with a Wu-Yang magnetic monopole. Drastic energy levels' shift-ups are observed as a consequence of such hard-wall effect.
**PACS** numbers: 05.45.-a, 03.50.Kk, 03.65.-w
**Keywords:** PDM Schrodinger oscillators, Point-like global monopole, Wu-Yang magnetic monopole, isospectrality and invariance, hard-wall effect.
## I Introduction
Various kinds of topological defects depend on the topology of the vacuum manifold and are formed by the phase transition in the early universe [1; 2; 3]. Among such topological defects are the cosmic string [4; 5; 6; 3; 7], domain walls [2; 3], and global monopole [8]. Cosmic strings and global monopoles are known to be topological defects that do not introduce gravitational interactions but they rather modify the geometry of spacetime [4; 7; 8; 9]. Global monopoles are formed as a consequence of spontaneous global \(O(3)\) symmetry breakdown to \(U\left(1\right)\) and are similar to elementary particles (with their energy mostly concentrated near the monopole core) [8]. They are spherically symmetric topological defects that admit the general static metric
\[ds^{2}=-B\left(r\right)\,dt^{2}+A\left(r\right)\,dr^{2}+r^{2}\left(d\theta^{2 }+\sin^{2}\theta\,d\varphi^{2}\right). \tag{1}\]
Barriola and Vilenkin [8] have reported that
\[B\left(r\right)=A\left(r\right)^{-1}=1-8\pi G\eta^{2}-\frac{2GM}{r}, \tag{2}\]
where \(M\) is a constant of integration and in flat space \(M\sim M_{\text{\it core}}\) ( \(M_{\text{\it core}}\) is the mass of the monopole core). By neglecting the mass term and rescaling the variables \(r\) and \(t\)[8], one may rewrite the global monopole metric as
\[ds^{2}=-dt^{2}+\frac{1}{\alpha^{2}}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2} \theta\,d\varphi^{2}\right), \tag{3}\]
where \(0<\alpha^{2}=1-8\pi G\eta^{2}\leq 1\), \(\alpha\) is a global monopole parameter that depends on the energy scale \(\eta\), \(G\) is the gravitational constant, and \(\alpha=1\) corresponds to flat Minkowski spacetime [8; 9; 10; 11]. Barriola and Vilenkin [8] have shown that the monopole, effectively, exerts no gravitational force. The space around and outside the monopole has a solid deficit angle that deflects all light. This has motivated several studies, among which are vacuum polarization effects in the presence of a Wu-Yang [12] magnetic monopole [14], a gravitating magnetic monopole [15], Dirac and Klein-Gordon (KG) oscillators [16], Schrodinger oscillators [10], KG particles with a dyon, magnetic flux and scalar potential [9], bosons in an Aharonov-Bohm flux field and a Coulomb potential [20], Schrodinger particles in a Kratzer potential [21], Schrodinger particles in a Hulthen potential [22], and scattering by a monopole [23]. In general, the influence of topological defects in spacetime on the spectroscopy of quantum mechanical systems (be it through the introduction of gravitational field interactions or merely a modification of spacetimes) has been a subject of research attention over the years. In relativistic quantum mechanics, for example, the harmonic oscillator has been studied in the context of the Dirac and Klein-Gordon (KG) equations [16; 17; 18; 19; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] in different spacetime backgrounds.
On the other hand, the effective position-dependent mass (PDM) Schrodinger equation introduced by von Roos [39] finds its applications in nuclear physics, nanophysics, semiconductors, etc. [39; 40; 41; 42]. The von Roos PDM kinetic energy operator [39] (in \(\hbar=2m=1\) units) is given by
\[\hat{T}=-\frac{1}{2}\left[m\left(x\right)^{j}\,\partial_{x}\,m\left(x\right)^ {k}\,\partial_{x}\,m\left(x\right)^{l}+m\left(x\right)^{l}\,\partial_{x}\,m \left(x\right)^{k}\,\partial_{x}\,m\left(x\right)^{j}\right], \tag{4}\]
with an effective PDM \(m\left(x\right)=mf\left(x\right)\), and \(m\) is the mass of Schrodinger particle. However, the continuity conditions at the abrupt heterojunction suggest that \(j=l\)[40; 41; 42], where \(j,k,l\) are called the ordering ambiguity parameters that satisfy the von Roos constraint \(j+k+l=-1\)[39]. Recently, it has been shown that under some coordinate deformation/transformation [43] the PDM kinetic energy operator collapses into
\[\hat{T}=-m\left(x\right)^{-1/4}\,\partial_{x}\,m\left(x\right)^{-1/2}\, \partial_{x}\,m\left(x\right)^{-1/4}, \tag{5}\]
where \(j=l=-1/4\) and \(k=-1/2\) (known in the literature as Mustafa-Mazharimousavi's ordering [43; 44; 45]). Inspired by Khlevniuk and Tymchyshyn's [46] observation that a point mass moving within curved coordinates/space transforms into a position-dependent mass in Euclidean coordinates/space, we, hereby, introduce a deformation/transformation of the global monopole spacetime metric (3) and show that the corresponding Schrodinger equation transforms into a one-dimensional von Roos [39] PDM-Schrodinger equation. We proceed, under such settings, and discuss the corresponding effects on the spectroscopic structure of the PDM Schrodinger oscillators, including the Wu-Yang magnetic monopole and hard-wall effects.
The organization of our manuscript is as follows. In section 2, we show that a deformation/transformation in the global monopole spacetime metric (3) would yield an effective position-dependent mass (PDM) Schrodinger equation.
We start with Schrodinger particles in the background of a deformed/transformed global monopole spacetime. We then connect our findings with the von Roos [39] PDM Schrodinger equation. We discuss PDM Schrodinger oscillators in a global monopole spacetime, in section 3. We consider, in section 4, the PDM Schrodinger oscillators in a global monopole spacetime in the presence of a Wu-Yang magnetic monopole [12]. Within our deformed/transformed global monopole spacetime recipe, we show that all our PDM Schrodinger oscillators admit isospectrality and invariance with the Schrodinger oscillators in the regular global monopole spacetime in the presence of a Wu-Yang magnetic monopole. Nevertheless, the exclusive dependence of the thermodynamical partition function on the energy eigenvalues manifestly suggests that the Schrodinger oscillators and the PDM Schrodinger oscillators have the same thermodynamical properties as mandated by their isospectrality. We, therefore, report their thermodynamical properties (e.g., [47; 48; 49; 50; 51; 52]), in section 5. Such properties are, in fact, shared by both Schrodinger oscillators and PDM Schrodinger oscillators in a global monopole spacetime without and with a Wu-Yang magnetic monopole. In section 6, we discuss the hard-wall effect on the energy levels of PDM Schrodinger oscillators in a global monopole spacetime without and with a Wu-Yang magnetic monopole. The hard-wall confinement is studied by Bakke for a Landau-Aharonov-Casher system [53] and for Dirac neutral particles [54], by Castro [55] for scalar bosons, and by Vitoria and Bakke [56] for the rotating effects on the scalar field in spacetime with linear topological defects, to mention a few. Our concluding remarks are given in section 7. To the best of our knowledge, such a study has not been carried out elsewhere.
## II Schrodinger particles in the background of a deformed/transformed global monopole spacetime
Let us consider Schrodinger particles interacting with a point-like global monopole (PGM) with a spacetime metric given by (3) and subjected to a point canonical transformation (PCT) in the form of
\[r=\int\sqrt{f\left(\rho\right)}d\rho=\sqrt{q\left(\rho\right)}\rho\Leftrightarrow \sqrt{f\left(\rho\right)}=\sqrt{q\left(\rho\right)}\left[1+\frac{q^{\prime} \left(\rho\right)}{2q\left(\rho\right)}\rho\right]. \tag{6}\]
Then the PGM metric (3) transforms into
\[ds^{2}=-dt^{2}+\frac{f\left(\rho\right)}{\alpha^{2}}d\rho^{2}+q\left(\rho \right)\,\rho^{2}\left[d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right], \tag{7}\]
where \(q\left(\rho\right)\) and \(f\left(\rho\right)\) are positive-valued scalar multipliers and \(q\left(\rho\right)=1\Rightarrow f\left(\rho\right)=1\) (i.e., constant mass settings) recovers the PGM metric (3). Consequently, the corresponding deformed/transformed metric tensor is
\[g_{ij}=\left(\begin{array}{ccc}\frac{f\left(\rho\right)}{\alpha^{2}}&0&0\\ 0&q\left(\rho\right)\,\rho^{2}&0\\ 0&0&q\left(\rho\right)\,\rho^{2}\sin^{2}\theta\end{array}\right);\,\,i,j= \rho,\theta,\varphi, \tag{8}\]
to imply
\[\det\left(g_{ij}\right)=g=\frac{f\left(\rho\right)}{\alpha^{2}}q\left(\rho \right)^{2}\,\rho^{4}\sin^{2}\theta,\]
\[g^{ij}=\left(\begin{array}{ccc}\frac{\alpha^{2}}{f\left(\rho\right)}&0&0\\ 0&\frac{1}{q\left(\rho\right)\rho^{2}}&0\\ 0&0&\frac{1}{q\left(\rho\right)\rho^{2}\sin^{2}\theta}\end{array}\right). \tag{9}\]
Then, the Schrodinger equation
\[\left\{\left(-\frac{\hbar^{2}}{2m_{\circ}}\frac{1}{\sqrt{g}}\partial_{i}\sqrt{ g}g^{ij}\partial_{j}\right)+V\left(\rho,t\right)\right\}\Psi\left(\rho,t\right)=i \hbar\frac{\partial}{\partial t}\Psi\left(\rho,t\right), \tag{10}\]
would, with \(V\left(\rho,t\right)=V\left(\rho\left(r\right)\right)\) and \(\Psi\left(\rho,t\right)=e^{-iEt/\hbar}\psi\left(\rho\right)Y_{\ell m}\left( \theta,\varphi\right)\), yield
\[\left\{\frac{\hbar^{2}}{2m_{\circ}}\left(-\frac{1}{q\left(\rho\right)\,\sqrt {f\left(\rho\right)}\,\rho^{2}}\,\partial_{\rho}\left(\frac{q\left(\rho \right)\,\rho^{2}}{\sqrt{f\left(\rho\right)}}\,\partial_{\rho}\right)+\frac{ \ell\left(\ell+1\right)}{\alpha^{2}q\left(\rho\right)\,\rho^{2}}\right)+\frac {1}{\alpha^{2}}V\left(\rho\left(r\right)\right)\right\}\psi\left(\rho\right) =\frac{1}{\alpha^{2}}E\psi\left(\rho\right), \tag{11}\]
where \(Y_{\ell m}\left(\theta,\varphi\right)\) are the spherical harmonics, \(\ell\) is the angular momentum quantum number, and \(m\) is the magnetic quantum number. In a straightforward manner, equation (11) along with our PCT in (6), is transformed into
\[\left\{\frac{\hbar^{2}}{2m_{\circ}}\left(-\frac{1}{r^{2}}\,\partial_{r}\,r^{ 2}\partial_{r}+\frac{\tilde{\ell}\left(\tilde{\ell}+1\right)}{r^{2}}\right)+ \frac{1}{\alpha^{2}}V\left(r\left(\rho\right)\right)\right\}\psi\left(r\left( \rho\right)\right)=\mathcal{E}\psi\left(r\left(\rho\right)\right) \tag{12}\]
to imply (with \(\psi\left(r\right)=R\left(r\right)/r\))
\[\left[\frac{\hbar^{2}}{2m_{\circ}}\left(-\partial_{r}^{2}+\frac{\tilde{\ell }\left(\tilde{\ell}+1\right)}{r^{2}}\right)+\frac{1}{\alpha^{2}}V\left(r\left( \rho\right)\right)\right]R\left(r\right)=\mathcal{E}R\left(r\right), \tag{13}\]
where \(\mathcal{E}=E/\alpha^{2}\), and
\[\tilde{\ell}\left(\tilde{\ell}+1\right)=\frac{\ell\left(\ell+1\right)}{ \alpha^{2}}\Longrightarrow\tilde{\ell}=-\frac{1}{2}+\frac{\sqrt{\alpha^{2}+4 \ell\left(\ell+1\right)}}{2\alpha}\]
(this would retrieve the regular angular momentum quantum number \(\ell\) for a flat Minkowski spacetime at \(\alpha=1\)). Moreover, the two quantum mechanical systems in (11) and (12) are isospectral and invariant. That is, knowing the solution of one of them would immediately yield the solution of the other. Yet they both share the same energies.
### Deformed/transformed PGM spacetime metric and position-dependent mass connection
Let us use the substitution of
\[R\left(r\right)=R\left(r\left(\rho\right)\right)=f\left(\rho\right)^{-1/4}\phi \left(\rho\right) \tag{14}\]
in (13) to obtain, with (6) and \(\partial_{r}R\left(r\right)=f\left(\rho\right)^{-1/2}\partial_{\rho}\left(f \left(\rho\right)^{-1/4}\phi\left(\rho\right)\right)\),
\[\left\{-\frac{\hbar^{2}}{2m}f\left(\rho\right)^{-1/2}\partial_{\rho}f\left( \rho\right)^{-1/2}\partial_{\rho}+\frac{\hbar^{2}}{2m}\frac{\tilde{\ell} \left(\tilde{\ell}+1\right)}{q\left(\rho\right)\,\rho^{2}}+\frac{1}{\alpha^{2} }V\left(\rho\right)\right\}f\left(\rho\right)^{-1/4}\phi\left(\rho\right)= \mathcal{E}f\left(\rho\right)^{-1/4}\phi\left(\rho\right). \tag{15}\]
We now multiply this equation, from the left, by \(f\left(\rho\right)^{1/4}\) to obtain
\[\left\{-\frac{\hbar^{2}}{2m}f\left(\rho\right)^{-1/4}\partial_{\rho}f\left( \rho\right)^{-1/2}\partial_{\rho}f\left(\rho\right)^{-1/4}+\tilde{V}\left( \rho\right)\right\}\phi\left(\rho\right)=\mathcal{E}\,\phi\left(\rho\right). \tag{16}\]
Where
\[\tilde{V}\left(\rho\right)=\frac{\hbar^{2}}{2m}\frac{\tilde{\ell}\left(\tilde{ \ell}+1\right)}{q\left(\rho\right)\,\rho^{2}}\,+\frac{1}{\alpha^{2}}V\left( \rho\right), \tag{17}\]
and consequently the effective kinetic energy operator reads
\[\hat{T}=-\frac{\hbar^{2}}{2m}f\left(\rho\right)^{-1/4}\partial_{\rho}f\left( \rho\right)^{-1/2}\partial_{\rho}f\left(\rho\right)^{-1/4}. \tag{18}\]
Such a kinetic energy operator belongs, with \(m\left(\rho\right)=mf\left(\rho\right)\) (hence the notion of position-dependent mass is, metaphorically speaking, introduced in the process), to the set of von Roos [39] PDM kinetic energy operators
\[\tilde{T}_{vR}=-\frac{\hbar^{2}}{4}\left[m\left(\rho\right)^{j}\partial_{\rho}m\left(\rho\right)^{k}\,\partial_{\rho}m\left(\rho\right)^{l}+m\left(\rho\right)^{l}\partial_{\rho}m\left(\rho\right)^{k}\,\partial_{\rho}m\left(\rho\right)^{j}\right], \tag{19}\]
where \(j=l\) (which is physically acceptable to secure the continuity conditions at the abrupt heterojunction in condensed matter physics) and \(j+k+l=-1\) (where \(j,k,l\) are called ordering ambiguity parameters). In fact, such a point canonical transformation makes the notion _"position-dependent mass"_ metaphorically unavoidable in the process. On the other hand, the parametric ordering \(j=l=-1/4\) and \(k=-1/2\) in (18) is known in the literature as Mustafa and Mazharimousavi's ordering [43]. Yet, in a straightforward manner, one may show that the PDM momentum operator [44; 45]
\[\mathbf{\hat{p}}\left(\rho\right)=-i\left(\nabla-\frac{\nabla f\left(\rho \right)}{4f\left(\rho\right)}\right)\Longleftrightarrow p_{\rho}=-i\left( \partial_{\rho}-\frac{f^{\prime}\left(\rho\right)}{4f\left(\rho\right)} \right);\;m\left(\rho\right)=mf\left(\rho\right), \tag{20}\]
in
\[\left\{\frac{1}{2m}\left(\frac{\mathbf{\hat{p}}\left(\rho\right)}{\sqrt{f \left(\rho\right)}}\right)^{2}+V\left(\rho\right)\right\}\phi\left(\rho\right) =\mathcal{E}\,\phi\left(\rho\right), \tag{21}\]
would yield (16) with (17) in a flat Minkowski spacetime at \(\alpha=1\). Moreover, the two systems (13) and (16) are isospectral, as they share the same energy levels, and are therefore invariant. In what follows we shall use \(\hbar=2m=1\) units and discuss some illustrative examples.
## III PDM Schrodinger oscillators in a global monopole spacetime background
Let us consider \(V\left(r\left(\rho\right)\right)=\omega^{2}r^{2}\) in (13) to obtain
\[\left[-\partial_{r}^{2}+\frac{\tilde{\ell}\left(\tilde{\ell}+1\right)}{r^{2}} +\tilde{\omega}^{2}r^{2}\right]R\left(r\right)=\mathcal{E}R\left(r\right), \tag{22}\]
where \(\tilde{\omega}=\omega/\alpha\) and \(\mathcal{E}=E/\alpha^{2}\). This is the radial spherically symmetric Schrodinger oscillator equation that admits an exact textbook solution in the form of
\[R\left(r\right)\sim r^{\tilde{\ell}+1}\exp\left(-\frac{\tilde{\omega}r^{2}}{2 }\right)\,_{1}F_{1}\left(\frac{\tilde{\ell}}{2}+\frac{3}{4}-\frac{\mathcal{E}} {4\tilde{\omega}},\tilde{\ell}+\frac{3}{2},\tilde{\omega}r^{2}\right), \tag{23}\]
for the radial part, which is to be finite and square integrable through the condition that the confluent hypergeometric series is truncated into a polynomial of order \(n_{r}=0,1,2,\cdots\). In this case,
\[\frac{\tilde{\ell}}{2}+\frac{3}{4}-\frac{\mathcal{E}}{4\tilde{\omega}}=-n_{r} \Rightarrow\mathcal{E}=2\tilde{\omega}\left(2n_{r}+\tilde{\ell}+\frac{3}{2} \right)\Rightarrow E=2\alpha\omega\left(2n_{r}+\frac{\sqrt{\alpha^{2}+4\ell \left(\ell+1\right)}}{2\alpha}+1\right) \tag{24}\]
for the energies (which is in exact accord with the result reported by Vitoria and Belich in Eq. (11) of [10], with \(\omega=\omega_{VB}/2\)), and
\[R\left(r\right)\sim r^{\tilde{\ell}+1}\exp\left(-\frac{\tilde{\omega}r^{2}}{2} \right)\,L_{n_{r}}^{\tilde{\ell}+1/2}\left(\tilde{\omega}r^{2} \right)\Rightarrow\Psi\left(r,\theta,\varphi\right)=\mathcal{N}_{n_{r},\ell} \,r^{\tilde{\ell}}\exp\left(-\frac{\tilde{\omega}r^{2}}{2}\right)\,L_{n_{r}}^{ \tilde{\ell}+1/2}\left(\tilde{\omega}r^{2}\right)Y_{\ell m}\left( \theta,\varphi\right). \tag{25}\]
for the radial part of the wave functions, where \(\,L_{n_{r}}^{\tilde{\ell}+1/2}\left(\tilde{\omega}r^{2}\right)\) are the generalized Laguerre polynomials. Hereby, it should be noted that this quantum mechanical system is isospectral and invariant with the PDM one
\[\left\{-f\left(\rho\right)^{-1/4}\partial_{\rho}f\left(\rho\right)^{-1/2} \partial_{\rho}f\left(\rho\right)^{-1/4}+\frac{\tilde{\ell}\left(\tilde{\ell} +1\right)}{q\left(\rho\right)\,\rho^{2}}\,+\tilde{\omega}^{2}q\left(\rho \right)\,\rho^{2}\right\}\phi\left(\rho\right)=\mathcal{E}\,\phi\left(\rho \right), \tag{26}\]
where \(f\left(\rho\right)\) and \(q\left(\rho\right)\) are correlated through (6), provided that \(R\left(r\left(\rho\right)\right)\) is given by (14). For example, for a power-law-like dimensionless radial deformation \(q\left(\rho\right)=A\rho^{\sigma}\), we obtain \(f\left(\rho\right)=A\left(1+\sigma/2\right)^{2}\rho^{\sigma}\); \(\sigma\neq 0,-2\), and the corresponding PDM Schrodinger oscillator system reads
\[\left\{-\left(\tilde{A}\rho^{\sigma}\right)^{-1/4}\partial_{\rho}\left(\tilde{A}\rho^{\sigma}\right)^{-1/2}\partial_{\rho}\,\left(\tilde{A}\rho^{ \sigma}\right)^{-1/4}+\frac{\tilde{\ell}\left(\tilde{\ell}+1\right)}{A\rho^{ \sigma+2}}\,+\tilde{\omega}^{2}A\rho^{\sigma+2}\right\}\phi\left(\rho\right)= \mathcal{E}\,\phi\left(\rho\right), \tag{27}\]
where \(\tilde{A}=A\left(1+\sigma/2\right)^{2}\). Such a system represents just one of many examples of PDM Schrodinger oscillators interacting with a PGM that share the same eigenvalues (24) with those of (22).
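As a quick consistency check of the correlation (6) between \(q\left(\rho\right)\) and \(f\left(\rho\right)\) for this power-law deformation, the following sketch (Python; SymPy is assumed to be available) differentiates \(r=\sqrt{q\left(\rho\right)}\,\rho\) and confirms that \(f\left(\rho\right)=\left(dr/d\rho\right)^{2}=A\left(1+\sigma/2\right)^{2}\rho^{\sigma}\).

```python
import sympy as sp

rho, A, sigma = sp.symbols('rho A sigma', positive=True)

# Power-law deformation q(rho) and the transformed radial coordinate r = sqrt(q(rho)) * rho, Eq. (6)
q = A * rho ** sigma
r = sp.sqrt(q) * rho

# Eq. (6) states sqrt(f(rho)) = dr/drho, hence f(rho) = (dr/drho)**2
f = sp.simplify(sp.diff(r, rho) ** 2)

print(f)  # f(rho); equals A*(1 + sigma/2)**2 * rho**sigma up to rearrangement
print(sp.simplify(f - A * (1 + sigma / 2) ** 2 * rho ** sigma))  # 0, confirming the expression in the text
```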
## IV PDM Schrodinger oscillators in a PGM spacetime and a Wu-Yang magnetic monopole
In this section, we discuss PDM Schrodinger particles in a PGM spacetime and a Wu-Yang magnetic monopole. Wu and Yang [12] have proposed a magnetic monopole that is free of strings of singularities around it [9; 12; 14]. They have defined the vector potential \(A_{\mu}\) in two regions, \(R_{A}\) and \(R_{B}\), covering the whole space, outside the magnetic monopole, and overlapping in \(R_{AB}\) so that
\[\begin{array}{ll}R_{A}:0\leq\theta<\frac{\pi}{2}+\delta,&r>0,\ \ 0\leq\varphi<2\pi,\\ R_{B}:\frac{\pi}{2}-\delta<\theta\leq\pi,&r>0,\ \ 0\leq\varphi<2\pi,\\ R_{AB}:\frac{\pi}{2}-\delta<\theta<\frac{\pi}{2}+\delta,&r>0,\ \ 0\leq\varphi<2\pi,\end{array} \tag{28}\]
where \(0<\delta\leq\pi/2\). Moreover, the vector potential has a non-vanishing component in each region given by
\[A_{\varphi,A}=g\left(1-\cos\theta\right),\ \ A_{\varphi,B}=-g\left(1+\cos \theta\right), \tag{29}\]
where \(g\) is the Wu-Yang monopole strength and \(A_{\varphi,A}\) and \(A_{\varphi,B}\) are correlated by the gauge transformation [9; 14]
\[A_{\varphi,A}=A_{\varphi,B}+\frac{i}{e}S\,\partial_{\varphi}\,S^{-1}\,;\ S=e^{2iq \varphi},\ q=eg. \tag{30}\]
We shall, for the sake of simplicity and economy of notation, use the form \(A_{\varphi}=sg-g\cos\theta\), with \(s=1\) for \(A_{\varphi,A}\) and \(s=-1\) for \(A_{\varphi,B}\).
Under such settings, equation (10) would now read
\[\left\{\left(-\frac{1}{\sqrt{g}}\left(\partial_{i}-ieA_{i}\right)\sqrt{g}g^{ ij}\left(\partial_{j}-ieA_{j}\right)\right)+V\left(\rho,t\right)\right\}\Psi \left(\rho,t\right)=i\frac{\partial}{\partial t}\Psi\left(\rho,t\right), \tag{31}\]
to imply
\[\left\{-\frac{\alpha^{2}}{r^{2}}\,\partial_{r}\left(r^{2}\,\partial_{r}\right)- \frac{1}{r^{2}}\left(\frac{1}{\sin\theta}\partial_{\theta}\sin\theta\;\partial_{\theta}+\frac{1}{\sin^{2}\theta}\left[\partial_{\varphi}-ieA_{\varphi}\right]^{2 }\right)+V\left(\rho\left(r\right),t\right)\right\}\Psi\left(\rho,t\right)=i \frac{\partial}{\partial t}\Psi\left(\rho,t\right), \tag{32}\]
where \(r\) is given by (6). We may now seek separation of variables for (32) and use the substitution \(\Psi\left(\rho,t\right)=e^{-iEt}\;\psi\left(\rho\right)\;Y_{\tilde{q}\ell m} \left(\theta,\varphi\right)\), where \(\tilde{q}=sq\) and \(Y_{\tilde{q}\ell m}\left(\theta,\varphi\right)\) are the Wu-Yang monopole harmonics so that
\[\left(\frac{1}{\sin\theta}\partial_{\theta}\sin\theta\;\partial_{\theta}+\frac{1 }{\sin^{2}\theta}\left[\partial_{\varphi}-ieA_{\varphi}\right]^{2}\right)Y_{ \tilde{q}\ell m}\left(\theta,\varphi\right)=-\lambda Y_{\tilde{q}\ell m} \left(\theta,\varphi\right). \tag{33}\]
Consequently (32) reduces to
\[\left\{-\frac{\alpha^{2}}{r^{2}}\,\partial_{r}\left(r^{2}\,\partial_{r}\right) +\frac{\lambda}{r^{2}}+V\left(\rho\left(r\right)\right)\right\}\psi\left( \rho\right)=E\psi\left(\rho\right). \tag{34}\]
At this point, one should first solve for the eigenvalues \(\lambda\) of (33) using the substitution
\[Y_{\tilde{q}\ell m}\left(\theta,\varphi\right)=\exp\left(i\left(m+\tilde{q} \right)\varphi\right)\Theta_{\tilde{q}\ell m}\left(\theta\right);\;\tilde{q}= sq=seg, \tag{35}\]
to obtain
\[\left(\frac{1}{\sin\theta}\partial_{\theta}\sin\theta\;\partial_{\theta}-\frac{1 }{\sin^{2}\theta}\left[m+q\cos\theta\right]^{2}\right)\Theta_{\tilde{q}\ell m }\left(\theta\right)=-\lambda\Theta_{\tilde{q}\ell m}\left(\theta\right). \tag{36}\]
Notably, this equation does not depend on the value of \(s\) in \(\tilde{q}\) of (35) (i.e., \(\Theta_{\tilde{q}\ell m}\left(\theta\right)=\left[\Theta_{q\ell m}\left( \theta\right)\right]_{A}=\left[\Theta_{q\ell m}\left(\theta\right)\right]_{B} =\Theta_{q\ell m}\left(\theta\right)\) as observed by Wu-Yang [12]) and consequently, with \(x=\cos\theta\), would read
\[\left\{\left(1-x^{2}\right)\,\partial_{x}^{2}-2x\,\partial_{x}-\frac{\left(m+q\,x\right)^{2}}{1-x^{2}}\right\}\Theta_{q\ell m}\left(x\right)=-\lambda\Theta_{q \ell m}\left(x\right). \tag{37}\]
Let us define
\[\Theta_{q\ell m}\left(x\right)=\left(1-x\right)^{\sigma/2}\left(1+x\right)^{ \nu/2}\,P_{q\ell m}\left(x\right), \tag{38}\]
to obtain, with \(\sigma=\left(\left|m\right|+q\right)\) and \(\nu=\left(\left|m\right|-q\right)\) (this choice is motivated by the fact that the space around a monopole is without singularities and so is the wave function around the monopole [12]),
\[\left(x^{2}-1\right)\,P_{q\ell m}^{{}^{\prime\prime}}\left(x\right)+\left[2q+ 2\left(m+1\right)x\right]\,P_{q\ell m}^{{}^{\prime}}\left(x\right)+\left(m^{2} +m-q^{2}-\lambda\right)\,P_{q\ell m}\left(x\right)=0. \tag{39}\]
The exact solution of which admits the form of hypergeometric functions
\[P_{q\ell m}\left(x\right)=C\,_{1}F_{1}\left(\left|m\right|+\frac{1}{2}\pm \frac{1}{2}\sqrt{4q^{2}+4\lambda+1},\left|m\right|+1-q,\frac{1}{2}\left(1+x \right)\right). \tag{40}\]
However, to secure finiteness and square integrability of the quantum mechanical wave functions, we truncate the confluent hypergeometric series into a polynomial of order \(n=0,1,2,\cdots\). In this case, we take
\[-n=\left|m\right|+\frac{1}{2}\pm\frac{1}{2}\sqrt{4q^{2}+4\lambda+1} \Longrightarrow\lambda=\left(n+\left|m\right|\right)\left(n+\left|m\right|+1 \right)-q^{2}\Longrightarrow\lambda=\upsilon\left(\upsilon+1\right)-q^{2}, \tag{41}\]
where \(\upsilon=n+\left|m\right|=\ell=0,1,2,\cdots\) is, without loss of generality, the angular momentum quantum number. That is, when \(q=0\) (i.e., the Wu-Yang monopole strength \(g\) is zero) one should naturally retrieve the eigenvalue of the
regular spherical harmonics as \(\lambda=\ell\left(\ell+1\right)\). Obviously, this result is in exact accord with that reported by Wu and Yang [9; 12] who have named \(Y_{\tilde{q}\ell m}\left(\theta,\varphi\right)\) as the monopole harmonics. At this point, one should observe that
\[Y_{\tilde{q}\ell m}\left(\theta,\varphi\right)=\left\{\begin{array}{ll}e^{i \left(m+q\right)\varphi}\,\left(1-x\right)^{\sigma/2}\left(1+x\right)^{\nu/2} \,P_{q\ell m}\left(x\right);\,\,\,\text{in region}\,\,R_{A}\\ e^{i\left(m-q\right)\varphi}\,\left(1-x\right)^{\sigma/2}\left(1+x\right)^{ \nu/2}\,P_{q\ell m}\left(x\right);\,\,\,\text{in region}\,\,R_{B}\end{array} \right.. \tag{42}\]
We may now rewrite the radial equation (34), with \(V\left(\rho\left(r\right)\right)=\omega^{2}r^{2}\), as
\[\left\{-\frac{1}{r^{2}}\,\partial_{r}\left(r^{2}\,\partial_{r}\right)+\frac{L \left(L+1\right)}{r^{2}}+\tilde{\omega}^{2}r^{2}\right\}\psi\left(\rho\left(r \right)\right)=\mathcal{E}\psi\left(\rho\left(r\right)\right), \tag{43}\]
where \(\mathcal{E}=E/\alpha^{2}\), \(\tilde{\omega}=\omega/\alpha\) and
\[L\left(L+1\right)=\frac{\ell\left(\ell+1\right)-q^{2}}{\alpha^{2}} \Longrightarrow L=-\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{\ell\left(\ell+1 \right)-q^{2}}{\alpha^{2}}}. \tag{44}\]
One should notice that the square root signature is chosen so that for \(\alpha=1\) and \(q=0\) one would retrieve \(L=\ell\) the regular angular momentum quantum number. Moreover, the solution to (43) with \(\psi\left(\rho\right)=R\left(\rho\right)/\rho\) would read
\[R\left(r\right)=R\left(r\left(\rho\right)\right)\sim r^{L+1}\exp\left(-\frac{ \tilde{\omega}r^{2}}{2}\right)\,_{1}F_{1}\left(\frac{L}{2}+\frac{3}{4}-\frac{ \mathcal{E}}{4\tilde{\omega}},L+\frac{3}{2},\tilde{\omega}r^{2}\right);\,\,r =\sqrt{q\left(\rho\right)}\rho. \tag{45}\]
However, finiteness and square integrability would again enforce the condition that the confluent hypergeometric series is truncated into a polynomial of order \(n_{r}=0,1,2,\cdots\) so that \(\frac{L}{2}+\frac{3}{4}-\frac{\mathcal{E}}{4\tilde{\omega}}=-n_{r}\) to imply
\[\mathcal{E}=2\tilde{\omega}\left(2n_{r}+L+\frac{3}{2}\right)\Rightarrow E_ {n_{r},\ell,q}=2\alpha\omega\left(2n_{r}+\sqrt{\frac{1}{4}+\frac{\ell\left( \ell+1\right)-q^{2}}{\alpha^{2}}}+1\right). \tag{46}\]
The Schrodinger oscillators described in (43) are isospectral and invariant with the corresponding PDM Schrodinger oscillators
\[\left\{-\frac{1}{q\left(\rho\right)\,\sqrt{f\left(\rho\right)}\,\rho^{2} }\,\partial_{\rho}\left(\frac{q\left(\rho\right)\,\rho^{2}}{\sqrt{f\left(\rho \right)}}\,\partial_{\rho}\right)+\frac{L\left(L+1\right)}{q\left(\rho\right) \,\rho^{2}}+\tilde{\omega}^{2}q\left(\rho\right)\,\rho^{2}\right\}\psi\left( \rho\right)=\mathcal{E}\psi\left(\rho\right), \tag{47}\]
At this point, we may report that the energy levels of (46) are plotted in Figures 1 and 2.
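A minimal numerical sketch of Eq. (46) is given below (Python; the parameter values mirror those used in the figures and are otherwise arbitrary). For \(q=0\) it reduces to the spectrum (24), and for \(\alpha=1\), \(q=0\) to the flat-space oscillator.

```python
import math

def energy(n_r, ell, alpha, omega=1.0, q=0.0):
    """Energy levels of Eq. (46) for the (PDM) Schrodinger oscillator in a PGM background
    with a Wu-Yang monopole of strength q = e*g (hbar = 2m = 1 units)."""
    radicand = 0.25 + (ell * (ell + 1) - q ** 2) / alpha ** 2
    if radicand < 0:
        raise ValueError("q is too large: the square root in Eq. (46) must remain real")
    return 2.0 * alpha * omega * (2 * n_r + math.sqrt(radicand) + 1.0)

# Illustrative values in the spirit of Fig. 1: n_r = 1, omega = 1, alpha = 0.5
for ell in range(4):
    print(ell,
          round(energy(n_r=1, ell=ell, alpha=0.5), 4),                 # q = 0 (no Wu-Yang monopole)
          round(energy(n_r=1, ell=ell, alpha=0.5, q=0.5 / 4), 4))      # q = alpha / 4
```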
In Figures 1(a), 1(b), and 1(c), we show the energy levels of (46) against the global monopole parameter \(\alpha\) for different Wu-Yang magnetic monopole parameter values \(q=0\), \(q=\alpha/4\), and \(q=\alpha/16\), respectively. For \(q=0\) (i.e.,
Figure 1: The energy levels, Eq. (46), of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole for \(n_{r}=1\), \(\ell=0,1,2,3\), \(\omega=1\), and (a) \(q=0\) (i.e., no Wu-Yang magnetic monopole), (b) \(q=\alpha/4\), and (c) \(q=\alpha/16\).
no Wu-Yang monopole), we observe in Figure 1(a) that while the energy levels linearly increase with increasing \(\alpha\), the spacing between the energy levels (for the same \(n_{r}\) and \(\ell=0,1,2,3\), where \(n_{r}=1\) is used throughout) remains constant at each \(\alpha\) value. This is a common characteristic for the Schrodinger oscillator in a flat Minkowski spacetime (i.e., \(\alpha=1\)). However, in 1(b) and 1(c) (for \(q=\alpha/4\) and \(q=\alpha/16\), respectively), we notice that the equal spacing between energy levels is no longer valid. The maximum values for \(q\) used are chosen so that \(\alpha_{\rm max}=1\). In Figures 2(a), 2(b), and 2(c), we show the energy levels at \(\alpha=0.5\), \(\alpha=0.9\), and \(\alpha=1\), respectively, for different Wu-Yang magnetic monopole strengths \(q=eg\), where the maximum values for \(q\) are now chosen so that the square root in (46) remains a real-valued one. We observe that the Wu-Yang monopole yields non-equally spaced energy levels. Moreover, it is clear that the energies are shifted up as the PGM parameter \(\alpha\) increases for each value of the Wu-Yang monopole parameter \(q\) (including \(q=0\) for no Wu-Yang monopole).
## V Thermodynamical properties of the PDM Schrodinger oscillators in a PGM background and a Wu-Yang magnetic monopole
In this section we shall study the thermodynamical properties of PDM Schrodinger oscillators in a global monopole spacetime background without and with a Wu-Yang magnetic monopole. In a straightforward manner one obtains the partition function
\[Z\left(\beta\right)=\sum_{n_{r}=0}^{\infty}\exp\left(-\beta\,E_{n_{r},\ell,q} \right)=\frac{\exp\left(-2\alpha\beta\omega\tau\right)}{1-\exp\left(-4\alpha \beta\omega\right)};\ \beta=\frac{1}{K_{B}T}, \tag{48}\]
where \(K_{B}\) is the Boltzmann constant, \(T\) is the temperature and
\[\tau=1+\frac{1}{2\alpha}\sqrt{\alpha^{2}+4\ell\left(\ell+1\right)-4q^{2}}. \tag{49}\]
At this point, one should notice that \(q=eg=0\) represents PDM Schrodinger oscillators in a global monopole spacetime background without the Wu-Yang magnetic monopole. Moreover, the global monopole parameter \(\alpha\) and the Wu-Yang monopole strength (through \(q=eg\)) are correlated in such a way that the value under the square root
Figure 2: The energy levels, Eq. (46), of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole for \(n_{r}=1\), \(\ell=0,1,2,3\), \(\omega=1\), and for different values of Wu-Yang magnetic monopole parameter \(q=eg\) at (a) \(\alpha=0.5\), (b) \(\alpha=0.9\), and (c) \(\alpha=1\) (i.e., flat Minkowski spacetime).
remains real. In this case, \(0\leq q\leq\sqrt{\ell\left(\ell+1\right)+\alpha^{2}/4}\), and consequently \(q_{\max}=\ell+1/2\), where \(\alpha_{\max}=1\) corresponds to flat Minkowski spacetime.
To observe the effects of the global monopole spacetime background and the Wu-Yang magnetic monopole on some thermodynamical properties associated with such systems, we find that the Helmholtz free energy \(f\left(T\right)\) is given by
\[f\left(T\right)=-\frac{1}{\beta}\ln\left(Z\left(\beta\right)\right)=2\alpha \omega\tau+K_{B}T\,\ln\left(1-\exp\left(-\frac{4\alpha\omega}{K_{B}T}\right) \right), \tag{50}\]
the Entropy \(S\left(T\right)\)
\[S\left(T\right)=-\frac{df\left(T\right)}{dT}=-K_{B}\,\ln\left(1-\exp\left(- \frac{4\alpha\omega}{K_{B}T}\right)\right)+\frac{4\alpha\omega}{T}\left[\frac {\exp\left(-\frac{4\alpha\omega}{K_{B}T}\right)}{1-\exp\left(-\frac{4\alpha \omega}{K_{B}T}\right)}\right], \tag{51}\]
the Specific heat \(c\left(T\right)\)
\[c\left(T\right)=T\,\frac{dS\left(T\right)}{dT}=\frac{16\alpha^{2}\omega^{2}} {T^{2}K_{B}}\left[\frac{\exp\left(-\frac{2\alpha\omega}{K_{B}T}\right)}{1-\exp \left(-\frac{4\alpha\omega}{K_{B}T}\right)}\right]^{2}, \tag{52}\]
and Mean energy \(U\left(T\right)\)
\[U\left(T\right)=-\frac{d\ln Z\left(\beta\right)}{d\beta}=2\alpha\omega\tau-\frac{ 4\alpha\omega}{1-\exp\left(\frac{4\alpha\omega}{K_{B}T}\right)}. \tag{53}\]
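The thermodynamical functions (48)-(53) are straightforward to evaluate numerically; a minimal sketch is given below (Python with NumPy, in units where \(K_{B}=1\) and \(\omega=1\); the temperature grid is arbitrary).

```python
import numpy as np

def tau(ell, alpha, q=0.0):
    """Eq. (49)."""
    return 1.0 + np.sqrt(alpha**2 + 4*ell*(ell + 1) - 4*q**2) / (2*alpha)

def helmholtz(T, ell, alpha, omega=1.0, q=0.0, kB=1.0):
    """Helmholtz free energy, Eq. (50)."""
    return 2*alpha*omega*tau(ell, alpha, q) + kB*T*np.log(1 - np.exp(-4*alpha*omega/(kB*T)))

def entropy(T, alpha, omega=1.0, kB=1.0):
    """Entropy, Eq. (51); independent of the Wu-Yang strength q."""
    x = 4*alpha*omega/(kB*T)
    return -kB*np.log(1 - np.exp(-x)) + (4*alpha*omega/T)*np.exp(-x)/(1 - np.exp(-x))

def specific_heat(T, alpha, omega=1.0, kB=1.0):
    """Specific heat, Eq. (52); tends to kB at high temperature, as in Fig. 5(b)."""
    x = 4*alpha*omega/(kB*T)
    return (16*alpha**2*omega**2/(kB*T**2)) * (np.exp(-x/2)/(1 - np.exp(-x)))**2

def mean_energy(T, ell, alpha, omega=1.0, q=0.0, kB=1.0):
    """Mean energy, Eq. (53)."""
    return 2*alpha*omega*tau(ell, alpha, q) - 4*alpha*omega/(1 - np.exp(4*alpha*omega/(kB*T)))

T = np.array([0.5, 1.0, 5.0, 50.0])
print(specific_heat(T, alpha=0.9))               # approaches kB = 1 as T grows
print(mean_energy(T, ell=1, alpha=0.9, q=1.0))   # approaches the ground-state energy as T -> 0
```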
We observe that while the Helmholtz free energy \(f\left(T\right)\) in (50) and the Mean energy \(U\left(T\right)\) in (53) are affected by the Wu-Yang magnetic monopole through the parameter \(\tau\) in (49), the Entropy \(S\left(T\right)\) in (51) and the Specific heat \(c\left(T\right)\) in (52) are not. However, all mentioned thermodynamical properties are affected by the global monopole through the parameter \(\alpha\).
In Figures 3(a), 3(b), and 3(c), we show (for \(\ell=1\) states) the effect of the Wu-Yang [12] magnetic monopole on the Helmholtz free energies \(f(T)\), Eq.(50), of the Schrodinger-oscillator in a point-like global monopole for \(q=0\), \(q=1\), and \(q=1.4\), respectively. It is obvious that as \(q=eg\) increases the Helmholtz free energy converges more
Figure 3: The Helmholtz free energies \(f\left(T\right)\), (50), against \(K_{B}T\) of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole for \(\ell=1\), \(\omega=1\), and \(\alpha=0.1,0.3,0.6,0.9\) at (a) \(q=0\), (b) \(q=1\), and (c) \(q=1.4\).
rapidly to the zero value as the temperature \(T\) grows up from just above zero. In Figures 4(a), 4(b), and 4(c), we show (for \(\ell=1\) states) the effect of the Wu-Yang monopole on the mean energy \(U\left(T\right)\), Eq. (53), for \(q=0\), \(q=1\), and \(q=1.4\), respectively. We observe that as the Wu-Yang monopole strength increases (through \(q=eg\)) the mean energy decreases for each value of \(T\). We also notice that the mean energy \(U\left(T\right)\), for all allowed \(\alpha\) values used, tends to cluster at very high temperatures for \(q=0\) (i.e., no Wu-Yang monopole). However, it is clear that as \(q\) increases from zero, such clustering is slowed down. In Figure 5(a), we show (for \(\ell=1\) states) the entropy \(S\left(T\right)\), Eq. (51), as the temperature grows up from just above zero for the Schrodinger-oscillator in a point-like global monopole. Figure 5(b) shows the specific heat \(c\left(T\right)\), Eq. (52), against the temperature for the Schrodinger-oscillator in a point-like global monopole. It is obvious that the ratio \(c\left(T\right)/K_{B}\to 1\Rightarrow c\left(T\right)\to K_{B}\) as \(T\gg 1\) for all allowed values of the point-like global monopole parameter \(\alpha\). Notably, the Wu-Yang magnetic monopole has no effect on the entropy \(S\left(T\right)\) or the specific heat \(c\left(T\right)\) as the results in (51) and (52), respectively, suggest. The same thermodynamical properties hold true for the PDM Schrodinger-oscillators in a PGM spacetime and a Wu-Yang magnetic monopole.
Figure 4: The mean energies \(U\left(T\right)\), Eq. (53), against \(K_{B}T\) of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole for \(\ell=1\), \(\omega=1\), and \(\alpha=0.1,0.3,0.6,0.9\) at (a) \(q=0\), (b) \(q=1\), and (c) \(q=1.4\).
Figure 5: For \(\ell=1\), \(\omega=1\), \(\alpha=0.1,0.2,0.6,0.9\) at all values of \(q\) (i.e., the Wu-Yang magnetic monopole has no effect on the Entropy) we show (a) the ratio \(S\left(T\right)/K_{B}\), where \(S\left(T\right)\) is the Entropy, against \(K_{B}T\) of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole, and (b) The ratio \(c\left(T\right)/K_{B}\), where \(c\left(T\right)\) is the Specific heat, against \(K_{B}T\) of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole.
## VI PDM Schrodinger oscillators in a PGM background and a Wu-Yang magnetic monopole subjected to a hard-wall potential
In this section, we consider that the system of PDM Schrodinger oscillators in a PGM background and a Wu-Yang magnetic monopole is now subjected to an impenetrable hard-wall potential at some radial distance \(r_{\circ}=\sqrt{q\left(\rho_{\circ}\right)}\rho_{\circ}\). This would in turn restrict the motion of the PDM Schrodinger oscillators mentioned above to be confined within a spherical box of radius \(r_{\circ}\) with an impenetrable hard-wall. This would suggest that the confluent hypergeometric polynomials \(\,{}_{1}F_{1}\left(\frac{L}{2}+\frac{3}{4}-\frac{\mathcal{E}}{4\tilde{\omega}},L +\frac{3}{2},\tilde{\omega}r^{2}\right)\) in (45) vanish at \(r=r_{\circ}\) to consequently yield that \(R\left(r_{\circ}\right)=0\). One would then appeal to subsection 13.5 on the asymptotic expansions and limiting forms of Abramowitz and Stegun [57] and recollect formula (13.5.14)
\[\lim_{a\rightarrow-\infty}\,{}_{1}F_{1}\left(a,b,x\right)=\Gamma \left(b\right)\,e^{x/2}\,\pi^{-1/2}\left(\frac{bx}{2}-ax\right)^{1/4-b/2}\, \cos\left(\sqrt{\left(2b-4a\right)x}-\frac{b}{2}\pi+\frac{\pi}{4}\right) \left[1+O\left(|\frac{b}{2}-a|^{-1/2}\right)\right], \tag{54}\]
for real \(x\) and bounded \(b\). This formula immediately suggests that \(a=\frac{L}{2}+\frac{3}{4}-\frac{\mathcal{E}}{4\bar{\omega}}\), \(b=L+\frac{3}{2}\), and \(x=\bar{\omega}r^{2}\Rightarrow x_{\circ}=\bar{\omega}r_{\circ}^{2}\). Consequently, only at very high energies of the PDM Schrödinger oscillators and/or very small values of the PGM parameter \(\alpha\) (i.e., \(\mathcal{E}=E/\alpha^{2}\rightarrow\infty\)) does one obtain a vanishing radial function at some \(r=r_{\circ}\), i.e., \(R\left(r_{\circ}\right)=0\). Under such conditions,
\[\cos\left(\sqrt{\left(2b-4a\right)x_{\circ}}-\frac{b}{2}\pi+ \frac{\pi}{4}\right)=0\Rightarrow\sqrt{\left(2b-4a\right)x_{\circ}}-\frac{b}{2}\pi+ \frac{\pi}{4}=\left(n_{r}+\frac{1}{2}\right)\pi \tag{55}\]
one would, in a straightforward manner, obtain
\[E_{n_{r},\ell,q}=\frac{\pi^{2}\alpha^{2}}{4r_{\circ}^{2}}\left[2n_{r}+\sqrt{ \frac{1}{4}+\frac{\ell\left(\ell+1\right)-q^{2}}{\alpha^{2}}}+\frac{3}{2} \right]^{2} \tag{56}\]
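Both steps above are easy to check numerically. The sketch below (our own illustration, not the authors' code) first compares SciPy's \({}_{1}F_{1}\) against the limiting form (54) for arbitrary sample values of \(a\), \(b\), and \(x\), and then evaluates the hard-wall levels of Eq. (56) in the style of Figure 6.

```python
import numpy as np
from scipy.special import hyp1f1, gamma

def asymptotic_1f1(a, b, x):
    """Leading-order form (54) of 1F1(a, b, x) for a -> -infinity, real x, bounded b."""
    return (gamma(b) * np.exp(x / 2) / np.sqrt(np.pi)
            * ((b / 2 - a) * x) ** (0.25 - b / 2)
            * np.cos(np.sqrt((2 * b - 4 * a) * x) - b * np.pi / 2 + np.pi / 4))

b, x = 2.5, 1.3                     # arbitrary illustrative values (assumption)
for a in (-20.0, -80.0, -320.0):    # agreement improves as |a| grows
    print(f"a={a:7.1f}  exact={hyp1f1(a, b, x):12.5e}  limiting form={asymptotic_1f1(a, b, x):12.5e}")

def hard_wall_energy(n_r, ell, q, alpha, r0=1.0):
    """Hard-wall energy levels, Eq. (56)."""
    lam = np.sqrt(0.25 + (ell * (ell + 1) - q**2) / alpha**2)
    return (np.pi**2 * alpha**2 / (4 * r0**2)) * (2 * n_r + lam + 1.5) ** 2

alpha = 0.5
for ell in (0, 1, 2, 3):
    E0 = hard_wall_energy(1, ell, q=0.0, alpha=alpha)
    Eq = hard_wall_energy(1, ell, q=alpha / 4, alpha=alpha)
    print(f"ell={ell}:  E(q=0)={E0:8.3f}   E(q=alpha/4)={Eq:8.3f}")
```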
Comparing this result with that of (46), we observe that the hard-wall spherical box has indeed changed the corresponding energies of the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole. In Figures 6(a), 6(b), and 6(c), we show the hard-wall (at \(r=r_{\circ}=1\)) effects on the energy levels, Eq. (56), for \(n_{r}=1\) and \(\ell=0,1,2,3\). In Figures 6(a) and 6(b) the energy levels are plotted against the PGM parameter \(\alpha\) at \(q=0\) (i.e., no
Figure 6: We show the hard-wall (at \(r=r_{\circ}=1\)) effects on the energy levels, Eq. (56), for \(n_{r}=1\) and \(\ell=0,1,2,3\). In (a) and (b) the energy levels are plotted against the PGM parameter \(\alpha\) at \(q=0\) (i.e., no Wu-Yang monopole) and \(q=\alpha/4\), respectively. In (c) the energy levels are plotted against the Wu-Yang monopole parameter \(q\) at \(\alpha=0.5\).
Wu-Yang monopole) and \(q=\alpha/4\), respectively. In Figure 6(c) the energy levels are plotted against the Wu-Yang monopole parameter \(q\) at \(\alpha=0.5\).
To assess the hard-wall effects on the PDM Schrödinger oscillators in a PGM background and a Wu-Yang magnetic monopole, we compare Figures 1(a) and 6(a). We observe that the equidistance between the energy levels in 1(a) is no longer valid in 6(a). The separation between the energy levels in 6(a) increases quadratically with the PGM parameter \(\alpha\) as it increases from just above zero. Notably, drastic shift-ups in the energy levels are obvious as \(\alpha\) grows. The same trend of the hard-wall effect is also observed through the comparison between Figures 1(b) and 6(b). This is expected from the \(\alpha^{2}\) dependence of \(E_{n_{r},\ell,q}\) in (56). However, the comparison between Figures 2(a) and 6(c), at a fixed \(\alpha=0.5\), again suggests drastic shift-ups in the energy levels, but, in this case, each energy level very slowly decreases to a minimum value of
\[E_{\min}=\frac{\pi^{2}\alpha^{2}}{4r_{\circ}^{2}}\left(2n_{r}+3/2\right)^{2} \tag{57}\]
at \(q=\alpha/2\) (but never converges to the zero value) as \(q\) increases up to its allowed maximum value (mandated by \(\alpha_{\max}=1\) for each \(\ell\) value).
## VII Concluding remarks
In this study, we have shown that a specific transformation/deformation (6) of a PGM spacetime (3) effectively yields a von Roos [39] PDM Schrödinger equation (16). Within such a deformed/transformed PGM spacetime recipe, we have shown that all our PDM Schrödinger oscillators admit isospectrality and invariance with the constant mass Schrödinger oscillators in the regular PGM spacetime and in the presence of a Wu-Yang magnetic monopole. Consequently, the exclusive dependence of the thermodynamical partition function on the energy eigenvalues manifestly suggests that the Schrödinger oscillators and the PDM Schrödinger oscillators share the same thermodynamical properties. Moreover, we have discussed the hard-wall effects on the energy levels of PDM Schrödinger oscillators in a global monopole spacetime without and with a Wu-Yang magnetic monopole. Drastic shift-ups of the energy levels are observed as a consequence of such a hard wall.
In connection with the energy levels, for both constant mass and PDM Schrödinger oscillators in a PGM spacetime and a Wu-Yang magnetic monopole, the following observations are in order. The common characterization of equal spacing between the energy levels at \(\alpha=1\) (the flat Minkowski spacetime limit) is only observed for \(q=0\) (no Wu-Yang monopole effect) for all allowed PGM parameter \(\alpha\) values (i.e., \(0<\alpha\leq 1\)). However, for the feasible correlations \(q=\alpha/4\) and \(q=\alpha/16\) (just two testing toy models), we notice that such equal spacing between energy levels is no longer valid (documented in Figures 1(a), 1(b), and 1(c)). We have also observed that the Wu-Yang monopole yields non-equal spacing between the energy levels (documented in Figures 2(a), 2(b), and 2(c)). Hereby, the energy levels are observed to be shifted up as the PGM parameter \(\alpha\) increases for each value of the Wu-Yang monopole parameter \(q\) (including \(q=0\) for no Wu-Yang monopole). On the other hand, the hard-wall effect is clearly observed through the comparisons between Figures 1(a) and 6(a), and 1(b) and 6(b). Such comparisons suggest that the equidistance between the energy levels is no longer valid and that the separation between the energy levels increases quadratically with the PGM parameter \(\alpha\) (as it increases from just above zero). Notably, such drastic shift-ups are expected from the \(\alpha^{2}\)-dependence of \(E_{n_{r},\ell,q}\) in (56). Nevertheless, the comparison between Figures 2(a) and 6(c), for a fixed \(\alpha=0.5\), again
suggests drastic shift-ups in the energy levels. Moreover, each energy level slowly converges to the minimum value in (57) at \(q=\alpha/2\) (but never converges to the zero value) as \(q\) increases up to its allowed maximum value (mandated by \(\alpha_{\max}=1\) for each \(\ell\) value).
On the thermodynamical properties side, we notice that the Helmholtz free energies \(f(T)\), Eq. (50), and the mean energy \(U\left(T\right)\), Eq. (53), are directly affected by the Wu-Yang [12] magnetic monopole, whereas the entropy \(S\left(T\right)\), Eq. (51), and the specific heat \(c\left(T\right)\), Eq. (52), are not. We have observed that the Helmholtz free energies \(f(T)\) converge more rapidly to zero with increasing temperature as the Wu-Yang monopole parameter increases (documented in Figures 3(a), 3(b), and 3(c)). The mean energy \(U\left(T\right)\) decreases for each value of \(T\) as the Wu-Yang monopole strength increases through \(q=eg\). Yet, we have noticed that the mean energy \(U\left(T\right)\), for all allowed \(\alpha\) values used, tends to cluster at very high temperatures for \(q=0\) (i.e., no Wu-Yang monopole), and that as \(q\) increases from zero, such clustering is slowed down (documented in Figures 4(a), 4(b), and 4(c)). On the other hand, the entropy \(S\left(T\right)\) increases with increasing temperature (Figure 5(a)), whereas the specific heat \(c\left(T\right)\) increases with increasing temperature up to a maximum value, mandated by the asymptotic behaviour of Eq. (52), so that the ratio \(c\left(T\right)/K_{B}\to 1\) and consequently \(c\left(T\right)\to K_{B}\) as \(T\rightarrow\infty\), for all allowed values of the point-like global monopole parameter \(\alpha\).
Finally, the energy levels as well as the thermodynamical properties reported in the current methodical proposal hold true for both constant mass and PDM Schrödinger oscillators in a point-like global monopole spacetime and a Wu-Yang magnetic monopole. This is guaranteed by the isospectrality and invariance of the two models considered (i.e., constant mass and PDM Schrödinger oscillators) in the current study.
|
2308.08739 | Enhancing Phrase Representation by Information Bottleneck Guided Text Diffusion Process for Keyphrase Extraction | Keyphrase extraction (KPE) is an important task in Natural Language Processing for many scenarios, which aims to extract keyphrases that are present in a given document. Many existing supervised methods treat KPE as sequential labeling, span-level classification, or generative tasks. However, these methods lack the ability to utilize keyphrase information, which may result in biased results. In this study, we propose Diff-KPE, which leverages the supervised Variational Information Bottleneck (VIB) to guide the text diffusion process for generating enhanced keyphrase representations. Diff-KPE first generates the desired keyphrase embeddings conditioned on the entire document and then injects the generated keyphrase embeddings into each phrase representation. A ranking network and VIB are then optimized together with rank loss and classification loss, respectively. This design of Diff-KPE allows us to rank each candidate phrase by utilizing both the information of keyphrases and the document. Experiments show that Diff-KPE outperforms existing KPE methods on a large open domain keyphrase extraction benchmark, OpenKP, and a scientific domain dataset, KP20K. | Yuanzhen Luo, Qingyu Zhou, Feng Zhou | 2023-08-17T02:26:30Z | http://arxiv.org/abs/2308.08739v2 | # Enhancing Phrase Representation by Information Bottleneck Guided Text Diffusion Process for Keyphrase Extraction
###### Abstract
Keyphrase extraction (KPE) is an important task in Natural Language Processing for many scenarios, which aims to extract keyphrases that are present in a given document. Many existing supervised methods treat KPE as sequential labeling, span-level classification, or generative tasks. However, these methods lack the ability to utilize keyphrase information, which may result in biased results. In this study, we propose Diff-KPE, which leverages the supervised Variational Information Bottleneck (VIB) to guide the text diffusion process for generating enhanced keyphrase representations. Diff-KPE first generates the desired keyphrase embeddings conditioned on the entire document and then injects the generated keyphrase embeddings into each phrase representation. A ranking network and VIB are then optimized together with rank loss and classification loss, respectively. This design of Diff-KPE allows us to rank each candidate phrase by utilizing both the information of keyphrases and the document. Experiments show that Diff-KPE outperforms existing KPE methods on a large open domain keyphrase extraction benchmark, OpenKP, and a scientific domain dataset, KP20K.
## Introduction
Keyphrase extraction (KPE) aims to extract several _present_ keyphrases from a document that can highly summarize the given document. Many neural network based methods formulate KPE as a token-level sequence labeling problem by predicting a single label for each token [16, 17, 14]. To use phrase-level semantic information, some methods [15, 16] model KPE as a phrase classification task by assigning labels to each text span. Different from predicting a label for each span, another class of models directly learns to rank each phrase [15, 16, 17, 18]. However, these methods suffer from the following issues. Firstly, most of them extract keyphrases by relying solely on information from local phrases, which may lead to a mismatch between the concepts of the extracted keyphrases and the document [14]. Secondly, none of them can take the previously extracted keyphrases into account while extracting the next one, which can often lead to diminished diversity in the output and biased results.
In contrast to the traditional approaches mentioned above, generative models have the advantage of attending to the entire input document and generating subsequent keyphrases based on the previous ones during the decoding process [16, 15, 16]. However, these methods suffer from inefficient decoding due to their autoregressive manner. Additionally, the mismatch between training and evaluation data introduces exposure bias, which adversely affects the model's performance [14]. Therefore, the efficient utilization of information from desired keyphrases during the extraction process remains an area that requires further investigation.
We shifted our perspective to another powerful generative model for KPE, namely the diffusion model. Recently, diffusion models have been applied in both token-level generation tasks [14, 15] and sentence-level extraction tasks [16]. As a class of deep latent generative models, the diffusion model perturbs the data through a forward diffusion process and then reconstructs the data by learning a reverse diffusion process [17]. During inference, the diffusion model can recover the desired data from randomly sampled Gaussian noise.
Inspired by this, we propose Diff-KPE, a novel diffusion-based keyphrase extraction model. To leverage keyphrase information while extracting, we first use the diffusion model to recover a list of reference keyphrase embeddings conditioned on the whole document embedding; the recovered keyphrase embeddings are then injected into each phrase representation obtained by Convolutional Neural Networks (CNNs). To extract candidate keyphrases, we apply a ranking network to rank each phrase representation. By doing this, we can extract the desired top-\(k\) keyphrases from the ranked list of phrases. In addition, we introduce a supervised Variational Information Bottleneck (VIB) to optimize a classification loss for each phrase. Supervised VIB aims to preserve the information about the target classes in the latent space while filtering out irrelevant information from the input phrase representation. Multitask learning with the supervised VIB can guide the model to generate informative phrase representations, thereby improving the performance of the ranking network. Overall, Diff-KPE incorporates these modules and trains them simultaneously.
where \(1\leq k\leq N\) and \(N\) represents the pre-defined maximum length of phrase. The \(i\)th k-gram phrase representation \(\mathbf{s}_{i}^{k}\) is calculated by its corresponding CNN\({}^{k}\).
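As a rough sketch (our own illustration, not necessarily the authors' exact architecture given above), each \(\mathrm{CNN}^{k}\) can be realized as a 1-D convolution with kernel size \(k\) over the contextual word embeddings, producing one representation per k-gram:

```python
import torch
import torch.nn as nn

class KGramCNN(nn.Module):
    """Illustrative sketch: one Conv1d per k-gram length (k = 1..N)."""
    def __init__(self, hidden_size=768, n_max=5):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden_size, hidden_size, kernel_size=k) for k in range(1, n_max + 1)]
        )

    def forward(self, word_embeddings):
        # word_embeddings: (batch, seq_len, hidden)
        x = word_embeddings.transpose(1, 2)          # (batch, hidden, seq_len)
        # phrases[k-1][:, i, :] plays the role of the k-gram representation s_i^k
        return [conv(x).transpose(1, 2) for conv in self.convs]

phrases = KGramCNN()(torch.randn(2, 128, 768))
print([p.shape for p in phrases])  # [(2, 128, 768), (2, 127, 768), ...]
```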
### Keyphrase Embeddings Generation
In order to inject reference keyphrases information into each phrase representation, we use a continuous diffusion module to generate desired keyphrase embeddings.
**Input Encoding.** To allow the diffusion module to generate the desired keyphrase embeddings conditioned on the whole document, we first use another BERT model to obtain initial document and keyphrase embeddings. Denoting the \(m\) keyphrase embeddings and the document embedding by \(\mathbf{E}^{kp}=\{\mathbf{e}_{i}^{kp}\}_{i}^{m}\) and \(\mathbf{e}^{D}\), the input encoding of the diffusion module is formatted as:
\[\begin{split}\mathbf{H}^{\mathbf{in}}&=\mathbf{h}^{D}||\mathbf{H}^{kp}\\ &=\mathbf{TransformerEncoder}(\mathbf{e}^{D}||\mathbf{E}^{kp})\end{split} \tag{2}\]
where \(\mathbf{H}^{kp}=\{\mathbf{h}_{i}^{kp}\}_{i}^{m}\) and \(\mathbf{h}^{D}\) are the latent embeddings of the \(m\) keyphrases and the document, respectively, \(\mathbf{TransformerEncoder}\) is a stacked Transformer encoder which embeds the input vectors into the latent space, \(\mathbf{e}^{D}\) is the document embedding, i.e., the [CLS] token embedding of the BERT model, and \(||\) indicates the concatenation operation. Such input encoding enables our continuous diffusion module to generate the desired keyphrase embeddings conditioned on the current document embedding \(\mathbf{e}^{D}\).
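Concretely, the input encoding of Eq. (2) can be sketched as follows (our own illustration; the embedding dimension, number of layers, and number of heads are placeholder values, not the authors' exact settings):

```python
import torch
import torch.nn as nn

class InputEncoder(nn.Module):
    """Embeds [e^D ; e^kp_1 ... e^kp_m] into the latent space H^in (cf. Eq. (2))."""
    def __init__(self, dim=768, n_layers=6, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, e_doc, e_kp):
        # e_doc: (batch, 1, dim) document ([CLS]) embedding; e_kp: (batch, m, dim)
        h = self.encoder(torch.cat([e_doc, e_kp], dim=1))   # H^in = h^D || H^kp
        return h[:, :1, :], h[:, 1:, :]                     # (h^D, H^kp)

h_doc, h_kp = InputEncoder()(torch.randn(4, 1, 768), torch.randn(4, 5, 768))
print(h_doc.shape, h_kp.shape)  # torch.Size([4, 1, 768]) torch.Size([4, 5, 768])
```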
**Diffusion Generation Process.** Once the input encoding \(\mathbf{H}^{\mathbf{in}}\) is obtained, the diffusion model aims to perturb \(\mathbf{H}^{\mathbf{in}}\) gradually and then recover the original \(\mathbf{H}^{\mathbf{in}}\) by learning a reverse process. To achieve this, a one-step Markov transition \(q(\mathbf{x}_{0}|\mathbf{H}^{\mathbf{in}})\) is performed to obtain the initial state \(\mathbf{x}_{0}\):
\[\begin{split}\mathbf{x}_{0}&=\mathbf{x}_{0}^{D}|| \mathbf{x}_{0}^{kp}\\ &\sim\mathcal{N}(\mathbf{H}^{\mathbf{in}},\beta_{0}\mathbf{I}) \end{split} \tag{3}\]
where \(\beta_{t}\in(0,1)\) adjusts the scale of the variance, and \(\mathbf{x}_{0}^{D}\sim\mathcal{N}(\mathbf{h}^{D},\beta_{0}\mathbf{I})\) and \(\mathbf{x}_{0}^{kp}\sim\mathcal{N}(\mathbf{H}^{kp},\beta_{0}\mathbf{I})\) are the latent document embedding and keyphrase embeddings, respectively. We then start the forward process by gradually adding Gaussian noise to the latent keyphrase embeddings. Following the previous work [23], we keep the latent document embedding \(\mathbf{x}_{0}^{D}\) unchanged, so that the diffusion module can generate keyphrase embeddings conditioned on the source document. Formally, at step \(t\) of the forward process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\), the noised latent embedding \(\mathbf{x}_{t}\) is:
\[\mathbf{x}_{t}=\mathbf{x}_{0}^{D}||\mathcal{N}(\mathbf{x}_{t}^{kp};\sqrt{1- \beta_{t}}\mathbf{x}_{t-1}^{kp},\beta_{t}\mathbf{I}) \tag{4}\]
where \(t\in\{1,2,...,T\}\) for a total of \(T\) diffusion steps. For more details about the diffusion generation process, please refer to [19].
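To make the partial forward noising of Eqs. (3)-(4) concrete, the following sketch (our own simplified illustration) corrupts only the keyphrase part of the latent sequence while keeping the document latent fixed. It uses the standard closed-form marginal \(q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\,\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I})\) with \(\bar{\alpha}_{t}=\prod_{s\leq t}(1-\beta_{s})\); the linear noise schedule is an assumption.

```python
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)          # noise schedule (assumption)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0_doc, x0_kp, t):
    """Corrupt the keyphrase latents at step t (cf. Eq. (4)) while keeping
    the document latent unchanged, using the closed-form marginal q(x_t | x_0)."""
    a = alpha_bars[t]
    noise = torch.randn_like(x0_kp)
    xt_kp = torch.sqrt(a) * x0_kp + torch.sqrt(1.0 - a) * noise
    return torch.cat([x0_doc, xt_kp], dim=1)   # x_t = x_0^D || x_t^kp

x0_doc = torch.randn(8, 1, 128)   # latent document embedding
x0_kp = torch.randn(8, 5, 128)    # m = 5 latent keyphrase embeddings
x_t = forward_noise(x0_doc, x0_kp, t=torch.randint(0, T, (1,)).item())
print(x_t.shape)                  # torch.Size([8, 6, 128])
```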
After adding the noise gradually up to a specific time step \(t\) (usually chosen at random from \([1,T]\)), the backward process is performed to recover the keyphrase embeddings from \(\mathbf{x}_{t}^{kp}\) by removing the noise. We use another stacked Transformer encoder model \(f_{\theta}\) to conduct this backward process
Figure 1: Diff-KPE is jointly trained with a continuous diffusion module, a variational information bottleneck, and a rank network. The black dashed box is the diffusion module, the blue dashed box is the VIB module and the purple dashed box is the rank network.
to recover the original input encoding \(\mathbf{H}^{kp}\):
\[\mathbf{\tilde{H}^{kp}}=f_{\theta}(\mathbf{x}_{t}^{kp},t) \tag{5}\]
where \(f_{\theta}(\mathbf{x}_{t}^{kp},t)\) is the stacked Transformer network to reconstruct \(\mathbf{H}^{kp}\) at time step \(t\).
Since the main objective of the diffusion generation module is to reconstruct the original input encoding, the objective loss of the continuous diffusion module can be defined as:
\[\mathcal{L}_{dif}=\sum_{t=1}^{T}\|\mathbf{H}^{kp}-f_{\theta}(\mathbf{x}_{t}^{kp },t)\|^{2}+\mathcal{R}(\mathbf{x}_{0}) \tag{6}\]
where \(\mathcal{R}(\mathbf{x}_{0})\) is an L2 regularization term for \(\mathbf{x}_{0}\).
### Keyphrase Ranking
After the diffusion generation process, the generated keyphrase embeddings \(\mathbf{\tilde{H}}^{kp}\) are concatenated with each phrase representation \(\mathbf{s}_{i}^{k}\). This aims to inject the information from keyphrases into each phrase, resulting in a performance improvement of keyphrase ranking. Specifically, the final phrase representation is formulated as:
\[\tilde{\mathbf{s}}_{i}^{k}=\mathbf{s}_{i}^{k}\,||\,\textbf{flat}(\mathbf{\tilde{H}}^{kp}) \tag{7}\]
where \(\textbf{flat}(\mathbf{x})\) means that \(\mathbf{x}\) is flattened to a vector. Equation 7 means that the final phrase representation not only contains the original phrase representation but also all the reconstructed keyphrase information.
For training the model to rank each phrase, we introduce a contrastive rank loss. Following the previous work Sun et al. (2021), we first take a feedforward layer to project the input representation \(\tilde{\mathbf{s}}_{i}^{k}\) to a scalar score:
\[r(\tilde{\mathbf{s}}_{i}^{k})=\textbf{FeedForward}(\tilde{\mathbf{s}}_{i}^{k}) \tag{8}\]
Then the margin rank loss is introduced to learn to rank a keyphrase \(\tilde{\mathbf{s}}_{+}\) ahead of a non-keyphrase \(\tilde{\mathbf{s}}_{-}\) for the given document \(D\):
\[\mathcal{L}_{rank}=\sum_{\tilde{\mathbf{s}}_{+},\tilde{\mathbf{s}}_{-}\in D}\max(0,1-r(\tilde{\mathbf{s}}_{+})+r(\tilde{\mathbf{s}}_{-})) \tag{9}\]
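A minimal PyTorch rendering of the scorer and the margin rank loss of Eqs. (8)-(9) might look as follows (our own sketch; the hidden sizes and margin are illustrative):

```python
import torch
import torch.nn as nn

class PhraseRanker(nn.Module):
    """Scores each final phrase representation with a feedforward layer (cf. Eq. (8))."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, phrase_reprs):                  # (num_phrases, dim)
        return self.score(phrase_reprs).squeeze(-1)   # (num_phrases,)

def margin_rank_loss(scores, is_keyphrase, margin=1.0):
    """Pairwise hinge loss of Eq. (9): keyphrases should outrank non-keyphrases."""
    pos = scores[is_keyphrase]
    neg = scores[~is_keyphrase]
    diff = margin - pos.unsqueeze(1) + neg.unsqueeze(0)   # all (positive, negative) pairs
    return torch.clamp(diff, min=0.0).sum()

ranker = PhraseRanker(dim=256)
reprs = torch.randn(30, 256)
labels = torch.zeros(30, dtype=torch.bool)
labels[:3] = True
print(margin_rank_loss(ranker(reprs), labels).item())
```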
### Keyphrase Classification
Combining the keyphrase classification task during training can enhance the phraseness measurement of the phrase Sun et al. (2021); Song et al. (2021). Similar to previous work Xiong et al. (2019); Sun et al. (2021); Song et al. (2021), we introduce a classification loss for each final phrase representation for multi-task learning. We found that the use of supervised VIB significantly improves the ranking performance (see Ablation Study). Supervised VIB aims to preserve the information about the target classes in the latent space while filtering out irrelevant information from the input Voloshynovskiy et al. (2019). Given the final phrase representation \(\tilde{\mathbf{s}}_{i}^{k}\), the supervised VIB first compresses the input to a latent variable \(z\sim q_{\phi_{1}}(z|\tilde{\mathbf{s}}_{i}^{k})\). We apply two linear layers to construct the parameters of \(q\) using the following equations:
\[\begin{split}\mu&=\mathbf{W}_{\mu}\tilde{\mathbf{s} }_{i}^{k}+\mathbf{b}_{\mu}\\ \sigma^{2}&=\mathbf{W}_{\sigma}\tilde{\mathbf{s}}_{ i}^{k}+\mathbf{b}_{\sigma}\end{split} \tag{10}\]
where \(\mu\) and \(\sigma\) are the parameters of a multivariate Gaussian, representing the latent feature space of the phrase; \(\mathbf{W}\) and \(\mathbf{b}\) are the weights and biases of the linear layers, respectively. The posterior distribution \(z\sim q_{\phi_{1}}(z|\tilde{\mathbf{s}}_{i}^{k})\) is approximated via the reparameterization trick Kingma and Welling (2013):
\[z=\mu+\sigma\epsilon,\text{where }\epsilon\sim\mathcal{N}(0,1) \tag{11}\]
Since the main objective of VIB is to preserve target class information while filtering out irrelevant information from the input, the objective loss function for the supervised VIB is based on a classification loss and a compression loss. Denoting by \(y\) the true label of the input phrase, the objective loss of the supervised VIB is defined as:
\[\begin{split}\mathcal{L}_{vib}(\phi)&=\mathbb{E}_{z}[-\log p_{\phi_{2}}(y|z)]\\ &+\alpha\mathbb{E}_{\tilde{\mathbf{s}}_{i}^{k}}[D_{KL}(q_{\phi_{1}}(z|\tilde{\mathbf{s}}_{i}^{k}),pr(z))]\end{split} \tag{12}\]
where \(pr(z)\) is an estimate of the prior probability \(q_{\phi_{1}}(z)\), \(\alpha\) ranges in \([0,1]\), \(\phi\) denotes the neural network parameters, and \(D_{KL}\) is the Kullback-Leibler divergence. We use a multi-layer perceptron with one linear layer and a softmax function to calculate \(p_{\phi_{2}}(y|z)\). Note that Equation 12 can be approximated by the Monte Carlo sampling method with sample size \(M\).
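The supervised VIB of Eqs. (10)-(12) can be sketched as below (our own simplified implementation: it predicts \(\log\sigma^{2}\) for numerical stability, uses a single Monte Carlo sample, and assumes a standard normal prior for \(pr(z)\), which makes the KL term available in closed form):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedVIB(nn.Module):
    """Compress a phrase representation to z, then classify (cf. Eqs. (10)-(12))."""
    def __init__(self, in_dim, latent_dim=128, n_classes=2, alpha=2.8e-6):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)   # log sigma^2 for stability
        self.classifier = nn.Linear(latent_dim, n_classes)
        self.alpha = alpha

    def forward(self, s, y):
        mu, logvar = self.mu(s), self.logvar(s)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        ce = F.cross_entropy(self.classifier(z), y)               # E_z[-log p(y|z)]
        # KL( N(mu, sigma^2) || N(0, I) ), closed form for a standard normal prior
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
        return ce + self.alpha * kl

vib = SupervisedVIB(in_dim=256)
loss = vib(torch.randn(30, 256), torch.randint(0, 2, (30,)))
print(loss.item())
```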
### Optimization and Inference
We jointly optimize the diffusion module, ranking network, and supervised VIB end-to-end. Specifically, the overall training objective loss can be represented as:
\[\mathcal{L}=\mathcal{L}_{dif}+\mathcal{L}_{vib}+\mathcal{L}_{rank} \tag{13}\]
For inference, the Transformer encoder first obtains the initial document embedding \(\mathbf{h}^{D}\), and then the one-step Markov transition \(q(\mathbf{x}_{0}^{D}|\mathbf{h}^{D})\) is performed. To construct the noised keyphrase embeddings \(\mathbf{x}_{T}^{kp}\), we randomly sample \(m\) Gaussian noise embeddings such that \(\mathbf{x}_{T}^{kp}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Then the reverse process is applied to remove the Gaussian noise of \(\mathbf{x}_{T}=\mathbf{x}_{0}^{D}||\mathbf{x}_{T}^{kp}\) iteratively and obtain the output keyphrase embeddings \(\mathbf{\tilde{H}}^{kp}=[\tilde{\mathbf{h}}_{1}^{kp},\tilde{\mathbf{h}}_{2}^{kp},...,\tilde{\mathbf{h}}_{m}^{kp}]\). After that, each original phrase representation \(\mathbf{s}_{i}^{k}\) is concatenated with the flattened keyphrase embeddings \(\mathbf{\tilde{H}}^{kp}\) and fed into the ranking network to obtain the final score for each phrase.
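The inference pipeline can be summarized in code as follows. This is our own simplified, DDIM-style sketch in which the denoiser is assumed to predict the clean keyphrase latents directly, so it should be read as an illustration of the pipeline rather than the exact sampler used by the authors.

```python
import torch

@torch.no_grad()
def infer_keyphrase_embeddings(f_theta, x0_doc, m, dim, T, alpha_bars):
    """Simplified reverse process: f_theta predicts the clean keyphrase latents,
    which are re-noised to the previous step (DDIM-like, illustrative only)."""
    x_kp = torch.randn(x0_doc.size(0), m, dim)            # x_T^kp ~ N(0, I)
    for t in reversed(range(T)):
        x_t = torch.cat([x0_doc, x_kp], dim=1)            # x_t = x_0^D || x_t^kp
        h_kp_hat = f_theta(x_t, t)                        # predicted clean H^kp
        if t > 0:
            a_prev = alpha_bars[t - 1]
            x_kp = torch.sqrt(a_prev) * h_kp_hat + torch.sqrt(1 - a_prev) * torch.randn_like(h_kp_hat)
        else:
            x_kp = h_kp_hat
    return x_kp                                           # generated keyphrase embeddings

def rank_phrases(phrase_reprs, kp_embeddings, ranker):
    """Concatenate the flattened generated keyphrase embeddings to every phrase and score."""
    flat = kp_embeddings.reshape(kp_embeddings.size(0), -1)        # flat(H^kp)
    flat = flat.unsqueeze(1).expand(-1, phrase_reprs.size(1), -1)  # broadcast per phrase
    return ranker(torch.cat([phrase_reprs, flat], dim=-1))

# toy usage with a dummy denoiser (a real model would be the trained Transformer f_theta)
dummy_f = lambda x_t, t: x_t[:, 1:, :]   # identity on the keyphrase slots (illustration only)
T = 100
alpha_bars = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
kp = infer_keyphrase_embeddings(dummy_f, torch.randn(2, 1, 64), m=5, dim=64, T=T, alpha_bars=alpha_bars)
print(kp.shape)  # torch.Size([2, 5, 64])
```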
## Experiments
### Datasets
In this paper, we use seven KPE benchmark datasets in our experiments.
* **OpenKP**Xiong et al. (2019) consists of around 150K web documents from the Bing search engine. We follow its official split of training (134K), development (6.6K), and testing (6.6K) sets. Each document in OpenKP was labeled with 1-3 keyphrases by expert annotators.
* **KP20K**Meng et al. (2017) consists of a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries Meng et al. (2017). We follow the original partition of training (528K), development (20K), and testing (20K) set.
* **SemEval-2010**[16] contains 244 scientific documents. The official split of 100 testing documents is used for testing in our experiments.
* **SemEval-2017**[10] contains 400 scientific documents. The official split of 100 testing documents is used for testing in our experiments.
* **Nus**[11] contains 211 scholarly documents. We treat all 211 documents as testing data.
* **Inspec**[12] contains 2000 paper abstracts. We use the original 500 testing papers and their corresponding controlled (extractive) keyphrases for testing.
* **Krapivin**[10] contains 2305 papers from scientific papers in ACM. We treat all 2305 papers as testing data.
Note that in order to verify the robustness of our model, we test the model trained with KP20K on the testing data of SemEval-2010, SemEval-2017, Nus, Inspec, and Krapivin. For all datasets, only the _present_ keyphrases are used for training and testing. The statistics of the training set of OpenKP and KP20k are shown in Table 3.
### Baselines
To keep consistent with previous work [13, 14, 15, 16], we compare our model with two categories of KPE methods. Traditional KPE baselines and Neural KPE baselines.
Traditional KPE baselines consist of two popular unsupervised KPE methods, statistical feature-based method TF-IDF [10] and graph-based method TextRank [12], and two feature-based KPE systems PROD [15] and Maui [11].
Neural KPE baselines consist of a sequence-to-sequence generation-based model named CopyRNN [13]; the previous state-of-the-art method on OpenKP and KP20K, JointKPE [16], including its two variants ChunkKPE and RankKPE; and two phrase-level classification-based models named SKE-Base-Cls [12] and BLING-KPE [15]. We also compare our model with BERT-based span extraction and sequence tagging methods, both of which come from the implementation of [16].
\begin{table}
\begin{tabular}{c|c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{8}{c|}{**OpenKP**} & \multicolumn{2}{c}{**KP20k**} \\ \cline{2-11} & \multicolumn{2}{c|}{FI@5 F1@1 R@1} & \multicolumn{2}{c|}{FI@3 F0@3 R@3} & \multicolumn{2}{c|}{FI@5 F0@5 R@5} & \multicolumn{2}{c}{FI@5 F1@10} \\ \hline \multicolumn{1}{c}{**Traditional KPE**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline TF-IDF & 19.6 & 28.3 & 15 & 22.3 & 18.4 & 28.4 & 19.6 & 13.7 & 34.7 & 10.8 & 13.4 \\ TextRank & 5.4 & 7.7 & 4.1 & 7.6 & 6.2 & 9.8 & 7.9 & 5.5 & 14.2 & 18.0 & 15.0 \\ Maui & - & - & - & - & - & - & - & - & - & 27.3 & 24.0 \\ PROD & 24.5 & 35.3 & 18.8 & 23.6 & 19.5 & 29.9 & 18.8 & 13.1 & 33.1 & - & - \\ \hline \multicolumn{1}{c}{**Neural KPE**} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline CopyRNN & 21.7 & 28.8 & 17.4 & 23.7 & 18.5 & 33.1 & 21 & 14.1 & 41.3 & 32.7 & 27.8 \\ BLING-KPE & 28.5 & 40.4 & 22.0 & 30.3 & 24.8 & 39.0 & 27.0 & 18.8 & 48.1 & - & - \\ SKE-Base-Cls & - & - & - & - & - & - & - & - & 39.2 & 33.0 \\ BERT-Span & 34.1 & 46.6 & 28.9 & 34.0 & 27.7 & 28.9 & 29.3 & 20.3 & 59.3 & 39.3 & 32.5 \\ BERT-SeqTag & 37.0 & 50.2 & 31.5 & 37.4 & 30.5 & 54.1 & 31.8 & 22.2 & 64.2 & 40.7 & 33.5 \\ ChunkKPE & 37.0 & 50.4 & 31.4 & 37.0 & 30.5 & 53.3 & 31.1 & 21.7 & 62.7 & 41.2 & 33.7 \\ RankKPE & 36.9 & 50.2 & 31.5 & 38.1 & 31.1 & 55.1 & 32.5 & 22.7 & 65.5 & 41.3 & 34.0 \\ JointKPE & 37.1 & 50.4 & 31.5 & 38.4 & 31.3 & 55.5 & 32.6 & 22.7 & **65.7** & 41.1 & 33.8 \\ \hline
**Diff-KPE** & **37.8** & **51.4** & **32.2** & **38.5** & **31.4** & **55.6** & **32.7** & **22.8** & **65.7** & **41.7** & **34.3** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overall performance of extractive KPE models on OpenKP development set and KP20k testing set. The results of baselines are obtained from corresponding papers.
\begin{table}
\begin{tabular}{c c c c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**SemEval-2010**} & \multicolumn{2}{c|}{**SemEval-2017**} & \multicolumn{2}{c|}{**Nus**} & \multicolumn{2}{c|}{**Inspec**} & \multicolumn{2}{c}{**Krapivin**} \\ \cline{2-7} & F1@5 F1@10 & F1@5 F1@10 & F1@5 F1@10 & F1@5 F1@10 & F1@5 F1@10 \\ \hline TF-IDF & 12.0 & 18.4 & - & - & 13.9 & 18.1 & 22.3 & 30.4 & 11.3 & 14.3 \\ TextRank & 17.2 & 18.1 & - & - & 19.5 & 19.0 & 22.9 & 27.5 & 17.2 & 14.7 \\ JointKPE & 28.2 & 31.0 & 29.6 & **37.7** & 33.9 & 35.0 & 31.8 & 35.0 & 33.3 & 29.2 \\
**Diff-KPE** & **29.3** & 31.0 & **29.7** & 37.2 & **35.2** & **36.0** & **32.3** & 35.0 & **35.0** & **31.4** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation results on five small scientific testing sets. The results are evaluated using the models trained on KP20k.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & **Avg.** & **Avg.** & **Avg.** & **\% up to** \\ & **Doc Len.** & **KP Len.** & **\# KP** & **5-gram** \\ \hline OpenKP & 1212.3 & 2.0 & 2.2 & 99.2\% \\ KP20k & 169.3 & 1.9 & 3.5 & 99.8\% \\ \hline SemEval-2010 & 9664.2 & 2.0 & 9.5 & 99.8\% \\ SemEval-2017 & 190.6 & 2.3 & 11.3 & 97.9\% \\ Nus & 8707.4 & 1.9 & 8.0 & 99.8\% \\ Inspec & 138.9 & 2.2 & 6.4 & 99.8\% \\ Krapivin & 9354.1 & 1.9 & 3.8 & 99.9\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics of benchmark datasets, including the average length of the document, the average length of the keyphrase, the average number of extractive keyphrases, and the percentage of keyphrases with a maximum length of 5.
### Evaluation Metrics
We use Precision (P), Recall (R), and F-measure (F1) of the top \(K\) predicted keyphrases for evaluating the performance of the KPE models. Following the prior research [10, 11], we utilize \(K=\{1,3,5\}\) on OpenKP and \(K=\{5,10\}\) on others. When determining the exact match of keyphrases, we first lowercase the candidate keyphrases and reference keyphrases, and then we apply Porter Stemmer [20] to both of them.
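The matching protocol above can be implemented in a few lines (our own sketch, using NLTK's Porter stemmer and assuming the predictions are already ranked):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def normalize(phrase):
    """Lowercase and Porter-stem each word of a phrase before exact matching."""
    return " ".join(stemmer.stem(w) for w in phrase.lower().split())

def precision_recall_f1_at_k(predicted, references, k):
    """Exact match after normalization, computed on the top-k predictions."""
    preds = {normalize(p) for p in predicted[:k]}
    refs = {normalize(r) for r in references}
    hits = len(preds & refs)
    p = hits / max(len(preds), 1)
    r = hits / max(len(refs), 1)
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

print(precision_recall_f1_at_k(
    ["adventure time", "comic book series", "ryan north"],
    ["adventure time", "ryan north"], k=3))
```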
### Implementation details
We truncate or zero-pad each document due to the input length limitation (512 tokens). We use the base version of BERT to generate initial word embeddings. We also use the base version of Sentence-BERT [13] to generate initial fixed phrase embeddings for the diffusion module. The maximum length of k-grams is set to \(N=5\) for all datasets. The maximum number of diffusion time steps \(T\) is set to 100, and \(\alpha=2.8e-6\). The hidden size and the number of layers of the Transformer encoder in the diffusion module are set to 8 and 6, respectively. The latent dimension of the VIB module is set to 128. The sample size is \(M=5\). We optimize Diff-KPE using AdamW with a 5e-5 learning rate, 0.1 warm-up proportion, and a batch size of 32. Training used 8 NVIDIA Tesla V100 GPUs and took about 20 hours for 5 epochs.
## Results and Analysis
In this section, we present the evaluation results of the proposed Diff-KPE on seven widely-used benchmark datasets (OpenKP, KP20k, SemEval-2010, SemEval-2017, Nus, Inspec, Krapivin).
### Overall Performance
Table 1 shows the evaluation results of Diff-KPE and baselines. Based on the results, it is evident that the neural KPE methods outperform all the traditional KPE algorithms. Among the traditional methods, the unsupervised methods TF-IDF and TextRank show stable performance on both OpenKP and KP20k datasets, while the feature-based methods PROD and Maui outperform them on OpenKP and KP20k respectively. This is not surprising, as supervised methods benefit from large annotated data during training.
For neural KPE methods, CopyRNN performs the worst as it also focuses on generating abstractive keyphrases. JointKPE and its variant RankKPE show powerful performance, outperforming other baselines such as the phrase classification-based models BLING-KPE, SKE-Base-Cls, BERT-Span, and the sequence tagging method BERT-SeqTag. It is worth noting that BERT-SeqTag and ChunkKPE exhibit competitive performance compared to RankKPE, indicating their robustness and strong perfor
\begin{table}
\begin{tabular}{c|c c c|c c c c|c c} \hline \hline
**Setting** & \multicolumn{2}{c|}{F1@1 P@1 R@1} & \multicolumn{2}{c}{F1@3 P@3 R@3} & \multicolumn{2}{c}{F1@5 P@5 R@5} \\ \hline
**Diff-KPE** & **37.8** & **51.4** & **32.2** & **38.5** & **31.4** & **55.6** & **32.7** & **22.8** & **65.7** \\ - _w/o_ VIB & 36.5 & 49.4 & 31.09 & 37.7 & 30.8 & 54.5 & 32.1 & 22.4 & 64.8 \\ - _w/o_ diffusion & 36.6 & 49.7 & 31.2 & 37.9 & 31.0 & 54.8 & 32.3 & 22.5 & 65.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation metrics on the OpenKP development set by different settings. “_w/o_ VIB” means Diff-KPE without VIB module, “_w/o_ diffusion” means Diff-KPE without diffusion module.
\begin{table}
\begin{tabular}{|l|} \hline \hline
**(1) Partial Document**: \\... in Comics RealWorld Objects Non canon Adventure Time comic English Share Adventure Time is a comic book \\ series published by BOOM Studios written by Dinosaur Comics creator Ryan North, and illustrated by Shelli Caroline \\ and Braden Lamb. The comic book is released monthly beginning with issue 1 in February 2012... \\ (URL: http:adventuretime.wikia.comwikiAdventure_Time_(comic)) \\
**Reference Keyphrases**: \\ adventure time; ryan north \\ \hline \multicolumn{2}{|l|}{_Without diffusion module_:} \\ adventure time; boom studios; comic book series; dinosaur comics; ryan north \\
**_Diff-KPE_**: \\ adventure time; comic book series; ryan north; comic book; dinosaur comics \\ \hline \hline
**(2) Partial Document**: \\ CodeSnip: How to Run Any Oracle Script File Through Shell Script in UNIX... by Deepankar Sarangi... Listing 1... \\ The first line is a comment line which is UNIX kernel specific. In the following approach the available shell is \\ KORN shell... \\ (URL: [http://aspalliance.com/1589](http://aspalliance.com/1589)\_CodeSnip_How_to_Run_Any_Oracle_Script_File_Through_Shell_Script_in_UNIX.4) \\
**Reference Keyphrases**: \\ codeSnip; oracle script \\ \hline \multicolumn{2}{|l|}{_Without diffusion module_:} \\ shell script; oracle script file through; oracle script file; shell scripts; codesnip \\
**Diff-KPE**: \\ univ; shell script; codesnip; oracle script file through; oracle script \\ \hline \hline \end{tabular}
\end{table}
Table 5: Example of keyphrase extraction results on two selected OpenKP development examples. The phrase in red is the desired reference keyphrase.
mance.
Overall, Diff-KPE outperforms all baselines on both OpenKP and KP20k datasets. Compared to traditional KPE approaches, Diff-KPE exhibits significant performance improvements. Furthermore, Diff-KPE also outperforms the previous state-of-the-art neural baseline method JointKPE, with slight improvements in F1@3 and F1@5 but a significant improvement in F1@1. We hypothesize that the joint training of three modules empowers the ranking network, thereby enhancing the extraction performance.
Moreover, to verify the robustness of Diff-KPE, we also evaluate our model trained with the KP20k dataset on five additional small scientific datasets, as shown in Table 2. Diff-KPE demonstrates better or competitive results on all datasets compared to the best baseline JointKPE. We believe this phenomenon arises from the benefit of the diffusion module: during inference, the diffusion model can generate candidate keyphrase embeddings, providing keyphrase information for the ranking network to better rank each phrase.
### Ablation Study
To understand the effect of each component of our Diff-KPE model, we perform an ablation study on the OpenKP development set with the following settings:
* _w/o_ VIB: the VIB module is replaced with a single feedforward layer for keyphrase classification.
* _w/o_ diffusion: the diffusion module is removed, and only the phrase representations obtained from the CNNs are used for ranking and classification.
* Diff-KPE: the original full joint model.
As shown in Table 4, the absence of the diffusion model or VIB model results in a significant drop in performance across all metrics, particularly in F1@1 (1.2 and 1.3 respectively). This performance decline indicates the crucial role of both the diffusion and VIB models in keyphrase ranking. The strong performance of Diff-KPE can be attributed to two main advantages. Firstly, the diffusion module directly incorporates the semantic information of keyphrases into the final phrase representations. Secondly, the supervised VIB module introduces an external classification loss during training, which indirectly enhances the diffusion module or CNNs to generate more informative n-gram embeddings. Therefore, it is evident that the addition of the diffusion module and supervised VIB significantly contributes to the overall performance improvement.
## Case Study
To further demonstrate the effectiveness of the diffusion module in Diff-KPE, we provide examples of the extracted keyphrases from our different models (Diff-KPE and Diff-KPE without diffusion module). Two typical cases from the development set of OpenKP are shown in Table 5.
In case (1), both Diff-KPE and Diff-KPE without diffusion successfully extract the desired reference keyphrases "adventure time" and "ryan north" within their top 5 ranked prediction phrases. However, Diff-KPE ranks the phrase "ryan north" higher, resulting in a higher F1@3 score in this case. This illustrates that adding the diffusion module helps the desired keyphrase representation obtain a higher rank score.
Similarly, in case (2), Diff-KPE ranks the desired keyphrases "codesnip" and "oracle script" higher than the model without diffusion. As a result, Diff-KPE successfully extracts all the reference keyphrases in case (2). The main reason for these results may be that the keyphrase embeddings generated by the diffusion module are directly injected into each phrase representation, enabling the ranking network to better rank each phrase by utilizing the keyphrase information.
We also analyze the quality of the generated keyphrase embeddings. In Figure 2 we apply T-SNE [20] to reduce all phrase representations to 2 dimensions. We can see that the oracle keyphrases (green dots) and generated keyphrases (blue dots) are clustered together and far away from most non-keyphrase embeddings (red dots). This finding demonstrates that our diffusion model is powerful in recovering keyphrase embeddings.
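A plot in the style of Figure 2 can be produced with scikit-learn's T-SNE; in the sketch below (our own illustration) the three sets of embeddings are random placeholders for the actual non-keyphrase, oracle keyphrase, and diffusion-generated representations.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
non_kp = rng.normal(0, 1, (200, 768))         # non-keyphrase representations (placeholder)
oracle_kp = rng.normal(3, 0.5, (10, 768))     # oracle keyphrase embeddings (placeholder)
generated_kp = rng.normal(3, 0.5, (10, 768))  # diffusion-generated embeddings (placeholder)

X = np.vstack([non_kp, oracle_kp, generated_kp])
Y = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)

plt.scatter(Y[:200, 0], Y[:200, 1], c="red", s=8, label="non-keyphrase")
plt.scatter(Y[200:210, 0], Y[200:210, 1], c="green", s=20, label="oracle keyphrase")
plt.scatter(Y[210:, 0], Y[210:, 1], c="blue", s=20, label="generated keyphrase")
plt.legend()
plt.savefig("tsne_phrases.png", dpi=150)
```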
## Conclusion
In this paper, we propose Diff-KPE, a novel joint keyphrase extraction (KPE) model composed of three essential modules: the diffusion module, the ranking network, and a supervised VIB module. Each component plays a crucial role in learning expressive phrase representations. The diffusion module is responsible for generating candidate keyphrase embeddings, effectively infusing keyphrase semantic information into the final phrase representation. Simultaneously, the supervised VIB introduces a classification loss for each phrase, encouraging the model to generate more informative representations and ultimately improving the ranking performance. Experimental results on seven keyphrase extraction benchmark datasets demonstrate the effectiveness and superiority of Diff-KPE. In future work, we plan to explore the application of Diff-KPE in abstractive keyphrase generation, leveraging its powerful architecture and flexibility for generating concise and informative keyphrases.
Figure 2: T-SNE visualization of phrase embeddings from OpenKP dataset. |
2303.12413 | Towards Nielsen-Thurston classification for surfaces of infinite type | We introduce and study tame homeomorphisms of surfaces of infinite type. These are maps for which curves under iterations do not accumulate onto geodesic laminations with non-proper leaves, but rather just a union of possibly intersecting curves or proper lines. Assuming an additional finiteness condition on the accumulation set, we prove a Nielsen-Thurston type classification theorem. We prove that for such maps there is a canonical decomposition of the surface into invariant subsurfaces on which the first return is either periodic or a translation. | Mladen Bestvina, Federica Fanoni, Jing Tao | 2023-03-22T09:22:01Z | http://arxiv.org/abs/2303.12413v2 | # Towards Nielsen-Thurston Classification
###### Abstract.
We introduce and study tame homeomorphisms of surfaces of infinite type. These are maps for which curves under iterations do not accumulate onto geodesic laminations with non-proper leaves, but rather just a union of possibly intersecting curves or proper lines. Assuming an additional finiteness condition on the accumulation set, we prove a Nielsen-Thurston type classification theorem. We prove that for such maps there is a canonical decomposition of the surface into invariant subsurfaces on which the first return is either periodic or a translation.
## 1. Introduction
In this paper, we study the isotopy classes of self-homeomorphisms, or mapping classes, of a connected and oriented surface \(S\) of _infinite_ type. In the classical setting of surfaces of finite type, the Nielsen-Thurston classification theorem states that a mapping class is either periodic, reducible, or pseudo-Anosov. By Nielsen realization, a periodic class is represented by an isometry of some hyperbolic metric on \(S\). Instead a pseudo-Anosov map preserves a pair of transverse geodesic laminations which are minimal and filling. In the reducible case, one can decompose \(S\) along an invariant multicurve such that the first return map to a complementary subsurface is either periodic or pseudo-Anosov. The long term goal of this project is to extend this understanding to surfaces of infinite type. Here, we introduce the notion of _tame_ maps and prove a structure theorem for the subclass of _extra tame_ maps.
To motivate our definitions, let's first consider the various approaches to the Nielsen-Thurston classification and how they may generalize to surfaces of infinite type. Thurston's original proof [22, 1] finds a fixed point in the compactified Teichmuller space. Bers' proof [9] also uses Teichmuller space, but from the point of view of extremal quasiconformal maps. Casson's proof [13] finds an invariant geodesic lamination by iterating a curve, and Nielsen's original approach [18, 19, 20], completed by Miller [17] and Handel-Thurston [14], analyzes the dynamics of the action on the circle at infinity of the lifts to the universal cover. The Bestvina-Handel proof [10] looks for efficient spines of the surface leading to invariant train tracks, and relies on the Perron-Frobenius theorem.
In the infinite-type setting, Teichmuller space is very complicated: in particular, it is infinite-dimensional, it has uncountably many connected components and there are maps that do not preserve any component. So it is hard to imagine how one could adapt Thurston's or Bers' approach to the infinite-type setup. The train track approach would require considering infinite matrices without a good analog of the Perron-Frobenius theory. Instead, the viewpoints of Casson and Nielsen-Handel-Thurston seem more amenable, and in this paper we take Casson's approach.
Given a map \(f\), the first step in Casson's program is to construct a geodesic lamination \(\lambda\) as the limit of a subsequence \(f^{n_{i}}(\alpha)\) of iterates of a curve \(\alpha\), pulled tight. In order to ensure that \(f^{n}(\lambda)\) has no transverse intersections with \(\lambda\) (in which case the closure of \(\bigcup_{n}f^{n}(\lambda)\) is an \(f\)-invariant geodesic lamination), this limiting process should be "robust", in the sense that
invariant under the action of a system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of equations that system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of equations that are invariant under the action of a system of a system of equations that are invariant under the action of
component of \(S_{0}\) homeomorphic to an ideal triangle (see for instance the first map in Figure 18).
For general tame maps, the statement of Theorem A first needs to be adjusted to include maps with irrational rotation behavior. However, it turns out the full analogue of the statement cannot hold, as there is not always a nice decomposition for general tame maps. More precisely, there are tame maps for which the natural invariant pieces are no longer subsurfaces, meaning that although they have non-empty interior, their boundaries accumulate in a complicated way (see the _Strategy of proof_ section for some more details). Even replacing finiteness of limit sets by local finiteness is not enough to address this issue (see Section 6.1). Therefore, we propose the following conjecture for general tame maps.
**Conjecture B**.: _Let \(f\) be a tame map of a surface \(S\) of infinite type. Then there exists an \(f\)-invariant collection \(\Gamma\) of pairwise disjoint lines and curves on \(S\), such that for each component \(X\) of \(S\smallsetminus\Gamma\), the first return of \(f\) to \(X\) is either periodic, a translation, or has irrational rotation behavior._
If \(f\) is not tame, then one could try to construct an \(f\)-invariant geodesic lamination using the Casson technique. This approach has been successfully carried out for the class of _irreducible end periodic_ maps by Handel and Miller [12]. In our vision, a comprehensive structure theorem for an arbitrary map \(f\) should begin by decomposing the surface \(S\) into \(f\)-invariant pieces on which the first return is either tame or preserves two transverse geodesic laminations. Therefore, a structure theorem for tame maps is an essential part of the theory, and Theorem A is the first step in this program.
### Strategy of proof
For any map \(f\) on \(S\), the general strategy toward decomposing \(S\) into \(f\)-invariant pieces is to organize curves according to their behavior under \(f\), and look for the smallest subsurface _spanned_ by curves of the same type. For instance, consider the collection \(\mathcal{C}_{\mathrm{per}}\) of \(f\)-periodic curves. If the smallest subsurface \(S_{\mathrm{per}}\) spanned by \(\mathcal{C}_{\mathrm{per}}\) exists, then \(S_{\mathrm{per}}\) is \(f\)-invariant and it is not hard to see that the action of \(f\) on each component of \(S_{\mathrm{per}}\) is periodic (see Section 8). We can then repeat this process and look for the subsurface \(S_{\infty}\) spanned by the collection \(\mathcal{C}_{\infty}\) of _wandering_ curves (curves that leave every compact set under iteration). Here, with some work, one can show that if \(S_{\infty}\) exists, then the action of \(f\) on each component of \(S_{\infty}\) is by a translation. This characterization can be thought of as the analogue of Brouwer's plane translation theorem ([11]).
**Theorem C**.: _Suppose \(f\) is a homeomorphism of a surface \(S\) such that every curve of \(S\) is wandering. Then there is a hyperbolic metric on \(S\) with respect to which \(f\) is isotopic to an isometric translation._
If \(f\) is tame, to prove the conjecture it remains to show that \(S\smallsetminus(S_{\mathrm{per}}\cup S_{\infty})\) can be decomposed into pieces by cutting along lines and curves, and that the map has irrational rotation behavior on all components with nontrivial topology. When \(f\) is extra tame, there is no irrational rotation behavior, so we just need to show that the components of \(S\smallsetminus(S_{\mathrm{per}}\cup S_{\infty})\) have trivial topology.
As the above procedure suggests, one of the technical steps is to establish the existence of the subsurface spanned by a collection of curves. With a finite set of curves, this subsurface is always well-defined. To deal with an infinite collection, we can take the span \(S_{n}\) of the first \(n\) curves and consider the limiting object. When the underlying surface \(S\) has finite type, the topology of the \(S_{n}\) stabilizes by Euler characteristic considerations, and the procedure described above leads to the Nielsen-Thurston decomposition of \(f\). On the other hand, if \(S\) has infinite type, then the limiting object may not be a subsurface of \(S\). Indeed, there
are examples of tame maps where the collection of periodic or wandering curves displays this behavior (see Section 6.1). This issue turns out to be a significant obstacle in proving the conjecture in full generality. However, under the extra tameness assumption, the bad behavior disappears, allowing for this fundamental step to be completed successfully.
**Proposition D**.: _If \(f\) is extra tame, then the collection of \(f\)-periodic curves spans a subsurface, and the collection of \(f\)-wandering curves spans a subsurface. Moreover, the complement of their union in \(S\) is a subsurface with no essential non-peripheral curves._
### Connection to actions on graphs
A variation of Nielsen-Thurston classification, perhaps a more tractable problem, is to study the action of mapping classes on certain Gromov-hyperbolic graphs associated to the surface and classify them according to their type as isometries of the graphs. For instance, in the finite-type setting, pseudo-Anosov maps act hyperbolically on the curve graph of \(S\) and all other maps act elliptically.
In the infinite-type setting, the curve graph always has bounded diameter, but for many surfaces there are other interesting infinite-diameter, Gromov-hyperbolic graphs. One example is the ray graph defined by Bavard [8] for the plane minus the Cantor set \(\mathbb{R}^{2}\smallsetminus C\), which has been generalized to surfaces with one isolated end by Aramayona-Fossas-Parlier in [5] and even further by Bar-Natan-Verberne in [7], where they define the _grand-arc graph_. In [8], Bavard produced a map of \(\mathbb{R}^{2}\smallsetminus C\) that acts hyperbolically on the ray graph, and her construction has been generalized to a large class of surfaces by Abbott-Miller-Patel in [2]. However, the classification of isometries remains largely open, and even the answer to the following question is unknown.
_Question 1.1_.: Are there maps of \(\mathbb{R}^{2}\smallsetminus C\) that act as parabolic isometries of the ray graph? More generally, are there maps that act as parabolic isometries of the grand arc graphs?
By our work above, extra tame maps are elliptic isometries of the ray/grand arc graphs, and our conjectural picture of tame maps implies that tame maps are also elliptic isometries.
### Acknowledgements
The authors would like to thank the American Institute of Mathematics for the hospitality during the workshop "Surfaces of infinite type", during which this work started. M.B. and J.T. gratefully acknowledge the support by the National Science Foundation under grant numbers DMS-1905720 and DMS-1651963 respectively. F.F. thanks Peter Feller for useful conversations.
## 2. Background
By a surface \(S\) we will mean a connected orientable \(2\)-manifold, without boundary unless otherwise stated. When we need to consider surfaces with boundary, we will usually say _bordered_ surface for emphasis.
A surface \(S\) has _finite type_ if \(\pi_{1}(S)\) is finitely generated and _infinite type_ otherwise. By the classification of surfaces, \(S\) is of finite type if and only if \(S\) is homeomorphic to a closed genus \(g\) surface minus finitely many points. A _pair of pants_ is a (possibly bordered) surface homeomorphic to either a three-holed sphere, or a two-holed, once-punctured sphere, or a twice-punctured disk.
To a surface we can associate its _ends_, which can be _planar_ or _nonplanar_. A _puncture_ is a planar isolated end. We refer to [6] for definitions and properties of ends of a surface.
The _mapping class group_ of \(S\) is the group \(\operatorname{Map}(S)\) of isotopy classes of orientation-preserving homeomorphisms of \(S\).
By a _hyperbolic metric_ on \(S\) we always mean a complete hyperbolic metric without funnels or half-planes (i.e. the hyperbolic surface coincides with its convex core, which is also called
a metric of the _first kind_, see [4]). Hyperbolic metrics on surfaces with boundary will be assumed to contain no funnels or half-planes and to have totally geodesic boundary.
_Curves_ on a surface \(S\) are assumed to be simple, closed and homotopically nontrivial. A curve is _essential_ if it is not homotopic to a puncture. If \(S\) has boundary, then a curve is called _peripheral_ if it is homotopic to a boundary component. When \(S\) is endowed with a hyperbolic metric with geodesic boundary, then every essential curve is realized by a unique geodesic \(\alpha^{*}\) representing its homotopy class.
A _line_ on \(S\) is the image of a proper embedding \(\mathbb{R}\to S\) which admits a geodesic representative with respect to some (and hence any) hyperbolic metric on \(S\). Note that such lines are essential, in the sense that they are not properly homotopic to an end of \(S\). If the image is a geodesic, then we will sometimes call it a _geodesic line_ for emphasis. We will usually consider lines up to proper isotopy relative to their ends.
A _ray_ is the image of \([0,\infty)\) under a proper embedding and an _arc_ is the image of an embedding of \([0,1]\). Note that under our definition arcs are compact, which is not always the case in the literature. If \(S\) has boundary, a line (or ray or arc) is _peripheral_ if it is properly isotopic, relative to its ends, into the boundary of \(S\). The geodesic representative of a non-peripheral ray (respectively, arc) with endpoint(s) on \(\partial S\) is the unique geodesic ray (respectively, arc) orthogonal to \(\partial S\) and properly isotopic to the ray (respectively, arc) relative to \(\partial S\).
A _geodesic lamination_ is a closed subset of \(S\) which is the union of complete simple and pairwise disjoint geodesics (with respect to some hyperbolic metric on \(S\)), called the _leaves_ of the lamination. An essential curve or a line is an example of a geodesic lamination with a single leaf.
The _geometric intersection number_\(i(\alpha,\beta)\) is the minimum number of intersections between representatives of \(\alpha\) and \(\beta\), where \(\alpha\) and \(\beta\) could each be either a line, curve, or a leaf of a lamination. Note that \(i(\alpha,\beta)\) could be infinite, but we have the following characterization of lines.
**Lemma 2.1**.: _Let \(S\) be a surface possibly with boundary and equipped with a hyperbolic metric. Then a complete simple geodesic \(\ell\subset S\) is a line or a curve if and only if \(i(\ell,\alpha)<\infty\) for all curves \(\alpha\)._
Proof.: Clearly, if \(i(\ell,\alpha)=\infty\) for some curve \(\alpha\), then \(\ell\) cannot be proper. Now suppose \(i(\ell,\alpha)<\infty\) for all curves \(\alpha\). Let \(K\subset S\) be a compact subsurface. Since \(\sum_{\alpha\subset\partial K}i(\ell,\alpha)<\infty\), \(\ell\cap K\) has a finite number of arcs. If a component \(\tau\subset\ell\cap K\) has infinite length, then \(\tau\) must accumulate on some geodesic lamination \(\alpha\), or on a boundary component \(\alpha\) of \(K\). But then any curve \(\beta\) intersecting \(\alpha\) will have \(i(\ell,\beta)\geq i(\tau,\beta)=\infty\). This shows \(\ell\cap K\) is always a finite number of arcs, hence \(\ell\) is proper.
### Subsurfaces
A closed subset \(X\subset S\) is called a _subsurface_ if it is the image of a proper embedding of a bordered, possibly disconnected, surface. We further require each compact boundary component of \(X\) to be an essential curve. In particular, no component of \(X\) can be a closed disk with at most one puncture. A connected subsurface \(X\) is called _essential_ if its double either has negative Euler characteristic or is of infinite type. Topologically, this rules out annuli and closed disks with at most two points removed from the boundary. Note that \(X\) is a subsurface if and only if \(Y=\overline{S\setminus X}\) is also a subsurface. We will usually consider a subsurface up to proper isotopy.
Two subsurfaces in \(S\) are _disjoint_ if they have disjoint representatives. Similarly, a curve or line or geodesic lamination is disjoint from a subsurface if they have disjoint representatives. Note that in this definition, the boundary of a subsurface is considered disjoint from the subsurface.
If we fix a hyperbolic metric on \(S\), then for a non-annular subsurface \(X\) of \(S\), the interior of \(X\) admits a canonical representative \(X^{\circ}\) whose metric completion \(X^{*}\) is a hyperbolic surface with totally geodesic boundary homeomorphic to \(X\). We will call \(X^{\circ}\) the _geodesic representative of the interior_ of \(X\).
Let \(\overline{X^{\circ}}\) be the closure of \(X^{\circ}\), and \(\partial X^{\circ}=\overline{X^{\circ}}\smallsetminus X^{\circ}\), which is a disjoint union of simple closed geodesics or geodesic lines. Let \(\partial_{sa}X^{\circ}\) be the collection of components of \(\partial X^{\circ}\) such that both boundary components of a regular neighborhood are contained in \(X^{\circ}\). We define the _almost geodesic representative_ of \(X\) as
\[X^{*}=\overline{X^{\circ}}\smallsetminus\bigcup_{\alpha\in\partial_{sa}X^{ \circ}}N(\alpha)\]
where the \(N(\alpha)\) are pairwise disjoint open regular neighborhoods of the components. Note that \(\overline{X^{\circ}}\) is homeomorphic to \(X^{*}\) if and only if \(\partial_{sa}X^{\circ}\) is empty.
We say that a collection \(\mathcal{C}\) of curves and lines _fills_ a (sub)surface if every non-peripheral curve has positive intersection number with some curve or line in \(\mathcal{C}\).
### Some results on isometries of surfaces
The following well known statement follows from the proof of [13, Theorem 2.7].
**Lemma 2.2**.: _Let \(F\) be a finite-type surface of negative Euler characteristic and \(f\) a mapping class of \(F\). If \(F\) is filled by a finite collection of \(f\)-periodic curves, then \(f\) has finite order._
_Remark 2.3_.: Note that the order of \(f\) in the previous lemma can be arbitrarily larger than the periods of the curves. For instance, look at the torus \(\mathbb{R}^{2}/\mathbb{Z}^{2}\) and at the curves \(\alpha\) and \(\beta\), obtained as the quotients of the lines \(y=nx\) and \(y=-nx\). Puncture the torus at points that are the projections of \(\left(\frac{1+2k}{2n},0\right)\), for \(k=0,\dots,n-1\), and \(\left(\frac{k}{n},\frac{1}{2}\right)\), for \(k=0,\dots,n-1\). Then the map induced by \((x,y)\mapsto\left(x+\frac{1}{2n},y+\frac{1}{2}\right)\) has order \(2n\), but it fixes both \(\alpha\) and \(\beta\).
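To make the arithmetic in the remark explicit: under \((x,y)\mapsto\left(x+\frac{1}{2n},y+\frac{1}{2}\right)\) the quantity \(y-nx\) is unchanged and \(y+nx\) increases by \(1\), so both \(\alpha\) and \(\beta\) are preserved, while the two families of punctures are exchanged. The small Python sketch below (a toy verification only, with hypothetical helper names, not part of the argument) checks the puncture permutation and the order \(2n\) for small values of \(n\).

```python
from fractions import Fraction

def order_and_invariance(n):
    """Check Remark 2.3 for a given n: the translation permutes the
    punctures and has order 2n on the torus R^2 / Z^2."""
    half = Fraction(1, 2)
    step = (Fraction(1, 2 * n), half)

    punctures = {(Fraction(1 + 2 * k, 2 * n), Fraction(0)) for k in range(n)} \
              | {(Fraction(k, n), half) for k in range(n)}

    def f(p):
        # the translation, taken modulo Z^2
        return ((p[0] + step[0]) % 1, (p[1] + step[1]) % 1)

    assert {f(p) for p in punctures} == punctures  # punctures are permuted

    # order of the translation = smallest m with m * step in Z^2
    m, x, y = 1, step[0], step[1]
    while x % 1 != 0 or y % 1 != 0:
        m, x, y = m + 1, x + step[0], y + step[1]
    return m

for n in range(1, 8):
    assert order_and_invariance(n) == 2 * n
print("Remark 2.3 checked for n = 1, ..., 7")
```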
We will also need a result about isometries of infinite-type surfaces, whose proof is essentially borrowed from Afton-Calegari-Chen-Lyman's work ([3]).
**Proposition 2.4**.: _Let \(S\) be an infinite-type surface with boundary and \(f\) a homeomorphism of \(S\) with \(f^{n}\) homotopic to the identity for some \(n>0\). For each compact boundary \(\alpha\) of \(S\), choose an \(f\)-invariant positive number \(\ell(\alpha)\). Then there exists a hyperbolic metric on \(S\) with totally geodesic boundary such that each compact boundary \(\alpha\) has length \(\ell(\alpha)\) and a periodic isometry of \(S\) isotopic to \(f\)._
Proof.: Suppose first that \(S\) has only compact boundary components. By using Nielsen's work [21] (which shows that the proposition holds if \(S\) is of finite type and has compact boundary)
Figure 1. An example of a subsurface \(X\) so that \(\overline{X^{\circ}}\) is not homeomorphic to \(X^{*}\)
and Afton-Calegari-Chen-Lyman's argument (see the proof of [3, Theorem 2]), we can prove the proposition in this case.
If there are noncompact boundary components, then let \(D(S)\) be the double of \(S\) along its noncompact boundary components and extend the map \(f\) to a map \(\hat{f}\) of \(D(S)\). Let \(i\) be the involution on \(D(S)\) with quotient \(S\), which commutes with \(\hat{f}\) by construction. By [3], there is an \(i\)-invariant metric on \(D(S)\) and a periodic isometry \(g\) isotopic to \(\hat{f}\). Since \(i\) is an isometry, the boundaries of \(S\) are realized as geodesics in \(D(S)\). Therefore, \(g\) also commutes with \(i\), and we get a periodic isometry of \(S\) isotopic to \(f\).
## 3. Limit sets
Throughout this section, \(S\) will be a surface without boundary and equipped with a hyperbolic metric, though everything we say will be independent of the choice of such metric. Indeed, the following fact holds (see [23, Theorem 3.6] for the result in the infinite-type setting).
**Theorem 3.1**.: _Let \(m\) and \(m^{\prime}\) be two hyperbolic metrics on \(S\). Then the identity map on \(S\) yields a natural identification of the boundary at infinity of the universal cover of \((S,m)\) and the boundary at infinity of the universal cover of \((S,m^{\prime})\). This induces a homeomorphism between complete geodesics in the universal cover of \((S,m)\) and complete geodesics in the universal cover of \((S,m^{\prime})\), which descends to a homeomorphism between complete \(m\)-geodesics and complete \(m^{\prime}\)-geodesics on \(S\)._
Given a complete \(m\)-geodesic \(\ell\), we call the corresponding \(m^{\prime}\)-geodesic the \(m^{\prime}\)_-straightening_ of \(\ell\). For a collection of \(m\)-geodesics, its \(m^{\prime}\)_-straightening_ is the union of the \(m^{\prime}\)-straightenings of its geodesics. A consequence of Theorem 3.1 is the following:
**Lemma 3.2**.: _Let \(m\) and \(m^{\prime}\) be two hyperbolic metrics on \(S\). If two complete \(m\)-geodesics \(\ell_{1}\) and \(\ell_{2}\) are simple and disjoint, then the \(m^{\prime}\)-straightenings of \(\ell_{1}\) and \(\ell_{2}\) are also simple and disjoint. If \(\lambda\) is an \(m\)-geodesic lamination, its \(m^{\prime}\)-straightening is an \(m^{\prime}\)-geodesic lamination._
The reason why the lemma holds is that (self-)intersections correspond to linked pairs of endpoints at infinity of lifts and convergence of geodesics corresponds to convergence of pairs of endpoints at infinity of lifts. Moreover, we have:
**Proposition 3.3**.: _Let \(m\) and \(m^{\prime}\) be hyperbolic metrics on \(S\) and \(\lambda\) an \(m\)-geodesic lamination. Then there is an ambient isotopy which maps \(\lambda\), leaf by leaf, to its \(m^{\prime}\)-straightening \(\lambda^{\prime}\)._
Proof.: Pick an \(m\)-geodesic pants decomposition \(\mathcal{P}\) which doesn't contain any leaf of \(\lambda\) (this can be done by choosing a pants decomposition and doing enough elementary moves until none of its curves are in \(\lambda\)). In particular \(\mathcal{P}\) is in minimal position with respect to \(\lambda\). We can then find an ambient isotopy \(H_{1}\) of \(S\) sending \(\mathcal{P}\) to \(\mathcal{P}^{\prime}\), the \(m^{\prime}\)-straightening of \(\mathcal{P}\). Let \(\lambda_{1}\) be the image under the isotopy of \(\lambda\). Note that \(\mathcal{P}^{\prime}\) is in minimal position with respect to \(\lambda_{1}\) and doesn't contain any leaf of \(\lambda_{1}\).
Next, given a curve \(\alpha\) in the pants decomposition \(\mathcal{P}^{\prime}\), lift \(\alpha\), \(\lambda_{1}\) and \(\lambda^{\prime}\) to the universal cover of \((S,m^{\prime})\). Look at a lift \(\beta\) of \(\alpha\) and pick an orientation on it; every lift \(\ell_{1}\) of a leaf in \(\lambda_{1}\) intersecting \(\beta\) corresponds to a lift \(\ell^{\prime}_{1}\) of a leaf of \(\lambda^{\prime}\) with the same endpoints, thus \(\ell^{\prime}_{1}\) intersects \(\beta\) as well. Moreover, if \(\ell_{2}\) is another lift of a leaf in \(\lambda_{1}\) intersecting \(\beta\) and \(\ell^{\prime}_{2}\) the corresponding \(m^{\prime}\)-geodesic, the intersections of \(\ell_{1}\) and \(\ell_{2}\) with \(\beta\) come in the same order as the intersections of \(\ell^{\prime}_{1}\) and \(\ell^{\prime}_{2}\) with \(\beta\). So we can find an order-preserving homeomorphism from \(\tilde{\lambda_{1}}\cap\beta\) to \(\tilde{\lambda^{\prime}}\cap\beta\) sending \(\ell\cap\beta\) to \(\ell^{\prime}\cap\beta\), for every \(\ell\subset\tilde{\lambda_{1}}\), where \(\ell^{\prime}\) is the \(m^{\prime}\)-straightening of \(\ell\). Extend this to an equivariant isotopy of a small neighborhood of \(\beta\). By doing this equivariantly at each lift
and on disjoint neighborhoods, we get an isotopy of the universal cover which descends to an ambient isotopy \(H_{2}\) of \(S\), sending \(\lambda_{1}\) to \(\lambda_{2}\) such that for every \(\alpha\) in \(\mathcal{P}^{\prime}\), \(\lambda_{2}\cap\alpha=\lambda^{\prime}\cap\alpha\).
Look at a pair of pants \(P\) of \(\mathcal{P}^{\prime}\). By the transversality of \(\lambda_{2}\) and of \(\lambda^{\prime}\), \(\lambda_{2}\cap P\) (respectively, \(\lambda^{\prime}\cap P\)) is a union of arcs from \(\partial P\) to \(\partial P\). We claim that we can find an isotopy of \(P\), fixing the boundary pointwise, sending \(\lambda_{2}\cap P\) to \(\lambda^{\prime}\cap P\). Indeed, we can divide the arcs according to their homotopy class relative to the boundary. There are at most three classes, which are the same for the arcs in \(\lambda_{2}\) and in \(\lambda^{\prime}\), by how \(\lambda_{2}\) was constructed. For each class we can find a rectangle \(R\) with two sides on \(\partial P\) and two arcs from \(\lambda_{2}\cap P\) in the given homotopy class containing all arcs of \(\lambda_{2}\cap P\) in the homotopy class. We can do the same for \(\lambda^{\prime}\) and get a rectangle \(R^{\prime}\), with the extra assumption that the four corners of \(R^{\prime}\) are the same as the four corners of \(R\). We can then choose an ambient isotopy, fixing \(\partial P\) pointwise, sending \(R\) to \(R^{\prime}\) and all arcs of \(\lambda_{2}\cap R\) to the corresponding arcs of \(\lambda^{\prime}\cap R^{\prime}\).
By doing this for all homotopy classes we get the required isotopy of \(P\). Repeating the procedure in each pair of pants, we get an ambient isotopy \(H_{3}\) of \(S\), fixing all curves in \(\mathcal{P}^{\prime}\) pointwise, and sending \(\lambda_{2}\) to \(\lambda^{\prime}\).
The composition of the three ambient isotopies is the required isotopy.
Let \(\lambda\subset S\) be a closed subset. We say that a sequence of curves \(\{\alpha_{n}\mid n\in\mathbb{N}\}\)_converges_ to \(\lambda\), and that \(\lambda\) is the _limit_ of \(\{\alpha_{n}\mid n\in\mathbb{N}\}\), if for every finite-type subsurface \(K\subset S\) with compact boundary, \(\alpha_{n}^{*}\cap K\) converges to \(\lambda\cap K\) with respect to the Hausdorff distance. We will also denote this by \(\alpha_{n}\to\lambda\).
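As a toy numerical illustration of this notion of convergence (in a flat square window rather than a hyperbolic subsurface, and with hypothetical helper names), the sketch below samples the traces of a sequence of arcs in \(K=[0,1]^{2}\) and checks that their Hausdorff distances to the trace of the limit decrease to \(0\).

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point samples."""
    d = math.dist
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

# toy model: in the window K = [0,1]^2 the n-th curve meets K in the
# horizontal segment y = 1/n, and the limiting lamination meets K in y = 0
def trace(height, samples=200):
    return [(k / samples, height) for k in range(samples + 1)]

limit = trace(0.0)
dists = [hausdorff(trace(1.0 / n), limit) for n in range(1, 50)]
assert all(dists[i + 1] <= dists[i] for i in range(len(dists) - 1))
print("Hausdorff distances decrease from", dists[0], "to", dists[-1])
```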
Since we restrict to finite-type subsurfaces to define convergence, the standard proof (see e.g. [13]) applies to show:
**Lemma 3.4**.: _If a closed subset \(\lambda\subset S\) is the limit of a sequence of curves \(\{\alpha_{n}\mid n\in\mathbb{N}\}\), then \(\lambda\) is a geodesic lamination._
The _limit set_ of a sequence of curves \(\{\alpha_{n}\mid n\in\mathbb{N}\}\) is the set \(\mathcal{L}(\{\alpha_{n}\mid n\in\mathbb{N}\})\) given by all complete geodesics contained in the limit of some subsequence of \(\{\alpha_{n}\mid n\in\mathbb{N}\}\).
Changing the metric on \(S\) does not change the limit set in the following sense.
Figure 3. Isotoping rectangles
**Lemma 3.5**.: _Let \(m\) and \(m^{\prime}\) be two hyperbolic metrics on \(S\). For a sequence \(\{\alpha_{n}\}\) of curves, let \(\mathcal{L}(\{\alpha_{n}\mid n\in\mathbb{N}\},m)\) and \(\mathcal{L}(\{\alpha_{n}\mid n\in\mathbb{N}\},m^{\prime})\) be their respective limit sets in the two metrics. Then a subsequence \(\{\alpha_{n_{j}}\}_{j\in\mathbb{N}}\) converges to \(\lambda\subset\mathcal{L}(\{\alpha_{n}\mid n\in\mathbb{N}\},m)\) if and only if \(\{\alpha_{n_{j}}\}_{j\in\mathbb{N}}\) converges to \(\lambda^{\prime}\subset\mathcal{L}(\{\alpha_{n}\mid n\in\mathbb{N}\},m^{\prime})\), where \(\lambda^{\prime}\) is the \(m^{\prime}\)-straightening of \(\lambda\)._
Proof.: As convergence of geodesics corresponds to convergence of pairs of endpoints at the boundary at infinity, this follows from Theorem 3.1.
This justifies our notation \(\mathcal{L}(\{\alpha_{n}\mid n\in\mathbb{N}\})\), which makes no reference to the metric on \(S\).
### Limit sets under a homeomorphism
For a homeomorphism \(f\) of \(S\) and a curve \(\alpha\), we say \(\alpha\) is \(f\)_-periodic_ if there exists \(n\geq 1\) such that \(f^{n}(\alpha)\) is isotopic to \(\alpha\). The smallest such \(n\) is called the \(f\)_-period_ of \(\alpha\). We say \(\alpha\) is \(f\)-_forward wandering_ if \(\{f^{n}(\alpha)^{*}\}_{n\geq 0}\) leaves every compact set of \(S\), and \(f\)_-backward wandering_ if it is \(f^{-1}\)-forward wandering. The curve \(\alpha\) is \(f\)-_wandering_ if it is both \(f\)-forward and \(f\)-backward wandering. These properties are independent of the hyperbolic metric on \(S\) as well as perturbing \(f\) by an isotopy. Thus, we will also adopt the same definition for a mapping class of \(S\). We will usually suppress the reference to \(f\) when the context is clear.
Note that we can extend the definition of \(f\)-wandering to lines, non-peripheral arcs or rays with endpoints on the boundary of \(S\), as well as to subsurfaces of \(S\), by which we mean the geodesic representatives (of the interior, in case of subsurfaces) of their images under iterations by \(f\) leave every compact set in \(S\).
For \(\alpha\) a curve, line, or non-peripheral arc or ray with endpoints on the boundary of \(S\), let \(\mathcal{L}^{+}(\alpha):=\mathcal{L}(\{f^{n}(\alpha)\mid n\geq 0\})\) be the _forward limit_ of \(\alpha\) under \(f\), and \(\mathcal{L}^{-}(\alpha):=\mathcal{L}(\{f^{n}(\alpha)\mid n\leq 0\})\) the _backward limit_. Set \(\mathcal{L}(\alpha):=\mathcal{L}^{+}(\alpha)\cup\mathcal{L}^{-}(\alpha)\). When \(\alpha\) is \(f\)-periodic, then
\[\mathcal{L}^{\pm}(\alpha)=\{\alpha^{*},f(\alpha)^{*},\ldots,f^{n-1}(\alpha)^{* }\},\]
where \(n\geq 1\) is the \(f\)-period of \(\alpha\). Moreover, \(\alpha\) is forward wandering if and only if \(\mathcal{L}^{+}(\alpha)=\emptyset\), and it is backward wandering if and only if \(\mathcal{L}^{-}(\alpha)=\emptyset\).
We establish some basic facts.
**Lemma 3.6**.: _Let \(f\) be a homeomorphism and \(\alpha\) a curve. Suppose \(\lambda\) is the limit of a sequence of iterates \(\{f^{n_{j}}(\alpha)\mid j\in\mathbb{N}\}\). Then for any curve \(\beta\), \(i(\beta,\lambda)\neq 0\) if and only if \(i(\beta,f^{n_{j}}(\alpha))\neq 0\) for all sufficiently large \(j\). Moreover, if \(i(\beta,\lambda)\geq k\), then \(i(\beta,f^{n_{j}}(\alpha))\geq k\) for all sufficiently large \(j\)._
Proof.: If we look at an annular neighborhood \(K\) of \(\beta\), \(f^{n_{j}}(\alpha)^{*}\cap K\) converges to \(\lambda\cap K\) in the Hausdorff metric, so the first statement follows. If \(\beta\) intersects \(\lambda\) at least \(k\) times, then we can find \(k\) arcs in \(K\cap\lambda\). Choose a sufficiently small neighborhood about each arc so that these neighborhoods are pairwise disjoint. Then for all sufficiently large \(j\), \(f^{n_{j}}(\alpha)\cap K\) will pass through each of these neighborhoods, so \(i(\beta,f^{n_{j}}(\alpha))\geq k\).
**Lemma 3.7**.: _Let \(f\) be a homeomorphism. Then for any two (not necessarily distinct) curves \(\alpha\) and \(\beta\) we have:_
\[i(\alpha,\mathcal{L}^{+}(\beta))\neq 0\;\Leftrightarrow\;i(\mathcal{L}^{-}( \alpha),\beta)\neq 0.\]
Proof.: Suppose \(i\left(\alpha,\mathcal{L}^{+}(\beta)\right)\neq 0\). Let \(n_{j}\to\infty\) be a sequence so that \(f^{n_{j}}(\beta)\to\lambda\subset\mathcal{L}^{+}(\beta)\) with \(i(\alpha,\lambda)\neq 0\). By Lemma 3.6, for every \(j\) large enough,
\[i(\alpha,f^{n_{j}}(\beta))\neq 0\]
and thus
\[i(f^{-n_{j}}(\alpha),\beta)\neq 0.\]
In particular, \(\alpha\) is not backward wandering. By taking a further subsequence so that \(f^{-n_{j_{k}}}(\alpha)\to\lambda^{\prime}\subset\mathcal{L}^{-}(\alpha)\), we have, again by Lemma 3.6, \(i(\lambda^{\prime},\beta)\neq 0\), i.e. \(i(\mathcal{L}^{-}(\alpha),\beta)\neq 0\). By replacing \(f\) by \(f^{-1}\) we get the other implication.
**Lemma 3.8**.: _Let \(f\) be a homeomorphism. Then a curve \(\alpha\) is forward wandering if and only if \(i(\alpha,\mathcal{L}^{-}(\beta))=0\) for all curves \(\beta\). Similarly, \(\alpha\) is backward wandering if and only if \(i(\alpha,\mathcal{L}^{+}(\beta))=0\) for all \(\beta\)._
Proof.: If \(\alpha\) is not forward wandering, then we can choose a curve \(\beta\) with \(i(\mathcal{L}^{+}(\alpha),\beta)\neq 0\). By Lemma 3.7, this implies \(i(\alpha,\mathcal{L}^{-}(\beta))\neq 0\). Conversely, if \(i(\alpha,\mathcal{L}^{-}(\beta))\neq 0\), then again by Lemma 3.7, \(i(\mathcal{L}^{+}(\alpha),\beta)\neq 0\). In particular \(\mathcal{L}^{+}(\alpha)\neq\emptyset\), so \(\alpha\) is not forward wandering.
**Corollary 3.9**.: _Let \(f\) be a homeomorphism. If \(\alpha\) is a periodic curve and \(\beta\) is wandering, \(i(\alpha,\beta)=0\)._
Proof.: If \(i(\alpha,\beta)\neq 0\), \(i(\mathcal{L}^{+}(\alpha),\beta)\neq 0\) (as \(\alpha\in\mathcal{L}^{+}(\alpha)\)), so \(\beta\) is not wandering by Lemma 3.8.
## 4. Tame maps
As usual, assume \(S\) is a surface without boundary endowed with a hyperbolic metric. We say a homeomorphism \(f\) of \(S\) is _tame_ if for every finite-type subsurface \(K\subset S\) with compact boundary and every curve \(\alpha\) there are \(N=N(K,\alpha)\in\mathbb{N}\) and finitely many isotopy (relative to \(\partial K\)) classes of arcs from \(\partial K\) to \(\partial K\) and curves in \(K\) such that, for every \(n\in\mathbb{N}\), the intersection \(f^{n}(\alpha)^{*}\cap K\) has at most \(N\) components and each is in one of the given isotopy classes. Tameness can be characterized by looking at intersections of curves, as the following lemma shows.
**Lemma 4.1**.: _Let \(f\) be a homeomorphism of \(S\). The following statements are equivalent._
1. \(f\) _is tame._
2. \(f^{-1}\) _is tame._
3. _For curves_ \(\alpha\) _and_ \(\beta\)_,_ \(i(f^{n}(\alpha),\beta)\) _is uniformly bounded for all_ \(n\geq 0\)_._
4. _For curves_ \(\alpha\) _and_ \(\beta\)_,_ \(i(f^{n}(\alpha),\beta)\) _is uniformly bounded for all_ \(n\leq 0\)_._
5. _For curves_ \(\alpha\) _and_ \(\beta\)_,_ \(i(f^{n}(\alpha),\beta)\) _is uniformly bounded for all_ \(n\)_._
_In particular, the notion of tame is independent of the hyperbolic metric on \(S\)._
Proof.: The equivalence of (3), (4), and (5) is immediate as the action of \(f\) preserves geometric intersection number. That (1) implies (3) follows from taking the compact surface to be an annular neighborhood of the curve \(\beta\). On the other hand, if \(f\) is not tame, then there exists a finite-type subsurface \(K\) with compact boundary, a curve \(\alpha\), and a subsequence \(\{n_{j}\}\subset\mathbb{N}\) such that \(f^{n_{j}}(\alpha)^{*}\cap K\) has \(N_{j}\) components with \(N_{j}\to\infty\) as \(j\to\infty\), or whose components represent infinitely many distinct isotopy classes. Then there exists a curve \(\beta\subset K\) such that \(i(f^{n_{j}}(\alpha),\beta)\to\infty\). This shows the equivalence of (1) and (3), as well as (2) and (4) by symmetry, whence the equivalence of all statements. Property (5) is independent of the hyperbolic metric on \(S\), so the same is true of being tame.
We say a mapping class \(f\in\operatorname{Map}(S)\) is _tame_ if it has a tame representative. From the characterization of Lemma 4.1, if \(f\) has a tame representative, then all of its representatives are tame. We call a tame homeomorphism \(f\)_extra tame_ if the limit set \(\mathcal{L}(\alpha)\) is finite for every curve \(\alpha\). This property is also inherited by the mapping class of \(f\).
As an example, for a map of a finite-type surface it is equivalent to be tame, extra tame, periodic and isotopic to an isometry for some hyperbolic structure. In the infinite-type case,
the situation is more complicated. Being periodic implies being isotopic to an isometry (with respect to some hyperbolic structure), which implies being extra tame, and extra tame maps are tame by definition. No implication is an equivalence, though. Consider for instance a translation of a surface \(S\), which -- as mentioned in the introduction -- is a map \(f\) that generates an infinite cyclic group acting properly on \(S\). If \(S\) has infinite type, then the quotient surface \(X=S/\langle f\rangle\) has negative Euler characteristic or has infinite type, so we can lift a hyperbolic metric from \(X\) to \(S\) so that \(f\) acts by an (infinite-order) isometry. In this case, every curve on \(S\) is wandering, so \(f\) is (extra) tame.
Another example is what we call an _irrational rotation_. Here, we start with a homeomorphism of the circle with minimal invariant subset a Cantor set \(C\) (see the Denjoy construction [16, Section 12.2]). Think of the circle as the equator of the two-sphere \(S^{2}\), and extend the map to a homeomorphism of \(S=S^{2}\smallsetminus C\). This map is tame but _not_ extra tame: every curve has an infinite limit set, given by lines which can intersect. More complicated examples of (extra) tame maps can be found in Sections 6.1 and 8.
On the other hand, a Dehn twist is not tame: if \(\tau\) is the twist about a curve \(\alpha\), let \(\beta\) be a curve intersecting \(\alpha\) essentially. If \(K\) is a finite-type subsurface with compact boundary containing \(\alpha\) and \(\beta\), the curves \(\tau^{n}(\beta)\cap K=\tau^{n}(\beta)\) are all distinct. Another example of a map which is not tame is a homeomorphism which restricts to a pseudo-Anosov on some finite-type subsurface (with compact boundary).
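The twist example can also be quantified through criterion (3) of Lemma 4.1. A standard intersection-number inequality for Dehn twists (recorded here only as an illustration) gives, for \(n\neq 0\),

\[i(\tau^{n}(\beta),\beta)\;\geq\;|n|\,i(\alpha,\beta)^{2},\]

so the numbers \(i(\tau^{n}(\beta),\beta)\) are unbounded as soon as \(i(\alpha,\beta)\geq 1\), and \(\tau\) is not tame.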
The goal of the remainder of this section is to describe properties of limit sets of curves under tame homeomorphisms. The first result is that if a curve is not periodic, its limit set can only contain lines.
**Proposition 4.2**.: _Let \(f\) be a tame homeomorphism and \(\alpha\) a non-periodic curve. If \(\alpha\) is not forward wandering, then \(\mathcal{L}^{+}(\alpha)\) is a non-empty collection of lines. Similarly, if \(\alpha\) is not backward wandering, then \(\mathcal{L}^{-}(\alpha)\) is a non-empty collection of lines._
Proof.: Assume that \(\alpha\) is not forward wandering. For simplicity, let \(\alpha_{n}=f^{n}(\alpha)^{*}\). If \(\mathcal{L}^{+}(\alpha)\) is empty, then for every compact subsurface \(K\subset S\), \(\bigcup_{n\geq 0}\alpha_{n}\cap K\) has no accumulation points. That is, \(\bigcup_{n\geq 0}\alpha_{n}\cap K\) is given by finitely many arcs or curves. If two geodesics coincide along a non-trivial arc, then they are equal, so if infinitely many of the \(\alpha_{n}\) met \(K\), two distinct iterates would share an arc or a closed curve and hence coincide, making \(\alpha\) periodic. Since \(\alpha\) is not \(f\)-periodic, we must have \(\alpha_{n}\cap K=\emptyset\) for all sufficiently large \(n\). But this contradicts the assumption that \(\alpha\) is not forward wandering. So \(\mathcal{L}^{+}(\alpha)\) is a non-empty union of complete simple geodesics by Lemma 3.4. If \(\mathcal{L}^{+}(\alpha)\) has a non-proper leaf \(\ell\), then there exists a curve \(\beta\) such that \(i(\beta,\ell)=\infty\), by Lemma 2.1. Let \(\lambda\subset\mathcal{L}^{+}(\alpha)\) be the limit of some subsequence \(\{\alpha_{n_{j}}\}\) containing \(\ell\). By Lemma 3.6, \(i(f^{n_{j}}(\alpha),\beta)=i(\alpha_{n_{j}},\beta)\to\infty\), contradicting the characterization of tameness given by Lemma 4.1. If \(\mathcal{L}^{+}(\alpha)\) has a compact component \(\beta\), then since \(\mathcal{L}^{+}(\alpha)\) has no non-proper leaves, \(\beta\) is the limit of some subsequence \(\{\alpha_{n_{i}}\}\), which has to be eventually constant (again by tameness). But this contradicts the non-periodicity of \(\alpha\). Thus \(\mathcal{L}^{+}(\alpha)\) is a non-empty union of lines. By replacing \(f\) by \(f^{-1}\) we get the second statement.
As \(\mathcal{L}^{\pm}(\alpha)\) is well defined for mapping classes, we have the following immediate corollary.
**Corollary 4.3**.: _For a tame mapping class \(f\) and a curve \(\alpha\), the following statements hold._
1. \(\mathcal{L}^{+}(\alpha)=\emptyset\) _if and only if_ \(\alpha\) _is forward wandering._
2. \(\mathcal{L}^{-}(\alpha)=\emptyset\) _if and only if_ \(\alpha\) _is backward wandering._
3. \(\mathcal{L}(\alpha)=\emptyset\) _if and only if_ \(\alpha\) _is wandering._
4. \(\mathcal{L}(\alpha)\) _contains a curve if and only if_ \(\alpha\) _is periodic, in which case_ \(\mathcal{L}(\alpha)=\mathcal{L}^{+}(\alpha)=\mathcal{L}^{-}(\alpha)\) _is the orbit of_ \(\alpha\) _under_ \(f\)_._
Another consequence of tameness is that limits of iterates of curves can only intersect finitely many times.
**Lemma 4.4**.: _Let \(f\) be a tame homeomorphism. Then for any two (not necessarily distinct) curves \(\alpha\) and \(\beta\), if \(f^{n_{j}}(\alpha)\to\lambda\subset\mathcal{L}(\alpha)\) and \(f^{m_{k}}(\beta)\to\lambda^{\prime}\subset\mathcal{L}(\beta)\), for some sequences \(n_{j}\) and \(m_{k}\), then \(i(\lambda,\lambda^{\prime})<\infty\)._
Proof.: If \(i(\lambda,\lambda^{\prime})=\infty\), then for every \(N\) there are \(j,k\) such that \(i(f^{n_{j}}(\alpha),f^{m_{k}}(\beta))\geq N\). Thus, \(i(\alpha,f^{m_{k}-n_{j}}(\beta))\geq N\), so \(f\) is not tame by Lemma 4.1.
## 5. Subsurfaces and spanning sets of curves and lines
Let \(\mathcal{C}\) be a collection of curves or lines on \(S\). Consider the collection of subsurfaces \(\Sigma\subset S\), defined up to proper isotopy, such that \(\Sigma\) contains the proper isotopy class of every element of \(\mathcal{C}\), possibly as a boundary component. We allow \(\Sigma\) to coincide with \(S\). The collection of all such subsurfaces is nonempty and partially ordered by inclusion. A minimal subsurface within this collection, if it exists, is said to be _spanned_ by \(\mathcal{C}\).
When \(\mathcal{C}\) is a finite collection, then there is always a subsurface \(F\) (of finite type) spanned by \(\mathcal{C}\). Namely, put the elements in \(\mathcal{C}\) in general position. Then \(F\) is obtained by taking a regular neighborhood of the union of the curves and lines and filling in all disks with at most one puncture. More generally, a subsurface spanned by \(\mathcal{C}\) exists if \(\mathcal{C}\) is _locally finite_, by which we mean every compact set of \(S\) essentially intersects only finitely many elements in \(\mathcal{C}\). In general, however, there are collections of curves and lines which do not span a subsurface - see for instance Figure 4. The main goal of this section is to give a condition on \(\mathcal{C}\) which ensures that it spans a subsurface.
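To fix ideas in the finite case, suppose \(\mathcal{C}=\{\alpha,\beta\}\) consists of two curves in minimal position with \(i(\alpha,\beta)=1\). Their union is a four-valent graph with one vertex and two edges, so a regular neighborhood \(N\) satisfies

\[\chi(N)\;=\;V-E\;=\;1-2\;=\;-1,\]

and \(N\) is a one-holed torus; the subsurface spanned by \(\mathcal{C}\) is then obtained from \(N\) by filling in any complementary disks with at most one puncture. This count is standard and is included only as an illustration of the construction above.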
### Concatenations of curves
Fix a hyperbolic metric on \(S\). We say a curve \(\alpha\) is a _concatenation_ of curves in \(\mathcal{C}\) if it is homotopic to a finite concatenation of piecewise geodesic arcs coming from the geodesic representatives of curves in \(\mathcal{C}\). We say \(\mathcal{C}\) is _closed under concatenation_ if every curve obtained as a concatenation of curves in \(\mathcal{C}\) belongs to \(\mathcal{C}\).
Note that if \(\mathcal{C}\) spans a subsurface \(\Sigma\), then necessarily \(\mathcal{C}\) _fills_ \(\Sigma\), i.e. every essential and non-peripheral curve or line in \(\Sigma\) intersects some element of \(\mathcal{C}\). We further have:
**Lemma 5.1**.: _Let \(\mathcal{C}\) be a collection of curves spanning a subsurface \(\Sigma\). Then any essential curve in \(\Sigma\) is a concatenation of curves in \(\mathcal{C}\)._
Proof.: Fix a hyperbolic structure on \(S\). Let \(\gamma\) be a curve in \(\Sigma\) and suppose first it is non-peripheral. Enumerate the curves in \(\mathcal{C}\) as \(\mathcal{C}=\{\gamma_{1},\gamma_{2},\dots\}\) and assume that they are geodesic. As \(\gamma\) is compact, we can find a finite-type subsurface \(K\) of \(\Sigma\) with compact totally geodesic boundary such that \(\gamma\subset K\) and is non-peripheral in \(K\). Inductively define the following curves:
* \(\beta_{1}=\gamma_{i}\), where \(i\) is the smallest index so that \(\gamma_{i}\cap K\neq\emptyset\);
Figure 4. The beginning of a collection of curves not spanning a subsurface
* given \(\beta_{1},\ldots,\beta_{j}\), let \(\beta_{j+1}\) be \(\gamma_{i}\), where \(i\) is the smallest index so that \(\gamma_{i}\cap K\neq\emptyset\) and the components of \(\gamma_{i}\cap K\) are not all among the homotopy classes of the components of \(\bigcup_{k\leq j}\beta_{k}\cap K\) -- unless such a \(\gamma_{i}\) doesn't exist, in which case we stop the process.
Since \(K\) is of finite type, the process stops and we end up with a finite collection \(\mathcal{C}_{K}=\{\beta_{1},\ldots,\beta_{m}\}\). Let \(F\) be the subsurface spanned by \(\mathcal{C}_{K}\). Note that \(K\subset F\) (up to homotopy): otherwise there is some curve \(\alpha\subset K\smallsetminus F\), and since \(\mathcal{C}\) fills \(\Sigma\), there is some \(\gamma_{i}\in\mathcal{C}\) intersecting \(\alpha\). Therefore the components of \(\gamma_{i}\cap K\) are not all among the homotopy classes of the components of \(\bigcup_{k\leq m}\beta_{k}\cap K\), a contradiction.
Since \(F\) can be constructed by taking a regular neighborhood of \(\bigcup_{k\leq m}\beta_{k}\) and filling in disks with at most one puncture, and \(\gamma\subset F\), it is easy to show that \(\gamma\) is a concatenation of curves in \(\mathcal{C}_{K}\), and thus in \(\mathcal{C}\).
If now \(\gamma\) is peripheral, one can show that it is the concatenation of non-peripheral curves in \(\Sigma\). By using the result for these curves, we deduce that also \(\gamma\) is a concatenation of curves in \(\mathcal{C}\).
**Lemma 5.2**.: _Let \(f\) be a mapping class on \(S\). Every closed curve obtained as a finite concatenation of \(f\)-periodic curves is periodic, and every closed curve obtained as a finite concatenation of wandering curves is wandering._
Proof.: We first consider periodic curves. A curve is a finite concatenation of curves \(\gamma_{1},\ldots,\gamma_{n}\) if and only if it is a curve in the subsurface \(F\) spanned by \(\{\gamma_{1},\ldots,\gamma_{n}\}\) (by Lemma 5.1). Note that \(F\) is connected and has negative Euler characteristic. Since each \(\gamma_{i}\) is \(f\)-periodic, we can take a suitable power so that \(f^{k}\) fixes \(\gamma_{i}\) for all \(i\). Since \(F\) is spanned by \(\{\gamma_{i}\}\), \(f^{k}(F)\) is spanned by \(\{f^{k}(\gamma_{i})\}=\{\gamma_{i}\}\), so \(f^{k}(F)\) is isotopic to \(F\). Since \(F\) has negative Euler characteristic, \(f^{k}\) restricted to \(F\) is isotopic to a periodic map of \(F\), so every curve in \(F\) is \(f\)-periodic.
The proof for wandering curves is similar. Let \(F\) be the subsurface spanned by a finite collection \(\{\gamma_{1},\ldots,\gamma_{n}\}\) of wandering curves. It is enough to prove that for every compact subsurface \(K\subset S\), there exists \(k_{0}\in\mathbb{N}\) such that \(f^{\pm k}(F)\) can be homotoped to be disjoint from \(K\) for all \(k\geq k_{0}\). Since \(\{\gamma_{1},\ldots,\gamma_{n}\}\) are wandering, we can find such \(k_{0}\) so that \(f^{\pm k}(\gamma_{i})^{*}\cap K=\emptyset\) for all \(k\geq k_{0}\) and all \(i\). Then \(f^{\pm k}(F)\) is spanned by \(\{f^{\pm k}(\gamma_{i})\}\), so \(f^{\pm k}(F)\) can be homotoped away from \(K\).
### Good collection of curves
In the following, fix a hyperbolic metric on \(S\) and let \(\mathcal{C}\) be a collection of curves. We would like to construct a candidate for the subsurface spanned by \(\mathcal{C}\), by enumerating the curves in \(\mathcal{C}\) and taking the geodesic subsurface spanned by the first \(n\) curves and then taking their union. However, this union might not be the right candidate, if there are annular components or boundaries that are homotopic -- think for instance of the subsurface spanned by all curves disjoint from a given one. To deal with these issues, we collect those curves in \(\mathcal{C}\) that contribute to the annular components, namely let
\[\mathcal{C}_{\mathrm{is}}:=\{\gamma\in\mathcal{C}\mid\forall\gamma^{\prime} \in\mathcal{C},\;i(\gamma,\gamma^{\prime})=0\}.\]
Enumerate the remaining curves: \(\mathcal{C}_{\mathrm{int}}:=\mathcal{C}\smallsetminus\mathcal{C}_{\mathrm{is}} =\{\gamma_{0},\gamma_{1},\ldots\}\). Up to reordering, we can assume that \(i(\gamma_{0},\gamma_{1})\neq 0\). Then for every \(i\geq 1\), let
\[\mathcal{C}_{\mathrm{int}}^{i}:=\{\gamma_{j}\mid j\leq i\text{ and }\exists\;j^{ \prime}\leq i\text{ such that }i(\gamma_{j},\gamma_{j^{\prime}})\neq 0\}.\]
Note that \(\mathcal{C}_{\mathrm{int}}^{1}=\{\gamma_{0},\gamma_{1}\}\), \(\mathcal{C}_{\mathrm{int}}^{i}\subset\mathcal{C}_{\mathrm{int}}^{i+1}\), and \(\mathcal{C}_{\mathrm{int}}=\bigcup_{i\geq 1}\mathcal{C}_{\mathrm{int}}^{i}\). Moreover, to go from \(i\) to \(i+1\), either we add a curve that intersects some curve in \(\mathcal{C}_{\mathrm{int}}^{i}\), or we add two curves that intersect each other. In particular, each component of the subsurface spanned by \(\mathcal{C}_{\mathrm{int}}^{i}\) has negative Euler characteristic.
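The splitting of \(\mathcal{C}\) into \(\mathcal{C}_{\mathrm{is}}\) and \(\mathcal{C}_{\mathrm{int}}\), and the exhaustion by the sets \(\mathcal{C}_{\mathrm{int}}^{i}\), is purely combinatorial and can be mirrored by a short routine. The sketch below is hypothetical: it works with a finite list of curve labels, assumes an oracle `inter(g, h)` returning the geometric intersection number, and assumes the enumeration of \(\mathcal{C}_{\mathrm{int}}\) already has \(i(\gamma_{0},\gamma_{1})\neq 0\).

```python
def split_collection(curves, inter):
    """Split a finite list of curves into C_is (those disjoint from every
    other curve of the list) and C_int, as in Section 5.2.
    `inter(g, h)` is an oracle for the geometric intersection number."""
    C_is = [g for g in curves
            if all(inter(g, h) == 0 for h in curves if h != g)]
    C_int = [g for g in curves if g not in C_is]
    return C_is, C_int


def exhaustion(C_int, inter):
    """Return the nested sets C_int^1, C_int^2, ...: the i-th set keeps those
    of the curves gamma_0, ..., gamma_i that intersect one of them."""
    steps = []
    for i in range(1, len(C_int)):
        prefix = C_int[: i + 1]
        steps.append([g for g in prefix
                      if any(inter(g, h) != 0 for h in prefix if h != g)])
    return steps


# tiny usage example with a made-up intersection table on four labels
table = {frozenset({"g0", "g1"}): 2, frozenset({"g1", "g2"}): 1}
inter = lambda g, h: table.get(frozenset({g, h}), 0)
print(split_collection(["g0", "g1", "g2", "g3"], inter))
print(exhaustion(["g0", "g1", "g2"], inter))
```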
We now let \(F^{i}=F^{i}(\mathcal{C}_{\mathrm{int}})\) be the geodesic representative of the _interior_ of the subsurface spanned by \(\mathcal{C}_{\mathrm{int}}^{i}\) and define
\[F=F(\mathcal{C}_{\mathrm{int}}):=\bigcup_{i\geq 1}F^{i}.\]
By construction every connected component of \(F^{i}\) is an open subset (homotopic to a subsurface), and thus \(F\) is an open subset of \(S\). We call \(F\) the _set spanned_ by \(\mathcal{C}_{\mathrm{int}}\).
_Definition 5.3_.: We say that a collection \(\mathcal{C}\) of curves is _good_ if:
* (a) the curves in \(\mathcal{C}_{\mathrm{is}}\) do not accumulate anywhere, and
* (b) for every \(p\in\partial F\) there is a closed disk \(B\) centered at \(p\) such that \(B\cap F\) is either \(B\) minus a diameter or a connected component of \(B\) minus a diameter.
For instance, the collection of curves depicted in Figure 4 is not good, because isolated curves accumulate somewhere. Similarly, the collection in Figure 5 is not good either, because condition (b) is not satisfied at the point \(p\) in the drawing (onto which the curves accumulate). In both examples, the collection of curves does not span a subsurface. Indeed, we will show that the condition of being good is equivalent to \(\mathcal{C}\) admitting a spanning subsurface.
Let \(K\subset S\) be a finite-type subsurface with compact totally geodesic boundary. Each \(F^{i}\) intersects \(K\) in a disjoint union of open subsurfaces and disks with at most one puncture. By a _rectangle_ we mean a connected component \(R\), homeomorphic to a closed disk, of \(K\cap F^{i}\), with boundary split into _horizontal sides_ lying in \(\partial F^{i}\) and _vertical sides_ lying in \(\partial K\). All components of \(K\cap F^{i}\) which are not rectangles are called _essential_. Let \(K_{e}^{i}\) be the union of all the essential components of \(K\cap F^{i}\).
**Lemma 5.4**.: _For all \(i\geq 2\), \(K_{e}^{i}\subset K_{e}^{i+1}\), and for large \(i\) it stabilizes. More precisely, there exists an open subsurface \(K_{e}\subset K\) such that for all sufficiently large \(i\), \(K_{e}^{i}\) is isotopic to \(K_{e}\)._
Proof.: This follows from Euler characteristic considerations.
Figure 5. A collection of curves which is not good
Figure 6. A compact subsurface \(K\), whose boundary is in green, and in orange the boundary of a subsurface \(F^{i}\). The shaded part is the intersection, formed by an essential component and a rectangle
So for any finite-type subsurface \(K\) with compact totally geodesic boundary, there exists \(I\) such that for all \(i\geq I\), the topology of the essential components \(K_{e}^{i}\) stabilizes, and further increasing \(i\) only adds new rectangles or adjoins previous ones. We define:
\[\mathcal{R}(K)=\{R:R\subset K\cap F^{i}\text{ is a rectangle for some }i\geq I\}.\]
Two parallel rectangles are _equivalent_ if they are contained in the same connected component of \(F^{i}\cap K\) for some \(i\).
The following result is not hard to prove, and we leave it as an exercise.
**Lemma 5.5**.: _The relation defined above is an equivalence relation on \(\mathcal{R}(K)\). Moreover, suppose \(R_{1}\), \(R_{2}\), \(R_{3}\) are three parallel rectangles with \(R_{2}\) contained in the rectangle between \(R_{1}\) and \(R_{3}\). If \(R_{1}\) and \(R_{3}\) are equivalent, then all three are equivalent._
So we can naturally organize parallel rectangles into equivalence classes of rectangles. As \(i\) further increases the number of equivalence classes may increase. We have the following proposition.
**Proposition 5.6**.: _Suppose \(\mathcal{C}\) is a collection of curves closed under concatenation. The following statements are equivalent:_
1. _there is a subsurface spanned by_ \(\mathcal{C}\)_;_
2. _the curves in_ \(\mathcal{C}_{\mathrm{is}}\) _do not accumulate anywhere and for every compact subsurface_ \(K\) _with totally geodesic boundary, the number of equivalence classes in_ \(\mathcal{R}(K)\) _is finite;_
3. \(\mathcal{C}\) _is good._
_Moreover, if there is a subsurface spanned by \(\mathcal{C}\), it is properly homotopic to \(F\)._
Proof.: [(1) \(\Rightarrow\) (2)] Let \(\Sigma\) be a surface spanned by \(\mathcal{C}\). By contradiction, suppose the curves in \(\mathcal{C}_{\mathrm{is}}\) accumulate somewhere. Then we can find a sequence of curves \(\gamma_{j}\in\mathcal{C}_{\mathrm{is}}\) and a curve \(\alpha\) such that
\[i(\gamma_{j},\alpha)\neq 0.\]
\(\Sigma\) is a subsurface, so \(\Sigma\cap\alpha\) consists of finitely many arcs, with at least one, denoted \(a\), intersecting at least three curves, say \(\gamma_{j_{1}},\gamma_{j_{2}}\) and \(\gamma_{j_{3}}\). Assume that \(\gamma_{j_{2}}\cap a\) is between \(\gamma_{j_{1}}\cap a\) and \(\gamma_{j_{3}}\cap a\). Then the pair of pants containing \(\gamma_{j_{1}}\cup a\cup\gamma_{j_{3}}\) is in \(\Sigma\) and contains a curve \(\beta\) intersecting \(\gamma_{j_{2}}\) essentially. As \(\beta\) is in \(\Sigma\), by Lemma 5.1, \(\beta\) is a concatenation of curves in \(\mathcal{C}\), and since \(\mathcal{C}\) is closed under concatenation, \(\beta\in\mathcal{C}\), which contradicts the definition of \(\mathcal{C}_{\mathrm{is}}\). So the curves in \(\mathcal{C}_{\mathrm{is}}\) cannot accumulate anywhere.
If instead there is a finite-type subsurface \(K\) with compact totally geodesic boundary such that \(\mathcal{R}(K)\) has infinitely many equivalence classes, we find infinitely many parallel rectangles \(R_{i}\) which are not equivalent. Let \(\gamma_{i}\) be a curve in \(F\) passing through \(R_{i}\). As the \(\gamma_{i}\) are, by construction, concatenations of pieces of curves in \(\mathcal{C}\), they are all in \(\Sigma\). As \(\Sigma\) is a subsurface, \(\Sigma\cap K\) is a finite union of connected components. In particular there is some component \(X\) of \(\Sigma\cap K\) containing \(\gamma_{i}\cap K\) for infinitely many \(i\). This implies that we can find distinct curves \(\gamma_{i_{1}}\) and \(\gamma_{i_{2}}\) such that the rectangle \(R\) between \(\gamma_{i_{1}}\cap K\) and \(\gamma_{i_{2}}\cap K\) is included in \(\Sigma\). Thus the boundary of this rectangle is a concatenation of curves in \(\mathcal{C}\), which implies that the rectangle between them is contained in \(F^{l}\cap K\) for some \(l\), so \(R_{i_{1}}\) and \(R_{i_{2}}\) are equivalent, a contradiction.
[(2) \(\Rightarrow\) (3)] We just need to show property (b). By construction, \(\partial F\) is a union of limits of simple closed geodesics. We claim that for every finite-type subsurface \(K\) with compact totally geodesic boundary, \(\partial F\cap K\) has finitely many connected components. If not, as \(K\) is of finite type, there are infinitely many components \(l_{j}\), \(j\in\mathbb{N}\), of \(\partial F\cap K\) which are parallel to
a given arc from \(\partial K\) to \(\partial K\). Thus there are infinitely many rectangles between the \(l_{j}\) which are not in \(F\) and therefore infinitely many rectangles that are not equivalent, a contradiction.
So for every \(p\in\ell\subset\partial F\) we can find a sufficiently small ball \(B\) such that \(\ell\cap B\) coincides with \(\partial F\cap B\) and it is a single arc from \(\partial B\) to \(\partial B\). In particular, \(B\smallsetminus\partial F=B\smallsetminus\ell\) has two components, each of which is either entirely contained in \(F\) or in \(S\smallsetminus F\). Moreover, since \(\ell\subset\partial F\), at least one of the components intersects, and hence is contained in, \(F\). So \(B\) is the required disk.
[(3) \(\Rightarrow\) (1)] The condition on the boundary points shows that \(\partial F\) is a union of lines and curves (i.e. it contains no non-proper component). Moreover, since each component is the limit of geodesics, for any \(\ell\subset\partial F\), either
* for every \(p\in\ell\) there is a closed disk \(B\) centered at \(p\) such that \(B\cap F\) is a connected component of a disk minus a diameter, or
* for every \(p\in\ell\) there is a closed disk \(B\) centered at \(p\) such that \(B\cap F\) is a disk minus a diameter.
Let \(T_{1}\) be the collection of lines and curves in \(\partial F\) for which the first condition holds and \(T_{2}\) the other lines and curves in \(\partial F\).
For every \(\ell\in T_{2}\cup\mathcal{C}_{\mathrm{is}}\) we can find an open regular neighborhood \(N(\ell)\) such that:
* \(N(\ell)\) is disjoint from \(\partial F\cup\mathcal{C}_{\mathrm{is}}\smallsetminus\{\ell\}\), and
* if \(\ell_{1}\neq\ell_{2}\), then \(N(\ell_{1})\cap N(\ell_{2})=\emptyset\), and
* the neighborhoods don't accumulate anywhere.
We can find such neighborhoods by properness of components of \(\partial F\) and by the non-accumulation assumption for \(\mathcal{C}_{\mathrm{is}}\) (we can just fix a compact exhaustion of the surface and construct the neighborhoods piece by piece).
Define
\[\Sigma:=\left(F\cup\bigcup_{\ell\in T_{1}}\ell\cup\bigcup_{\gamma\in\mathcal{ C}_{\mathrm{is}}}N(\gamma)\right)\smallsetminus\bigcup_{\ell\in T_{2}}N(\ell).\]
Let \(X\) be the metric completion of
\[F\cup\bigcup_{\gamma\in\mathcal{C}_{\mathrm{is}}}N(\gamma).\]
Then \(X\) is a surface with boundary and we can construct a proper embedding of \(X\) into \(S\) so that its image is \(\Sigma\) (as before, we can look at a compact exhaustion of \(S\) and construct proper embeddings of subsurfaces of \(X\) to subsurfaces of \(\Sigma\) so that their limit is the required embedding). So \(\Sigma\) is a subsurface. Moreover, since \(\Sigma\) is properly homotopic to \(F\), it is spanned by \(\mathcal{C}\).
We record here a consequence of having a non-good collection of curves. We will use it to prove that certain collections of curves span a subsurface.
**Lemma 5.7**.: _Suppose \(\mathcal{C}\) is not a good collection of curves but closed under finite concatenations. Then there are simple closed geodesics \(\alpha,\beta_{i},\gamma_{j}\), for \(i,j\in\mathbb{N}\), an arc \(a=[p,q]\subset\alpha\) and a sequence \(\{i_{j}\}\subset\mathbb{N}\) going to infinity so that:_
1. \(\alpha,\gamma_{j}\notin\mathcal{C}\)_, for every_ \(j\) _and_ \(\beta_{i}\in\mathcal{C}\) _for every_ \(i\)_,_
2. _for every_ \(i\)_,_ \(a\cap\beta_{i}\neq\emptyset\) _and_ \(q\) _is a limit of a sequence of intersections of_ \(a\) _with the_ \(\beta_{i}\)_,_
3. _for every_ \(j\)_, there is a subarc_ \(\tau_{j}\) _of_ \(a\) _from_ \(\beta_{i_{j}}\) _to_ \(\beta_{i_{j+1}}\) _so that the union of the three spans a pair of pants whose third boundary component is_ \(\gamma_{j}\)_, and_ \(\tau_{j}\) _is disjoint from_ \(\beta_{i_{k}}\) _for_ \(k\neq j,j+1\)_._
_Remark 5.8_.: In the situation of the previous lemma, and given a homeomorphism \(f\), fix an index \(j\). Look at \(f^{n}(\alpha),f^{n}(\beta_{i_{j}})\) and \(f^{n}(\beta_{i_{j+1}})\) and consider an ambient isotopy which sends them to their geodesic representatives. The image through this isotopy of \(f^{n}(\tau_{j})\) is an arc \(\widetilde{f^{n}(\tau_{j})}\). Then we denote by \(\tau_{j}^{n}\) the geodesic arc in the homotopy class, relative endpoints, of \(\widetilde{f^{n}(\tau_{j})}\).
Proof.: By Proposition 5.6, either there is a compact subsurface \(K\) with totally geodesic boundary such that \(\mathcal{R}(K)\) contains infinitely many equivalence classes, or the curves in \(\mathcal{C}_{\mathrm{is}}\) accumulate somewhere.
In the first case, we can find a sequence \(R_{i}\in\mathcal{R}(K)\) which are all parallel and pairwise not equivalent. Let \(\alpha\) be a boundary curve of \(K\) containing vertical sides of all \(R_{i}\). Note that since \(\alpha\) is compact, the vertical sides of the rectangles have accumulation points on \(\alpha\). Up to passing to a subsequence, we can assume that there is a compact subarc \(a\subset\alpha\) with endpoints \(p\) and \(q\) such that, if we orient \(a\) from \(p\) to \(q\), we can find points \(v_{i}\in R_{i}\cap a\) which converge monotonically to \(q\).
For every \(i\) we can moreover find a curve \(\beta_{i}\in\mathcal{C}\) passing through \(R_{i}\); let \(b_{i}\) be an arc of \(\beta_{i}\cap K\) homotopic to \(R_{i}\).
Let \(\beta_{i_{j}}\) be a subsequence of pairwise distinct curves so that:
* all strands of \(\beta_{i_{j}}\) passing through the rectangles intersect \(a\) before all the strands of \(\beta_{i_{j+1}}\), and
* between the last strand of \(\beta_{i_{j}}\) and the first strand of \(\beta_{i_{j+1}}\), \(a\) intersects at least two other \(\beta_{k}\).
Let \(\tau_{j}\) be the subarc of \(a\) between the last intersection of \(a\) and \(\beta_{i_{j}}\) and the first intersection of \(a\) and \(\beta_{i_{j+1}}\). Note that \(\beta_{i_{j}}\cup\tau_{j}\cup\beta_{i_{j+1}}\) spans a pair of pants with \(\beta_{i_{j}}\) and \(\beta_{i_{j+1}}\) as two boundary components; let \(\gamma_{j}\) be the third boundary component. Since the \(R_{i}\) are not equivalent, \(\gamma_{j}\notin\mathcal{C}\).
Fix an index \(j\). Look at \(f^{n}(\alpha),f^{n}(\beta_{i_{j}})\) and \(f^{n}(\beta_{i_{j+1}})\) and consider an ambient isotopy which sends them to their geodesic representatives. We denote by \(\tau_{j}^{n}\) the image through this isotopy of \(f^{n}(\tau_{j})\). Then \(\tau_{j}^{n}\) is a geodesic arc between the geodesic representative of \(f^{n}(\beta_{i_{j}})\) and the geodesic representative of \(f^{n}(\beta_{i_{j+1}})\).
If the curves in \(\mathcal{C}_{\mathrm{is}}\) accumulate somewhere, we can find a curve \(\alpha\) intersecting infinitely many \(\beta_{i}\subset\mathcal{C}_{\mathrm{is}}\). As before, up to passing to a subsequence, we can find a compact subarc \(a\) of \(\alpha\) with endpoints \(p\) and \(q\) such that, if we orient \(a\) from \(p\) to \(q\), \(v_{i}:=\beta_{i}\cap a\) converges to \(q\) monotonically. We then define \(\beta_{i_{j}}\), \(\gamma_{j}\) and \(\tau_{j}^{n}\) as before. Since each \(\gamma_{j}\) intersects a curve in \(\mathcal{C}_{\mathrm{is}}\), \(\gamma_{j}\notin\mathcal{C}\).
Figure 7. A choice for \(a\) for the curves in Figure 4, on the left-hand side; on the right-hand side, the general situation of Lemma 5.7
### Diagonal closure of lines
Given two rays \(r_{1}\) and \(r_{2}\), we say that they _cobound a half-strip_ if there is a proper embedding \(\varphi:\mathbb{R}_{\geq 0}\times[0,1]\to S\) such that \(\varphi(\mathbb{R}_{\geq 0}\times\{0\})=r_{1}\) and \(\varphi(\mathbb{R}_{\geq 0}\times\{1\})=r_{2}\). We say that two lines \(\ell_{1}\) and \(\ell_{2}\) are _asymptotic_ in some direction if they contain rays \(r_{1}\subset\ell_{1}\) and \(r_{2}\subset\ell_{2}\) which cobound a half-strip.
If \(L\) is a finite collection of proper lines, we denote by \(\langle\!\langle L\rangle\!\rangle\) the subsurface of \(S\) obtained as follows. First realize the lines in \(L\) as geodesics. Then take the subsurface \(\langle L\rangle\) spanned by \(L\), which is obtained by taking the union of the regular neighborhoods of the lines in \(L\) and filling in every disk with at most one puncture. Then for every two (not necessarily distinct) lines and a direction in which they are asymptotic, we add to \(\langle L\rangle\) the half-strip between the two rays. Then fill in any additional disks with at most one puncture. We call \(\langle\!\langle L\rangle\!\rangle\) the _diagonal closure_ of \(L\), and the following lemma gives a characterization of \(\langle\!\langle L\rangle\!\rangle\).
**Lemma 5.9**.: _Let \(L=\{\ell_{1},\ldots,\ell_{N}\}\) be a finite collection of lines, none of which ends in an isolated planar end of \(S\), such that any two of them intersect finitely many times. Then for any line \(\ell\) that does not end in an isolated planar end, the following statements are equivalent._
1. \(\ell\) _can be homotoped into_ \(\langle\!\langle L\rangle\!\rangle\)_;_
2. _for any curve_ \(\alpha\)_, if_ \(i(\alpha,\ell)\neq 0\)_, then_ \(i(\alpha,\ell_{j})\neq 0\) _for some_ \(j\)_;_
3. _for every compact subsurface_ \(K\)_,_ \(\ell\) _can be homotoped to_ \(\ell^{\prime}\) _such that_ \(\ell^{\prime}\cap K\subset\langle L\rangle\)_._
_Moreover there is a finite union of curves \(\mu\) such that if \(\ell\) is a line in \(\langle\!\langle L\rangle\!\rangle\), then \(i(\mu,\ell)\neq 0\)._
Proof.: We first prove the equivalence of the three conditions.
[(1) \(\Rightarrow\) (3)] For any compact subsurface \(K\), there is a homotopy that homotopes all half-strips of \(\langle\!\langle L\rangle\!\rangle\) away from \(K\). We can then set \(\ell^{\prime}\) to be the image under this homotopy of \(\ell\).
[(3)\(\Rightarrow\)(2)] Let \(\alpha\) be a curve with \(i(\alpha,\ell)\neq 0\). Let \(K\) be an annulus around \(\alpha\) and \(\ell^{\prime}\) a line so that \(\ell^{\prime}\cap K\subset\langle L\rangle\). Since \(\ell^{\prime}\) intersects \(\alpha\), there is an arc of \(\langle L\rangle\) crossing \(\alpha\), so there is some \(j\) such that \(i(\alpha,\ell_{j})\neq 0\).
[(2)\(\Rightarrow\) (1)] We prove that if \(\ell\) cannot be homotoped into \(\langle\!\langle L\rangle\!\rangle\), there is some curve \(\alpha\) intersecting \(\ell\) and disjoint from all \(\ell_{j}\). Assume that all lines are geodesic and \(\ell\) is in minimal position with respect to \(\langle\!\langle L\rangle\!\rangle\).
Note that \(\langle\!\langle L\rangle\!\rangle\) is the union of a compact surface with finitely many half-strips. In particular the boundary of \(\langle\!\langle L\rangle\!\rangle\) is a finite union of circles and lines. So the same holds for \(\Sigma:=\overline{S\smallsetminus\langle\!\langle L\rangle\!\rangle}\). Note moreover that \(\Sigma\) doesn't contain rays in its boundary cobounding a half-strip in \(\Sigma\), because such a half-strip would be contained in \(\langle\!\langle L\rangle\!\rangle\) by construction.
Let \(a\) be a component of \(\ell\cap\Sigma\) and \(F\) the component of \(\Sigma\) containing \(a\). If \(a\) is nonseparating in \(F\), we can find a curve contained in \(F\) intersecting \(a\) once and we are done. If \(a\) is separating and neither component of \(F\smallsetminus a\) is contractible, we can find two non-nullhomotopic curves \(\alpha_{1}\) and \(\alpha_{2}\), one in each component of \(F\smallsetminus a\), and a simple arc \(b\) connecting them. The regular neighborhood of \(\alpha_{1}\cup b\cup\alpha_{2}\) is a pair of pants whose third boundary component is the required curve. So we can assume that \(a\) is separating and at least one component \(C\) of \(F\smallsetminus a\) is
Figure 8. Two asymptotic lines, with the half-strip cobounded by two rays
contractible. But then, by the classification of contractible surfaces with boundary, \(C\cup a\) is a closed disk with points removed from the boundary. Since the boundary of \(\Sigma\) contains only finitely many components, \(C\cup a\) is a closed disk with finitely many points removed from its boundary. As \(a\) cannot be homotoped into \(\langle\!\langle L\rangle\!\rangle\), one such point is not an end of \(a\). But then \(\Sigma\) contains a half-strip between two rays in its boundary, a contradiction.
For the last statement, write \(\langle\!\langle L\rangle\!\rangle\) as
\[K\sqcup S_{1}\sqcup\dots\sqcup S_{k},\]
where \(K\) is a compact surface and the \(S_{i}\) are pairwise disjoint half-strips. For every \(j=1,\dots,k\) there is a line \(\ell_{i_{j}}\) going through \(S_{j}\). As no line ends in a puncture, there is a sequence of essential separating curves, pairwise not homotopic, converging to the end of \(\ell_{i_{j}}\) contained in \(S_{j}\); we can choose one such curve \(\alpha_{j}\) sufficiently far out so that \(\alpha_{j}\) intersects \(\ell_{i_{j}}\cap S_{j}\) and is disjoint from \(K\). Let
\[\mu=\bigcup_{j=1}^{k}\alpha_{j}.\]
If \(\ell\) is a line in \(\langle\!\langle L\rangle\!\rangle\), by properness \(\ell\) contains the ray \(\ell_{i_{j}}\cap S_{j}\) for some \(j\). So it intersects the corresponding \(\alpha_{j}\) and thus \(\mu\).
_Remark 5.10_.: At first, one might think that condition (2) in the previous lemma is equivalent to \(\ell\) being in \(\langle L\rangle\) (up to proper homotopy). The example depicted in Figure 10 shows that this is not the case: condition (2) holds, but \(\ell\) cannot be properly homotoped into \(\langle L\rangle\).
Figure 10. \(L\) is the union of the two green lines and \(\ell\) is the orange one. Then \(\ell\) is not properly homotopic into \(\langle L\rangle\), but it is in \(\langle\!\langle L\rangle\!\rangle\).
Figure 9. The situation in the proof of Lemma 5.9
## 6. Extra tame maps: canonical decomposition
By a _decomposition_ of \(S\) we mean a collection \(\{X_{i}\}\) of pairwise disjoint subsurfaces of \(S\) (each of which may be disconnected), such that we can find representatives \(\tilde{X}_{i}\) so that \(S=\bigcup\tilde{X}_{i}^{*}\) and the only overlaps between \(\tilde{X}_{i}\) and \(\tilde{X}_{j}\), \(i\neq j\), are common boundary components. When \(f\) is a homeomorphism of \(S\), a decomposition \(\{X_{i}\}\) of \(S\) is called \(f\)-invariant if \(f(X_{i})\) is isotopic to \(X_{i}\) for all \(i\). For a subsurface \(X\subset S\), we say that \(f\) _returns to_ \(X\) if there is a power \(k>0\) such that \(f^{k}(X)\) is isotopic to \(X\). We call any such \(k>0\) a _returning time_ of \(f\) to \(X\). Note that all returning times of \(f\) to \(X\) are multiples of the smallest returning time.
Our main proposition in this section is the following.
**Proposition 6.1**.: _Suppose \(f\) is extra tame. Then there is an \(f\)-invariant decomposition of \(S\) into three subsurfaces, \(S_{\text{per}}\), \(S_{\infty}\), and \(S_{0}\), with the following properties._
* _A curve in_ \(S\) _is periodic if and only if it can be homotoped into_ \(S_{\text{per}}\)_._
* _A curve in_ \(S\) _is wandering if and only if it can be homotoped into_ \(S_{\infty}\)__
* \(S_{0}\) _contains no essential, non-peripheral curves._
_We will call \(S_{\text{per}}\), \(S_{\infty}\), and \(S_{0}\) the canonical decomposition of \(f\) on \(S\)._
The decomposition is defined as follows. Denote by \(\mathcal{C}_{\text{per}}\) the collection of \(f\)-periodic curves, and \(\mathcal{C}_{\infty}\) the collection of \(f\)-wandering curves. Let \(\tilde{S}_{\text{per}}\) be the surface spanned by \(\mathcal{C}_{\text{per}}\) (whose existence is proven in Lemma 6.3) and \(S_{\infty}\) the subsurface spanned by \(\mathcal{C}_{\infty}\) (whose existence is proven in Lemma 6.2).
Relative to a hyperbolic metric on \(S\), we will pick an almost geodesic representative for the components of \(\tilde{S}_{\text{per}}\) and \(S_{\infty}\) as follows. For each non-annular component \(X\), let \(X^{*}\) be the almost geodesic representative of \(X\). Recall this means that we first take the geodesic representative \(X^{\circ}\) of the interior of \(X\) and then remove a small regular neighborhood of a boundary component of \(X^{\circ}\) which is homotopic to another boundary component of \(X^{\circ}\). By an abuse of notation, we will continue to use \(\tilde{S}_{\text{per}}\) and \(S_{\infty}\) as the union of the representatives of their components. We define
\[\tilde{S}_{0}:=\overline{S\smallsetminus(\tilde{S}_{\text{per}}\cup S_{\infty })},\]
whose isotopy class is independent of the choice of the hyperbolic metric on \(S\).
Then:
* \(S_{\text{per}}\) is the union of \(\tilde{S}_{\text{per}}\) (the subsurface spanned by periodic curves) and the negative Euler characteristic components of \(\tilde{S}_{0}\),
* \(S_{\infty}\) is -- as already said -- the subsurface spanned by wandering curves,
* \(S_{0}\) is the union of all components in \(\tilde{S}_{0}\) which have non-negative Euler characteristic.
As just seen, a fundamental step in the proof of the proposition is to show that, for an extra tame map, \(\mathcal{C}_{\text{per}}\) and \(\mathcal{C}_{\infty}\) span subsurfaces \(\tilde{S}_{\text{per}}\) and \(S_{\infty}\). We then need to show that \(\tilde{S}_{0}\) has no interesting topology. All three proofs are done via contradiction. Namely, we will show that if the required statement is false, then we can use Lemma 5.7 to find an essential curve \(\alpha\) with infinite limit set \(\mathcal{L}(\alpha)\). The proofs in the cases of \(\tilde{S}_{\text{per}}\) and \(S_{\infty}\) are independent of each other. The proof for \(\tilde{S}_{0}\) is more technical, and it relies on the fact that we already know it is a subsurface, being the complement of two subsurfaces, and further no curve in \(\tilde{S}_{0}\) can be periodic or wandering, and thus must limit onto a non-empty collection of lines.
Let us then prove that \(S_{\infty}\) and \(\tilde{S}_{\text{per}}\) are subsurfaces.
**Lemma 6.2**.: _If \(f\) is extra tame, then \(\mathcal{C}_{\infty}\) spans a subsurface \(S_{\infty}\). Every curve in \(S_{\infty}\) is wandering, and if \(\alpha\) is any curve with \(\mathcal{L}(\alpha)\neq\emptyset\), then \(\mathcal{L}(\alpha)\) does not intersect \(S_{\infty}\) essentially._
Proof.: Suppose there is no subsurface spanned by \(\mathcal{C}_{\infty}\). Then we can find curves \(\alpha\), \(\beta_{i}\) and \(\gamma_{j}\) and arcs \(\tau_{j}\) as in Lemma 5.7 and Remark 5.8. Since \(\gamma_{j}\) is not wandering, \(\mathcal{L}(\gamma_{j})\) is nonempty and by construction it is contained in \(\mathcal{L}(\alpha)\), which is finite. So there is some \(\ell\in\mathcal{L}(\alpha)\) which is contained in infinitely many \(\mathcal{L}(\gamma_{j})\).
Let \(\delta\) be a curve intersecting \(\ell\); then \(\mathcal{L}(\delta)\) intersects infinitely many \(\gamma_{j}\). By finiteness of \(\mathcal{L}(\delta)\), there is \(\ell^{\prime}\in\mathcal{L}(\delta)\) which intersects infinitely many \(\gamma_{j}\). If \(i(\ell^{\prime},\beta_{i_{k}})\neq 0\), then \(i(\delta,\mathcal{L}(\beta_{i_{k}}))\neq 0\), which is a contradiction (since the \(\beta_{i_{k}}\) are wandering). So \(\ell^{\prime}\) needs to intersect infinitely many \(\tau_{j}\), and thus it intersects \(\alpha\) infinitely many times, a contradiction.
Every curve \(\beta\) in \(S_{\infty}\) is a concatenation of wandering curves and hence is wandering by Lemma 5.2. If \(\mathcal{L}(\alpha)\) intersects the interior of \(S_{\infty}\), since \(S_{\infty}\) is spanned by wandering curves, there would be some \(\beta\subset S_{\infty}\) intersecting \(\mathcal{L}(\alpha)\), but this contradicts Lemma 3.8.
**Lemma 6.3**.: _If \(f\) is extra tame, then \(\mathcal{C}_{\text{per}}\) spans a subsurface \(\tilde{S}_{\text{per}}\), and every curve in \(\tilde{S}_{\text{per}}\) is periodic._
Proof.: Suppose there is no subsurface spanned by periodic curves. Then we can find curves \(\alpha\), \(\beta_{i}\) and \(\gamma_{j}\) and arcs \(\tau_{j}\) as in Lemma 5.7 and Remark 5.8. We know from the lemma that \(\alpha\) is not periodic; moreover, since it intersects periodic curves, it is not wandering either. Thus \(\mathcal{L}(\alpha)\) is non-empty and contains only lines.
As the pair of pants spanned by \(\beta_{i_{j}}\cup\tau_{j}\cup\beta_{i_{j+1}}\) is not \(f\)-periodic, the length of \(\tau_{j}^{n}\) is not bounded (because there are finitely many isotopy classes of arcs of bounded length between the geodesic representative of \(f^{n}(\beta_{i_{j}})\) and the geodesic representative of \(f^{n}(\beta_{i_{j+1}})\), for \(n\geq 0\)). Moreover, the smallest angle of intersection of \(\tau_{j}^{n}\) with \(f^{n}(\beta_{i_{j}})\) and \(f^{n}(\beta_{i_{j+1}})\) is bounded away from zero, otherwise there is a sequence of iterates of \(\alpha\) wrapping around some iterate of \(\beta_{i_{j}}\) or \(\beta_{i_{j+1}}\) more and more times, contradicting tameness. So the \(\tau_{j}^{n}\) leave every compact set, but also intersect periodic curves. Hence the sequence \(\{\tau_{j}^{n}\mid n\geq 0\}\) accumulates somewhere, and the accumulation set contains rays starting at iterates of \(\beta_{i_{j}}\) or \(\beta_{i_{j+1}}\). For every \(j\), fix one such ray \(r_{j}\). By construction, \(\tau_{j}\) intersects only \(\beta_{i_{k}}\) for \(k=j,j+1\), so \(r_{j}\) does not intersect any iterate of any \(\beta_{i_{k}}\) for \(k\neq j,j+1\). In particular the rays \(r_{2j}\) are pairwise distinct and not nested and each of them is contained in some line of \(\mathcal{L}(\alpha)\). As \(\mathcal{L}(\alpha)\) is finite, this gives us a contradiction.
Every curve in \(\tilde{S}_{\text{per}}\) is a concatenation of periodic curves and hence periodic by Lemma 5.2.
**Lemma 6.4**.: _If \(f\) is extra tame, then \(\tilde{S}_{\text{per}}\) and \(S_{\infty}\) are disjoint subsurfaces._
Proof.: Since they are subsurfaces spanned by curves, if they do not have disjoint representatives, then there is a curve \(\alpha\) in \(\tilde{S}_{\text{per}}\) and a curve \(\beta\) in \(S_{\infty}\) such that
\[i(\alpha,\beta)>0,\]
which is a contradiction.
**Lemma 6.5**.: _If \(f\) is extra tame, \(\tilde{S}_{0}\) has no essential, non-peripheral curves._
Proof.: We prove this by contradiction. Suppose there is a curve \(\delta\) in \(\tilde{S}_{0}\) which is essential and non-peripheral. Then \(\delta\) cannot be wandering or periodic, so \(L:=\mathcal{L}(\delta)\) is a nonempty collection of lines. Let \(L^{\pm}:=\mathcal{L}^{\pm}(\delta).\) We define
\[\mathcal{C}=\{\beta\subset\tilde{S}_{0}\mid\mathcal{L}^{\pm}(\beta)\subset\langle\!\langle L^{\pm}\rangle\!\rangle\}.\]
Note that \(\mathcal{C}\) is closed under taking finite concatenations of curves in \(\mathcal{C}\): indeed, if a curve \(\beta\) is a concatenation \(b_{1}*\cdots*b_{k}\) of pieces \(b_{i}\subset\beta_{j}\in\mathcal{C}\), as the \(\beta_{j}\) are in \(\tilde{S}_{0}\), so is \(\beta\). Moreover, suppose
a line \(\ell\in\mathcal{L}^{\pm}(\beta)\) were not in \(\langle\!\langle L^{\pm}\rangle\!\rangle\). Then by Lemma 5.9 there is a curve \(\eta\) intersecting \(\ell\) and disjoint from all lines in \(L^{\pm}\). Thus there are infinitely many iterates of \(\beta\) intersecting \(\eta\) and hence some \(j\) so that infinitely many iterates of \(\beta_{j}\) intersect \(\eta\). Thus \(\mathcal{L}^{\pm}(\beta_{j})\) contains a line intersecting \(\eta\), but this contradicts Lemma 5.9 as \(\mathcal{L}^{\pm}(\beta_{j})\subset\langle\!\langle L^{\pm}\rangle\!\rangle\).
**Claim 6.6**.: _There is a subsurface \(S_{L}\) spanned by \(\mathcal{C}\)._
Proof.: By contradiction, suppose there is no subsurface spanned by \(\mathcal{C}\). Then we can find curves \(\alpha\), \(\beta_{i}\) and \(\gamma_{j}\) and arcs \(\tau_{j}\) as in Lemma 5.7 and Remark 5.8. Note that we can choose \(\alpha\subset\tilde{S}_{0}\): since all curves in \(\mathcal{C}\) lie in \(\tilde{S}_{0}\) and \(\tilde{S}_{0}\) is a subsurface, the construction of Lemma 5.7 can be carried out inside \(\tilde{S}_{0}\).
Let \(\mathcal{L}^{\pm}(\tau_{j})\) be \(\mathcal{L}^{\pm}(\{\tau_{j}^{n}\mid n\in\mathbb{Z}\})\). Note that for every \(j\) at least one between \(\mathcal{L}^{+}(\tau_{j})\) and \(\mathcal{L}^{-}(\tau_{j})\) is nonempty, otherwise \(\mathcal{L}^{\pm}(\gamma_{j})\subset\langle\!\langle\mathcal{L}^{\pm}(\beta_ {j})\cup\mathcal{L}^{\pm}(\beta_{j+1})\rangle\!\rangle\subset\langle\!\langle L ^{\pm}\rangle\!\rangle\), so \(\gamma_{j}\subset S_{L}\), which is impossible. So \(\mathcal{L}^{\pm}(\tau_{j})\) is a nonempty collection of arcs, rays and lines contained in \(\mathcal{L}^{\pm}(\alpha)\smallsetminus\langle\!\langle L^{\pm}\rangle\!\rangle\), where the segments go from \(\partial\langle\!\langle L^{\pm}\rangle\!\rangle\) to \(\partial\langle\!\langle L^{\pm}\rangle\!\rangle\) and rays start from \(\partial\langle\!\langle L^{\pm}\rangle\!\rangle\). Note moreover that \(\ell\smallsetminus\langle\!\langle L^{\pm}\rangle\!\rangle\) has finitely many components for every \(\ell\in\mathcal{L}^{\pm}(\alpha)\). As \(\mathcal{L}^{\pm}(\alpha)\) and \(\langle\!\langle L^{\pm}\rangle\!\rangle\) are \(f\)-invariant, up to passing to a power we can assume that each component of \(\ell\smallsetminus\langle\!\langle L^{\pm}\rangle\!\rangle\) is \(f\)-invariant, for every \(\ell\in\mathcal{L}^{\pm}(\alpha)\). Let \(\{A_{1},\ldots,A_{k}\}\) be the collection of all these components.
Without loss of generality, assume that \(L^{+}\neq\emptyset\); up to passing to a subsequence, we can assume that \(\mathcal{L}^{+}(\tau_{j})\neq\emptyset\) for every \(j\).
Suppose that an arc, ray or line \(A\) belongs to \(\mathcal{L}^{+}(\tau_{j})\). Then there is some sequence \(l_{i}\to\infty\) so that \(\tau_{j}^{l_{i}}\) converges and the limit contains \(A\). But as all components of \(\ell\smallsetminus\langle\!\langle L^{+}\rangle\!\rangle\), for all \(\ell\), are \(f\)-invariant, \(\tau_{j}^{n}\) has the same limit containing \(A\) as \(n\to\infty\).
Fix a compact subsurface \(K\) intersecting all components of \(\ell\smallsetminus\langle\!\langle L^{+}\rangle\!\rangle\), for every \(\ell\). By tameness, for every component \(A_{i}\) there is a number \(N(A_{i})\) so that for every \(n\), \(f^{n}(\alpha)\cap K\) has at most \(N(A_{i})\) strands parallel to \(A_{i}\cap K\).
Let \(J=\sum_{i=1}^{k}N(A_{i})+1\) and consider \(\tau_{1},\ldots,\tau_{J}\). By our assumptions, for every \(i\) there are at most \(N(A_{i})\) among the \(\tau_{j}\) with \(A_{i}\subset\mathcal{L}^{+}(\tau_{j})\). But then there is at least one \(\tau_{j}\) which cannot have any \(A_{i}\) in the limit, a contradiction.
Up to replacing \(f\) by its inverse, we can assume that \(L^{+}\) is nonempty. By Lemma 5.9, we can find a finite collection of curves \(\{\delta_{1}^{+},\ldots,\delta_{k}^{+}\}\) intersecting all lines in \(\langle\!\langle L^{+}\rangle\!\rangle\). Then by Lemma 3.7, for every curve \(\gamma\) in \(S_{L}\), either
* (a) \(\mathcal{L}^{+}(\gamma)=\emptyset\), or
* (b) \(i(\gamma,\mathcal{L}^{-}(\delta_{i}))\neq 0\) for some \(i\).
If all curves in \(S_{L}\) satisfy (b), then every connected component of \(S_{L}\) is filled by a finite collection of lines, so it is of finite type. If one connected component is \(f\)-periodic, it contains a periodic curve; otherwise all connected components go to infinity (they cannot accumulate, because \(S_{L}\) is a subsurface), so \(S_{L}\) contains a wandering curve. In either situation, we have a contradiction.
So we can assume that there is a curve \(\gamma\) in \(S_{L}\) satisfying (a), which is then contained in \(S_{L}\smallsetminus\mathcal{L}^{-}(\delta_{i})\). Note that \(S_{L}\smallsetminus\mathcal{L}^{-}(\delta_{i})\) is an \(f\)-invariant subsurface.
Let \(\{\delta_{j}^{-}\}\) be a finite collection of curves intersecting all lines in \(\langle\!\langle L^{-}\rangle\!\rangle\). If \(\gamma\subset S_{L}\smallsetminus\mathcal{L}^{-}(\delta_{2})\), \(\mathcal{L}^{+}(\gamma)=\emptyset\). As \(\gamma\) isn't wandering, \(\mathcal{L}^{-}(\gamma)\neq\emptyset\). Since \(\mathcal{L}^{-}(\gamma)\subset\langle\!\langle L^{-}\rangle\!\rangle\), by Lemma 3.7, \(i(\gamma,\mathcal{L}^{+}(\delta_{3}))\neq 0\). Hence every connected component of \(S_{L}\smallsetminus\mathcal{L}^{-}(\delta_{2})\) is filled by finitely many lines that intersect finitely many times and hence is of finite type. As before we deduce that there is either a periodic curve or a curve going to infinity in \(S_{L}\smallsetminus\mathcal{L}^{-}(\delta_{2})\), a contradiction.
A consequence of the lemma is the following.
**Corollary 6.7**.: _Any negative Euler characteristic component of \(\tilde{S}_{0}\) is a pair of pants._
Proof.: If the Euler characteristic were smaller than \(-1\), there would be essential non-peripheral curves. So the Euler characteristic is \(-1\). If the component is not a pair of pants, it is a pair of pants with some points removed from its boundary, and therefore contains an essential non-peripheral curve, a contradiction.
### Examples
We end this section with some examples of tame but not extra tame maps for which the conclusions of Proposition 6.1 are false.
**Lemma 6.8**.: _There exists a tame map \(f\) such that \(\mathcal{L}(\alpha)\) is locally finite for every curve \(\alpha\), but the collection of periodic curves does not span a subsurface._
Proof.: Consider the cylinder \(\Sigma:=S^{1}\times\mathbb{R}\) and fix a Cantor set \(K\subset S^{1}\). Let \(S\) be the surface obtained as
\[S:=\Sigma\smallsetminus\bigcup_{n\in\mathbb{Z}\cup\{\pm\infty\}}K\times\{t_{n}\},\]
where \(t_{\pm\infty}=\pm 1\), \(t_{0}=0\), \(t_{n}=-1+2^{n}\) for \(n<0\) and \(t_{n}=1-2^{-n}\) for \(n>0\) (see Figure 11).
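For instance, \(t_{-2}=-\tfrac{3}{4}\), \(t_{-1}=-\tfrac{1}{2}\), \(t_{1}=\tfrac{1}{2}\) and \(t_{2}=\tfrac{3}{4}\), so the heights \(t_{n}\) increase monotonically in \(n\) and accumulate only at \(\pm 1\).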
Fix a monotone increasing homeomorphism \(s:\mathbb{R}\to\mathbb{R}\) such that
\[s(t_{\pm\infty}) =t_{\pm\infty}\] \[s(t_{k}) =t_{k+1}\ \ \ \forall k\in\mathbb{Z}.\]
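For concreteness, one admissible choice of \(s\) (this is just one possibility; any homeomorphism with the two properties above works) is the piecewise linear map
\[s(t)=\begin{cases}t&t\leq-1,\\ 2t+1&-1\leq t\leq-\tfrac{1}{2},\\ t+\tfrac{1}{2}&-\tfrac{1}{2}\leq t\leq 0,\\ \tfrac{1+t}{2}&0\leq t\leq 1,\\ t&t\geq 1,\end{cases}\]
which fixes \(\pm 1\) and sends each \(t_{k}\) to \(t_{k+1}\).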
Then the map \(\operatorname{id}\times s:\Sigma\to\Sigma\) induces a tame homeomorphism \(f\) of \(S\), which has no wandering curves. The periodic curves are also fixed curves, which are those that can be homotoped to be vertical between height \(-1\) and height \(1\). In particular \(\mathcal{C}_{\operatorname{per}}\) does not span a subsurface.
**Lemma 6.9**.: _There exists a tame map \(f\) such that \(\mathcal{L}(\alpha)\) is locally finite for every curve \(\alpha\), but \(\mathcal{C}_{\infty}\) does not span a subsurface. Moreover, we can construct such an example so that a single component of \(S_{\infty}\) is not a subsurface._
Proof.: In the plane, for any \(k\in\mathbb{N}\), let \(\ell_{k}\) be the boundary of
\[[k+1,\infty)\times\left[\frac{1}{k+1}+\varepsilon_{k},\frac{1}{k}-\varepsilon _{k}\right],\]
Figure 11. A sketch of the surface in Lemma 6.8, with a periodic curve (in green) and the Cantor sets in light blue
where \(\varepsilon_{k}=\frac{1}{10k^{2}}\) (see Figure 12 for a schematic picture). Let \(S\) be the plane punctured at the points \(\left(\frac{1}{n},m\right)\), for \(n\in\mathbb{N}\) and \(m\in\mathbb{Z}\), at the points \((0,m)\), for \(m\in\mathbb{Z}\), and for every line \(\ell_{k}\), \(k\in\mathbb{N}\), at a sequence of points on the line not accumulating anywhere.
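Note that with this choice of \(\varepsilon_{k}\) the rectangles are nondegenerate and pairwise disjoint: for every \(k\geq 1\),
\[\frac{1}{k}-\frac{1}{k+1}=\frac{1}{k(k+1)}\geq\frac{1}{2k^{2}}>\frac{2}{10k^{2}}=2\varepsilon_{k},\]
so \(\frac{1}{k+1}+\varepsilon_{k}<\frac{1}{k}-\varepsilon_{k}\), and the \(k\)-th strip lies strictly between the heights \(\frac{1}{k+1}\) and \(\frac{1}{k}\).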
The homeomorphism \(f\) that we consider is given by:
* the map induced by \((x,y)\mapsto(x+1,y)\) on a small regular neighborhood of each horizontal line \(y=\frac{1}{n}\) (\(n\in\mathbb{N}\)) and \(y=0\),
* a puncture shift supported on a small regular neighborhood of \(\ell_{k}\), for every \(k\), tapered to the identity on the rest of the surface.
One can show that \(f\) is tame. Moreover, \(F(\mathcal{C}_{\infty})\) is the disjoint union of one strip per line \(y=\frac{1}{n}\), \(n\in\mathbb{N}\), and of the strips \((k,\infty)\times\left(\frac{1}{k+1}+\frac{\varepsilon_{k}}{2},\frac{1}{k}- \frac{\varepsilon_{k}}{2}\right)\). The closure of each component is a subsurface, but the horizontal strips accumulate onto \(y=0\), so \(\mathcal{C}_{\infty}\) doesn't span a subsurface. Note moreover that there are no periodic curves.
We can also modify the surface by adding handles connecting consecutive horizontal strips in a translation invariant way (see Figure 13 for a schematic picture). The map induced by \(f\) on this new surface is also tame and has no periodic curves. Now though \(F(\mathcal{C}_{\infty})\) is given by one component per line \(\ell_{k}\) and a single nonplanar component. The nonplanar component is not homotopic to a subsurface.
## 7. Characterization of translations
In this section, we give a characterization of a translation in terms of the dynamics of its action on curves. The main application will be to an extra tame map \(f\) and its action on the invariant subsurface \(S_{\infty}\) in its canonical decomposition. Therefore, in this section, we need to
Figure 12. The lines \(\ell_{k}\) in the construction of Lemma 6.9
Figure 13. The modified surface in Lemma 6.9
work with surfaces with boundary. Everything in this section is independent of the previous sections.
**Lemma 7.1**.: _Let \(X\) be a surface possibly with boundary and \(f\) a homeomorphism of \(X\) such that every curve in \(X\) is \(f\)-wandering. Then there are ends \(e_{\pm}\) of \(X\), possibly \(e_{+}=e_{-}\), such that for every curve \(\alpha\) in \(X\), \(f^{n}(\alpha)^{*}\) converges to \(e_{\pm}\) as \(n\to\pm\infty\)._
We will call \(e_{+}\) and \(e_{-}\) respectively the \(f\)_-attracting_ and \(f\)_-repelling_ ends of \(X\).
Proof.: Let \(\alpha\) be a curve in \(X\). We first show that \(f^{n}(\alpha)^{*}\) converges to an end. Let
\[\alpha^{*}=\alpha_{0},\alpha_{1},\ldots,\alpha_{k}=f(\alpha)^{*}\]
be a chain of geodesic curves in \(X\) connecting \(\alpha^{*}\) and \(f(\alpha)^{*}\). Given any compact set \(K\) there is \(n_{0}\) so that if \(n>n_{0}\), the \(f^{n}(\alpha_{i})^{*}\) are disjoint from \(K\). In particular, \(f^{n}(\alpha)^{*}\) and \(f^{n+1}(\alpha)^{*}\) are in the same complementary component of \(K\), for \(n>n_{0}\), so \(f^{n}(\alpha)^{*}\) goes to a single end \(e_{+}\) as \(n\to\infty\). The same argument shows that \(f^{n}(\alpha)^{*}\) converges to an end \(e_{-}\) as \(n\to-\infty\).
If \(\beta\) is another curve in \(X\), by looking at a chain of curves in \(X\) connecting \(\alpha^{*}\) to \(\beta^{*}\) and repeating the same argument as above, we deduce that \(f^{n}(\beta)^{*}\to e_{\pm}\) as \(n\to\pm\infty\).
Note that a surface \(X\) does not necessarily coincide with the subsurface spanned by its curves. This is for instance not the case for a surface which doesn't contain any curve (such as a closed disk with points removed from the boundary) or for surfaces with rays in the boundary cobounding a half-strip (for instance, a one-holed torus with points removed from the boundary). We say that \(X\) is _filled by its curves_ if it does coincide with the subsurface spanned by its curves.
**Lemma 7.2**.: _Let \(X\) be a surface possibly with boundary and \(f\) a homeomorphism on \(X\) such that every curve in \(X\) is \(f\)-wandering. Suppose \(X\) is filled by its curves and the \(f\)-attracting and repelling ends \(e_{+}\) and \(e_{-}\) are distinct. Then for any hyperbolic metric on \(X\), there is a curve or a finite collection of arcs \(\delta\) separating \(e_{+}\) and \(e_{-}\), and \(f^{n}(\delta)^{*}\) is disjoint from \(\delta^{*}\) for all sufficiently large \(n\)._
Proof.: Let \(D(X)\) be the double of \(X\) along its boundary (which coincides with \(X\) if \(X\) has no boundary components), with the double hyperbolic structure. The map \(f\) induces a map \(\hat{f}\) on the double. Let \(i:X\hookrightarrow D(X)\) be the inclusion map and \(i_{*}\) the induced map from \(\operatorname{Ends}(X)\) to \(\operatorname{Ends}(D(X))\). Note that \(i_{*}\) is injective, because if \(e_{1},e_{2}\in\operatorname{Ends}(X)\) are distinct ends, there is a compact subset \(K\) separating them, and therefore \(i_{*}(e_{1})\) and \(i_{*}(e_{2})\) are separated by the double of \(K\) in \(D(X)\).
Since \(D(X)\) has no boundary, there is a curve \(\hat{\delta}\subset D(X)\) separating \(i_{*}(e_{+})\) and \(i_{*}(e_{-})\). Assume \(\hat{\delta}\) is in minimal position with respect to \(i(\partial X)\) and let \(\delta:=i^{-1}(\hat{\delta})\). Since the boundary components of \(X\) are realized as geodesic lines in \(D(X)\), \(\hat{\delta}\) intersects each boundary component of \(i(X)\) at most finitely many times, and the boundaries of \(i(X)\) in \(D(X)\) cannot accumulate, so \(\delta\) has only finitely many components.
**Claim 7.3**.: \(\delta\) _separates \(e_{+}\) and \(e_{-}\)._
Proof.: If not, we can find a line \(\ell\) in \(X\) from \(e_{+}\) to \(e_{-}\) disjoint from \(\delta\), and hence a line \(i(\ell)\) from \(i_{*}(e_{+})\) to \(i_{*}(e_{-})\) disjoint from \(\hat{\delta}\).
Let \(H^{+}\) be the component of \(X\smallsetminus\delta^{*}\) containing \(e_{+}\). We claim there exists \(n\geq 1\) such that \(f^{n}(\delta)^{*}\subset H^{+}\), and hence in particular that \(\delta^{*}\) and \(f^{n}(\delta)^{*}\) are disjoint. This is immediate if \(\delta\) is a curve since it is wandering and converging to \(e_{+}\). If \(\delta\) is an arc, then let \(L_{1},L_{2}\) be the
boundary components of \(X\) containing the endpoints of \(\delta\). Since \(X\) is filled by its curves, we can find a finite-type subsurface \(F\subset X\) such that \(\delta^{*}\smallsetminus F^{*}\) is contained in \(N_{1}\cup N_{2}\), where \(N_{i}\) is a regular neighborhood about \(L_{i}\) and \(F^{*}\) is the geodesic representative of \(F\). Since \(F\) is filled by curves in \(X\), for all sufficiently large \(n\), \(f^{n}(F)^{*}\subset H^{+}\). If \(f^{n}(\delta)^{*}\) meets \(\delta\), then \(f^{n}(\delta)^{*}\) would have to intersect \(\delta\) essentially in \(N_{1}\cup N_{2}\), but this is not possible. Thus the ends of \(f^{n}(\delta)^{*}\) also have to lie in \(H^{+}\). This shows \(f^{n}(\delta)^{*}\subset H^{+}\) for all sufficiently large \(n\). If \(\delta\) is a finite union of arcs, we can repeat the same argument for every component.
**Proposition 7.4**.: _Let \(X\) be a surface possibly with boundary and \(f\) a homeomorphism on \(X\) such that every curve in \(X\) is \(f\)-wandering and \(X\) is filled by its curves. Then every arc in \(X\) with endpoints on \(\partial X\) is \(f\)-wandering._
Proof.: Fix a complete hyperbolic structure on \(X\) with totally geodesic boundary, so that \(X\) has injectivity radius greater than some positive constant \(\epsilon\). Let \(\delta=\delta(\epsilon)\) be a constant such that if \(\beta\) is a piecewise geodesic loop with each segment at least \(\epsilon\) long and angles between consecutive segments at least \(\pi/2\), then \(\beta\) is \(\delta\) close to its geodesic representative \(\beta^{*}\).
Let \(\ell\) be a boundary component of \(X\) containing the endpoints of the arc under consideration. Note that there is nothing to show if \(\ell\) is \(f\)-wandering. Thus we can assume that \(f^{k}(\ell)=\ell\) for some \(k\geq 1\).
For any arc \(\gamma\), let \(\gamma^{*}\) be its geodesic representative which is orthogonal to the boundary of \(X\). Our goal is to show \(f^{n}(\gamma)^{*}\) leaves every compact set of \(X\). It is enough to show \(f^{kn}(\gamma)^{*}\to e_{+}\) for some \(k\geq 1\), as this implies that \(f^{kn+m}(\gamma)^{*}\to e_{+}\) for every \(m\). Thus up to replacing \(f\) by a power, we may assume \(f(\ell)=\ell\) and \(\gamma\) is an arc with both endpoints on \(\ell\).
Set \(\gamma_{n}=f^{n}(\gamma)\). As for curves, we denote by \(\mathcal{L}^{\pm}(\gamma)\) the accumulation set of \(\gamma_{n}^{*}\) as \(n\to\pm\infty\). Applying the same argument as in Lemma 3.7 to arcs, we have that
\[\mathcal{L}(\gamma)\cap\beta\neq\emptyset\Longleftrightarrow\mathcal{L}( \beta)\cap\gamma\neq\emptyset,\]
where \(\beta\) is any curve or arc in \(S\). Since curves in \(X\) are wandering, \(\mathcal{L}(\gamma)\) cannot intersect the interior of \(X\) essentially. Parametrize \(\ell\) by \(\mathbb{R}\) isometrically, and let \(a_{n},b_{n}\in\mathbb{R}\) be the endpoints of \(\gamma_{n}^{*}\) with \(a_{n}<b_{n}\).
**Claim 7.5**.: _Both sequences \(a_{n}\) and \(b_{n}\) converge simultaneously to \(\infty\) or to \(-\infty\)._
To see the claim, let \(\alpha_{n}\) be the curve in \(X\) obtained by concatenating \(\gamma_{n}^{*}\) with the segment \([a_{n},b_{n}]\subset\ell\). Since \(f^{n}(\gamma)\) and \(f^{n}(\gamma^{*})\) have the same geodesic representative \(\gamma_{n}^{*}\), \(f^{n}(\alpha_{0})^{*}=\alpha_{n}^{*}\), so \(\alpha_{n}^{*}\) converges to an end of \(X\). If there is a subsequence \(a_{n_{i}}\) that converges to \(a\in\ell\), then \(\gamma_{n_{i}}^{*}\) would converge to a ray in \(X\) emanating orthogonally from \(a\), but this is impossible since such a ray meets the interior of \(X\) essentially, and thus a curve in \(X\). Similarly for \(b_{n}\). On the other hand, if some subsequence \(a_{n_{i}}\to\pm\infty\) but \(b_{n_{i}}\to\mp\infty\), then for sufficiently large \(i\), \(\alpha_{n_{i}}\) is a quasi-geodesic with uniform constants, so it stays close to its geodesic representative \(\alpha_{n_{i}}^{*}\). But in this case, \(\alpha_{n_{i}}^{*}\) cannot converge to an end of \(X\). This finishes the proof of the claim.
Henceforth, without loss of generality assume that \(a_{n},b_{n}\to\infty\). Choose \(i\geq 1\) so that \(a_{0}<b_{0}<a_{i}<b_{i}\), and note that then \(a_{n}<b_{n}<a_{n+i}<b_{n+i}\) for all \(n\geq 0\). If \(\gamma\) is not wandering, then there is a compact convex subsurface \(K\) that intersects infinitely many \(\gamma_{n}^{*}\), for which we can assume that \(a_{n},b_{n}\notin K\).
**Claim 7.6**.: _There exists a boundary component \(\tau\) of \(K\) and \(n\) such that for all \(j\geq 0\), there exists a quadrilateral \(Q_{j}\) bounded above by an arc of \(\tau\), laterally by an arc of \(\gamma_{n}^{*}\) and an arc of \(\gamma_{n+ij}^{*}\), and below by \([a_{n},b_{n+ij}]\)._
We form another family of curves by surgery as follows. Let \(\beta_{n}\) be the closed (not necessarily simple) curve obtained by joining \(\gamma_{n}^{*}\cup\gamma_{n+i}^{*}\cup[b_{n},a_{n+i}]\cup[a_{n},b_{n+i}]\). Again, because \(f^{n}(\gamma)\) and \(f^{n}(\gamma^{*})\) have the same geodesic representative, \(f^{n}(\beta_{0})^{*}=\beta_{n}^{*}\).
Let \(N_{2\delta}(K)\) be the \(2\delta\) neighborhood of \(K\). Since \(\alpha_{0}\) and \(\beta_{0}\) are wandering, there exists \(N\) such that \(\alpha_{n}^{*}\) and \(\beta_{n}^{*}\) are disjoint from \(N_{2\delta}(K)\) for all \(n\geq N\). Fix some \(n\geq N\) so that \(\gamma_{n}^{*}\) intersects \(K\) and \(a_{n},b_{n}\notin K\). Let \(\tau\subset\partial K\) be the first boundary component that intersects \(\gamma_{n}^{*}\) as we start from \(a_{n}\). We will prove the claim by induction on \(j\).
First note that since \(n+ij\geq N\), \(\alpha_{n+ij}^{*}\) is disjoint from \(K\), so all intersections of \(\alpha_{n+ij}\) with \(K\), if they exist, are inessential. For \(j\geq 0\), if \(\gamma_{n+ij}^{*}\) meets \(\tau\), let \(A_{j}\) be the subarc of \(\gamma_{n+ij}\) that starts from \(a_{n+ij}\) until its first meeting point \(p_{j}\in\tau\), and \(B_{j}\) the subarc starting from \(b_{n+ij}\) until \(q_{j}\in\tau\). Since \(p_{j}\) and \(q_{j}\) are inessential intersections, there is a subarc \([p_{j},q_{j}]\subset\tau\) such that the geodesic loop \(A_{j}\cup[p_{j},q_{j}]\cup B_{j}\cup[a_{n+ij},b_{n+ij}]\) bounds a quadrilateral \(R_{j}\).
When \(j=0\), \(\tau\) was chosen so it intersects \(\gamma_{n}^{*}\), so the existence of \(Q_{0}=R_{0}\) is the base case. Note that \(q_{0}\) is the first place that \(\gamma_{n}^{*}\) intersects \(K\) starting from \(b_{n}\). For \(j\geq 0\), we will assume that \(Q_{j}\) has top \([p_{0},q_{j}]\subset\tau\) and sides \(A_{0}\) and \(B_{j}\). Moreover \(q_{j}\) is the first place that \(\gamma_{n+ij}^{*}\) intersects \(K\) starting from \(b_{n+ij}\).
Consider the piecewise geodesic curve \(\beta_{n+ij}\). If \(a_{n+i(j+1)}-b_{n+ij}\geq\epsilon\), then \(\beta_{n+ij}\) is a quasi-geodesic that stays \(\delta\) close to \(\beta_{n+ij}^{*}\). Since \(n+ij\geq N\), \(\beta_{n+ij}^{*}\) misses \(N_{2\delta}(K)\), so the \(\delta\)-neighborhood of \(\beta_{n+ij}^{*}\) misses \(K\). But this is a contradiction since \(\beta_{n+ij}\) intersects \(\tau\) (as \(\gamma_{n+ij}^{*}\) does) and hence \(K\). So \(a_{n+i(j+1)}-b_{n+ij}<\epsilon\).
Note that if \(\gamma_{n}^{*}\) and \(\gamma_{m}^{*}\) are at most \(\varepsilon\)-apart, they are homotopic, which is impossible since the \(a_{n}\) go to infinity. So we can start from \(b_{n+ij}\) and \(a_{n+i(j+1)}\) and travel along \(\gamma_{n+ij}^{*}\) and \(\gamma_{n+i(j+1)}^{*}\) respectively until the first moment they become \(\epsilon\)-apart, say at \(x\in\gamma_{n+ij}^{*}\) and \(y\in\gamma_{n+i(j+1)}^{*}\). Let \(B_{j}^{\prime}\) and \(A_{j+1}^{\prime}\) be the corresponding subarcs, and \([x,y]\) the segment of length \(\epsilon\). Since the injectivity radius of \(X\) is greater than \(\epsilon\), the piecewise geodesic loop \(B_{j}^{\prime}\cup[x,y]\cup A_{j+1}^{\prime}\cup[b_{n+ij},a_{n+i(j+1)}]\) bounds a quadrilateral \(Q^{\prime}\). Homotope the inner segment \([b_{n+ij},a_{n+i(j+1)}]\) of \(\beta_{n+ij}\) along \(Q^{\prime}\) until \([x,y]\). If \(b_{n+i(j+1)}-a_{n+ij}<\epsilon\), we can also homotope the outer segment of \(\beta_{n+ij}\) along a (shorter) quadrilateral until they are \(\epsilon\)-apart. The result is a piecewise geodesic curve \(\beta_{n+ij}^{\prime}\) homotopic to \(\beta_{n+ij}\), but now \(\beta_{n+ij}^{\prime}\) is a quasi-geodesic contained in the \(\delta\)-neighborhood of \(\beta_{n+ij}^{*}\), so it cannot intersect \(K\). In particular, the homotopy through \(Q^{\prime}\) passes through \(K\). Since \(\tau\) is the first boundary of \(K\) that meets \(\gamma_{n+ij}^{*}\) starting from \(b_{n+ij}\), it must intersect \(B_{j}^{\prime}\) and hence \(A_{j+1}^{\prime}\). In other words, \(Q^{\prime}\) contains a subquadrilateral \(Q\) bounded above by \([q_{j},p_{j+1}]\subset\tau\) and laterally by \(B_{j}\subset B_{j}^{\prime}\) and \(A_{j+1}\subset A_{j+1}^{\prime}\). In particular, \(\gamma_{n+i(j+1)}\)
Figure 14. The situation of Claim 7.6
Figure 15. The curve \(\beta_{n}\), in green
intersects \(\tau\), so \(R_{j+1}\) exists. The desired \(Q_{j+1}\) is the extension of \(Q_{j}\) by \(Q\) and \(R_{j+1}\). This finishes the proof of the claim.
To finish the proof of the proposition, note that \(Q_{j}\) has right angles at \(a_{n}\) and \(b_{n+ij}\). By basic hyperbolic geometry, we have
\[\sinh(d_{j}/2)\leq\frac{\sinh(s_{j}/2)}{\cosh(h_{j})},\]
where \(d_{j}=b_{n+ij}-a_{n}\), \(s_{j}\) is the length of the top of \(Q_{j}\), and \(h_{j}\) is the minimal length of the two sides of \(Q_{j}\). Since \(s_{j}\) is bounded by the length of \(\tau\) and \(h_{j}\geq 0\), \(d_{j}\) is bounded but this contradicts the fact that \(b_{k}\to\infty\).
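Spelling out the last step: since \(\cosh(h_{j})\geq 1\) and \(\sinh\) is increasing, the displayed inequality gives
\[\sinh(d_{j}/2)\leq\sinh(s_{j}/2),\qquad\text{hence}\qquad d_{j}\leq s_{j}\leq\operatorname{length}(\tau),\]
which is the claimed bound on \(d_{j}\).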
**Corollary 7.7**.: _Let \(X\) be a surface possibly with boundary and \(f\) a homeomorphism on \(X\) such that every curve in \(X\) is \(f\)-wandering and \(X\) is filled by its curves. Let \(D(X)\) be its double and \(\hat{f}\) the map induced on it. Then every curve in \(D(X)\) is \(\hat{f}\)-wandering._
Proof.: Let \(\mathcal{A}\) be the collection of all (essential, i.e. not homotopic into the boundary) arcs with both endpoints on a boundary component of \(X\). Then the double \(D(X)\) is filled by the curves in the two copies of \(X\) and by the curves obtained as doubles of arcs in \(\mathcal{A}\). By Proposition 7.4, all of these curves are wandering, so by Lemma 5.2 all curves in \(D(X)\) are wandering.
**Theorem 7.8**.: _Let \(X\) be a (possibly bordered) surface which is filled by its curves. Let \(f\) be a homeomorphism of \(X\) such that every curve in \(X\) is \(f\)-wandering. Then there is a hyperbolic metric on \(X\) and an isometric translation on \(X\) isotopic to \(f\)._
Proof.: By Lemma 7.1 we know that there are ends \(e_{+}\) and \(e_{-}\) of \(X\) to which all curves converge under forward/backward iteration. In particular, \(f(e_{\pm})=e_{\pm}\). There are now two cases.
Case 1: \(e_{+}\neq e_{-}\). Let \(\delta\) and \(n\) be as in Lemma 7.2. Set \(\delta_{i}=f^{ni}(\delta)^{*}\), each of which separates \(e_{+}\) and \(e_{-}\), and \(\delta_{i+1}\subset H_{i}^{+}\), where \(H_{i}^{+}\) is the component of \(X\smallsetminus\delta_{i}\) containing \(e_{+}\). This shows that the dual graph is a line on which \(f^{n}\) acts as a translation. We can homotope \(f^{n}\) so it is a translation on \(X\), with the region between \(\delta_{0}\) and \(\delta_{1}\) a fundamental domain for the action of \(f^{n}\).
We now argue that \(f\) acts as an isometric translation of \(X\). Let \(Q=X/\langle f^{n}\rangle\) be the quotient hyperbolic surface for the action of \(f^{n}\). Since \(f\) commutes with \(f^{n}\), \(f\) descends to a map on \(Q\), whose \(n\)-th power is the identity map on \(Q\), so \(f\) is a periodic map of \(Q\). By Proposition 2.4, there is a hyperbolic metric on \(Q\) for which \(f\) acts as an isometry of \(Q\). Lifting this metric to \(X\) also makes \(f\) an isometry of \(X\). Since \(f^{n}\) is a translation of \(X\), \(f\) acts on \(X\) as a covering transformation, so it is also a translation of \(X\).
Case 2: \(e_{+}=e_{-}\). Let \(D(X)\) be the double of \(X\) along its boundary, if \(\partial X\neq\emptyset\), otherwise \(D(X)=X\). By Corollary 7.7, every curve in \(D(X)\) is wandering with respect to the map induced by \(f\) (which we will also denote by \(f\), abusing notation). Let \(e\) be the end of \(D(X)\) which is the image of \(e_{\pm}\) under the natural map \(\operatorname{Ends}(X)\to\operatorname{Ends}(D(X))\). Note that all curves of \(D(X)\) converge to \(e\) under forward and backward iteration of \(f\). If \(D(X)\neq X\), we will do every operation and make every choice below respecting the symmetry given by the doubling.
The goal is to construct an exhaustion of \(D(X)\) by \(f\)-invariant subsurfaces, each of which falls in the previous case. To construct this exhaustion, fix a curve \(\alpha_{1}\) and choose a sequence of curves
\[\alpha_{1},\alpha_{2},\ldots,\alpha_{k}=f(\alpha_{1})\]
in \(D(X)\) so that \(i(\alpha_{i},\alpha_{i+1})>0\). Let \(F\) be the subsurface spanned by \(\{\alpha_{i}\}\), which by construction is connected, and \(F\cap f(F)\) is also non-empty. But since the \(\alpha_{i}\)'s converge to
\(e\), \(F\cap f^{n}(F)=\emptyset\) for all sufficiently large \(n\). Set \(T_{1}=\bigcup f^{i}(F)\), which is \(f\)-invariant up to isotopy. Note that \(T_{1}\) is a subsurface, as it is spanned by the locally finite collection of iterates of \(\alpha_{i}\)'s.
Consider the nerve \(N\) of the cover of \(T_{1}\) by the translates of \(F\). We claim \(N\) is \(2\)-ended. Indeed, \(N\) is a locally-finite simplicial complex, since \(f^{n}(F)\cap F=\emptyset\) for \(|n|\) large. The map \(f\) acts on \(N\) by simplicial automorphism. Moreover, \(\langle f\rangle\cong\mathbb{Z}\) acts properly discontinuously and cocompactly. It follows that \(N\) has two ends -- in fact it is quasi-isometric to \(\mathbb{Z}\). Since the iterates of \(F\) converge to \(e\) in both directions, the two ends of \(N\) are two copies of \(e\), say \(t_{+}\) and \(t_{-}\), with \(f^{i}(F)\to t_{\pm}\) for \(i\to\pm\infty\). We claim these two ends \(t_{+}\) and \(t_{-}\) are distinct in \(T_{1}\).
Choose a finite subcomplex of \(N\) that separates the ends -- say the subcomplex spanned by the vertices \(\{f^{i}(F)\mid i\in[-m,m]\}\). If \(t_{+}\) and \(t_{-}\) are the same in \(T_{1}\), then there is a path that joins \(f^{n}(F)\) with \(f^{-n}(F)\) for a large \(n\gg m\), and misses \(\bigcup_{i\in[-m,m]}f^{i}(F)\). But then tracing the translates of \(F\) that the path intersects we get a path in \(N\) that joins points close to the two ends and misses the finite subcomplex that separates them, a contradiction. It follows that \(T_{1}\) has two ends \(t_{\pm}\) such that all curves in \(T_{1}\) go to these ends under forward/backward iteration. Modify \(f\) by an isotopy so that \(f(T_{1})=T_{1}\).
We now do the same for a larger collection of curves and we get a sequence of subsurfaces \(T_{1}\subset T_{2}\subset\cdots\) exhausting \(D(X)\), each of which is \(2\)-ended and \(f\)-invariant.
As in case 1, we can find exponents \(n_{i}\) so that \(Q_{i}:=T_{i}/\langle f^{n_{i}}\rangle\) is a surface and \(f\) induces a finite-order map on each \(Q_{i}\); we can assume that for every \(i\), \(n_{i}\) divides \(n_{i+1}\). Note moreover that the \(Q_{i}\) are of finite type, since the \(T_{i}\) are spanned by finitely many orbits of curves. By Proposition 2.4, we can find a hyperbolic structure \(Y_{1}\) on \(Q_{1}\) which is \(f\)-invariant and thus can be lifted to an \(f\)-invariant hyperbolic structure \(X_{1}\) on \(T_{1}\). Lift \(Y_{1}\) to a hyperbolic structure \(Y_{1}^{\prime}\) on \(T_{1}/\langle f^{n_{2}}\rangle\subset T_{2}/\langle f^{n_{2}}\rangle=Q_{2}\). By [21], we can find a hyperbolic structure \(Y_{2}\) on \(Q_{2}\) which is \(f\)-invariant and extends \(Y_{1}^{\prime}\). Lift \(Y_{2}\) to a hyperbolic structure \(X_{2}\) on \(T_{2}\) which is \(f\)-invariant and note that \(X_{2}\) extends \(X_{1}\). Repeating this procedure we obtain a sequence of hyperbolic structures \(X_{i}\) on \(T_{i}\) such that \(X_{i}\) restricts to \(X_{i-1}\) on \(T_{i-1}\). So we get a hyperbolic structure on \(D(X)\) with respect to which \(f\) is a translation. By restricting the hyperbolic metric to \(X\), we get the required structure.
Figure 16. An example of the exhaustion \(T_{1}\subset T_{2}\subset\dots\), where the map is the horizontal shift to the right, \(T_{1}\) is the surface spanned by the orbit of the green curve and \(T_{2}\) the surface spanned by the orbit of the green and the orange curve
_Remark 7.9_.: In the previous proof, in the case \(e_{+}=e_{-}\), if we don't look at the double and we follow the same procedure, we are not sure that the surfaces \(T_{i}\) exhaust \(X\): their union will contain the interior of \(X\), but not necessarily all of the boundary components. If some boundary components were missed, we would not get a hyperbolic structure on the whole of \(X\) (those boundary components might be at infinity with respect to the metric on the interior of \(X\)).
## 8. Structure theorem of extra tame maps
Our goal in this section is to prove the main theorem of the introduction. We first develop some properties of the canonical decomposition for an extra tame map \(f\). Namely, we will establish how the components of \(S_{\mathrm{per}}\), \(S_{\infty}\), and \(S_{0}\) can neighbor each other. We will also show that, for each component \(X\) of the decomposition, \(f\) returns to \(X\). We then combine the results of Section 7 and the work of Afton-Calegari-Chen-Lyman [3] to prove the main theorem.
### Properties of the canonical decomposition
For an extra tame map, recall the almost geodesic representatives of \(S_{\mathrm{per}}\), \(S_{\infty}\), and \(S_{0}\) defined in Section 6. An essential component of \(S_{\mathrm{per}}\) and \(S_{\infty}\) either has infinite type or has negative Euler characteristic; in particular, such a component always has an essential curve.
Two components of the decomposition are _adjacent_ if they have boundary components which are properly isotopic (or equivalently, they have representatives sharing at least one boundary component). A component \(X\) is _self-adjacent_ along \(\alpha\) if \(\alpha\) is properly homotopic to two distinct boundary components of \(X\). Equivalently, \(X\) is self-adjacent if there is a non-essential component \(Y\) which shares two boundary components with \(X\).
One of the goals in this section is to show there are in fact no self-adjacent components (Lemma 8.10) and no strips (Corollary 8.11). We will also show that there is no wandering component of the decomposition and prove that the first return map to each component is either periodic or a translation, and can be realized as an isometry for some hyperbolic structure with totally geodesic boundary (Lemmas 8.4, 8.5 and 8.8). Finally we determine the topological types of the components of \(S_{0}\) (Lemma 8.12).
We start by showing that certain boundary components cannot be wandering.
**Lemma 8.1**.: _If two components of \(S_{\infty}\) share a common boundary \(\alpha\), then \(\alpha\) is not wandering. Similarly, if a component of \(S_{\infty}\) is self-adjacent along \(\alpha\), then \(\alpha\) is not wandering._
Proof.: First let \(X\) and \(Y\) be components of \(S_{\infty}\) with a common boundary \(\alpha\). By contradiction, assume that \(\alpha\) is wandering. Neither \(X\) nor \(Y\) is inessential, so we can find a curve \(\beta\subset\mathrm{int}(X)\cup\mathrm{int}(Y)\cup\alpha\) that intersects \(\alpha\) essentially. As \(\beta\) is not contained in \(S_{\infty}\), it is not wandering, so \(\mathcal{L}(\beta)\neq\emptyset\). Let \(\gamma\) be a curve intersecting \(\mathcal{L}(\beta)\). By Lemma 3.7, \(\mathcal{L}(\gamma)\) intersects \(\beta\); in particular, \(\mathcal{L}(\gamma)\) is nonempty. Let \(L\in\mathcal{L}(\gamma)\) be such that \(L\) intersects \(\beta\). By Lemma 6.2, \(L\) cannot intersect \(X\) and \(Y\) essentially, but it intersects \(\beta\), so \(L=\alpha\). By \(f\)-invariance of limits, the orbit of \(\alpha\) is contained in \(\mathcal{L}(\gamma)\), which contradicts the finiteness of \(\mathcal{L}(\gamma)\). The proof in the case where \(X\) is a component of \(S_{\infty}\) self-adjacent along \(\alpha\) is similar.
**Lemma 8.2**.: _Let \(X\) be a component of \(\tilde{S}_{0}\). If \(X\) either_
* _shares at least one boundary component with_ \(S_{\infty}\) _but contains a non-contractible curve not homotopic to that boundary component; or_
* _shares at least two boundary components with_ \(S_{\infty}\)_,_
_then \(X\) is not wandering._
Proof.: In the first case, let \(\alpha\) be a non-contractible curve in \(X\) and let \(Y\) be a component of \(S_{\infty}\) sharing a boundary component \(\alpha^{\prime}\) with \(X\), such that \(\alpha\) is not homotopic to \(\alpha^{\prime}\). \(Y\) is essential, so we can find an essential curve \(\beta\) in \(X\cup Y\) crossing \(\alpha^{\prime}\), by joining a curve in \(Y\) along an arc to \(\alpha\) and taking the boundary of their regular neighborhood. In the second case, let \(Y\) and \(Z\) be components of \(S_{\infty}\) (possibly \(Y=Z\)) sharing boundary components (called \(\alpha^{\prime}\) and \(\alpha^{\prime\prime}\)) respectively with \(X\). We now find a curve \(\beta\) in \(Y\cup X\cup Z\) intersecting \(\alpha^{\prime}\) and \(\alpha^{\prime\prime}\), by joining a curve in \(Y\) and a curve in \(Z\) by an arc that crosses \(\alpha^{\prime}\) and \(\alpha^{\prime\prime}\). We will show that if \(X\) is wandering, then \(\beta\) is wandering, which is a contradiction.
Indeed, suppose \(\beta\) is not wandering. Then \(\mathcal{L}(\beta)\neq\emptyset\), but \(\mathcal{L}(\beta)\) cannot intersect \(f^{n}(Y)\) or \(f^{n}(Z)\) essentially. On the other hand, \(f^{n}(\beta)\) is contained in \(f^{n}(Y)\cup f^{n}(X)\cup f^{n}(Z)\), so some component \(L\) of \(\mathcal{L}(\beta)\) must be homotopic into \(f^{n}(X)\) for some \(n\). Since \(\mathcal{L}(\beta)\) is \(f\)-invariant and \(X\) is wandering, there must be infinitely many iterates of \(L\) in \(\mathcal{L}(\beta)\), but this contradicts the finiteness of \(\mathcal{L}(\beta)\).
We then show that all boundary components of \(S_{\infty}\) are lines.
**Lemma 8.3**.: _No component of \(S_{\infty}\) has a compact boundary component._
Proof.: We argue by contradiction. First suppose \(X\) is non-annular and \(\alpha\) is a compact boundary component of \(X\). Since \(X\) is spanned by curves, \(\alpha\) is a wandering curve. So the neighboring component \(Y\) of \(X\) joined along \(\alpha\) cannot belong to \(S_{\mathrm{per}}\) (since boundary curves of \(S_{\mathrm{per}}\) are periodic) or to \(S_{\infty}\) (by Lemma 8.1).
So suppose \(Y\) is in \(S_{0}\). If there is no \(n>0\) such that \(f^{n}(Y)\) is isotopic to \(Y\), then \(Y\) is wandering (as \(S_{0}\) is a subsurface). By Lemma 8.2, \(Y\) cannot have a non-contractible curve different from \(\alpha\). But \(Y\) cannot be a disk or a punctured disk, so it must have at least two boundary components, both of which are wandering. Hence both boundary components border \(S_{\infty}\), but this contradicts Lemma 8.1.
So for some \(n>0\), \(f^{n}(Y)\) is isotopic to \(Y\). As \(\alpha\) is wandering, \(Y\) contains infinitely many iterates of \(\alpha\), and therefore an essential non-peripheral curve, contradicting Proposition 6.5.
If \(X\) is annular, then it has a neighbor \(Y\), which cannot belong to \(S_{\mathrm{per}}\) since \(X\) is wandering, and \(Y\) cannot belong to \(S_{0}\) since it would have to have essential non-peripheral curves. So \(Y\) belongs to \(S_{\infty}\), but this contradicts Lemma 8.1, as in the previous case.
Using the previous results, we can deduce that no component of \(\tilde{S}_{0}\) is wandering, and even more, that for each of them the first return map has finite order.
**Lemma 8.4**.: _For every component \(X\) of \(\tilde{S}_{0}\), \(f\) returns to \(X\) and there is a hyperbolic structure, with compact boundary components of length one, such that the first return \(f^{k}\) is isotopic to a periodic isometry of \(X\)._
Proof.: Recall that \(X\) has no essential, non-peripheral curves, and by our construction of \(\tilde{S}_{\mathrm{per}}\) and \(S_{\infty}\), \(X\) cannot be a closed disk or a closed disk with one puncture. If \(f\) doesn't return to \(X\), then since \(\tilde{S}_{0}\) is a subsurface and \(f\)-invariant, \(X\) is wandering. In particular, \(X\) cannot border \(\tilde{S}_{\mathrm{per}}\), so it cannot have a compact boundary by Lemma 8.3. In this case, \(X\) is a disk with at most one puncture and points removed from the boundary, from which we can get a contradiction by Lemma 8.2.
So \(f\) returns to \(X\). Up to replacing \(f\) with a power, we can assume \(f(X)=X\). If \(X\) has finitely many boundary components, then \(f\) has finite order. Otherwise, \(X\) is a disk with infinitely many points removed from the boundary and possibly a puncture or a disk removed from its interior. Look at the action of \(f\) on the non-compact boundary components. If the \(f\)-orbit of every component is infinite, then \(f\) is semi-conjugate to an irrational rotation of the disk, which violates tameness. If there exists a finite orbit, then all periodic components of \(\partial X\) have the same period. Let \(Y\subset X\) be the smallest convex subsurface containing the puncture or the compact boundary component of \(X\) (if it exists), all the periodic boundary components of \(X\), and all the periodic ends of \(X\). Up to replacing \(f\) by a further power if necessary, we can assume \(f\) fixes \(Y\). We claim \(X\smallsetminus Y\) belongs to \(S_{\infty}\). Let \(Z\) be a component of \(X\smallsetminus Y\). Topologically, this is a disk with points removed from the boundary which shares a unique common boundary \(L\) with \(Y\), which is fixed by \(f\), and all other boundary components are wandering. Further, there exist ends \(e_{\pm}\) (possibly \(e_{+}=e_{-}\)) of \(L\), such that for any other \(L^{\prime}\subset\partial Z\), \(f^{i}(L^{\prime})\) converges to \(e_{\pm}\) as \(i\to\pm\infty\). Fix a wandering boundary component \(L^{\prime}\), and let \(L_{0}\) be the line in \(Z\) spanned by \(e_{+}\) and an end of \(L^{\prime}\), chosen so that \(f^{i}(L^{\prime})\), \(i\geq 0\), lies on one side of \(Z\smallsetminus L_{0}\). Let \(Z_{0}\) be the subsurface corresponding to that side, and let \(Z_{i}=f^{i}(Z_{0})\). Then \(Z_{i}\) converges to \(e_{\pm}\) as \(i\to\pm\infty\). Each wandering component of \(\partial Z\) must be a boundary component of \(S_{\infty}\). Thus we can find some curve \(\alpha\) that intersects \(L^{\prime}\) and \(X\) essentially, and \(\alpha\subset Z_{0}\cup S_{\infty}\). Such a curve must have \(\mathcal{L}(\alpha)\neq\emptyset\), since \(\alpha\) does not belong to \(S_{\infty}\), but this is impossible since \(f^{i}(\alpha)\subset Z_{i}\). This shows \(X\smallsetminus Y\) cannot have any wandering boundary components, so \(X=Y\).
Let \(k\) be the first returning time to \(X\). Then we can get a hyperbolic metric on \(X\) with respect to which \(f^{k}\) is isotopic to a finite order isometry by looking at the quotient \(X/\langle f^{k}\rangle\), choosing a hyperbolic structure with corners on the quotient and lifting it to \(X\).
We can now establish the properties of the first return map for components of \(S_{\mathrm{per}}\).
**Lemma 8.5**.: _Let \(X\) be a component of \(S_{per}\). Then \(f\) returns to \(X\) and there is a hyperbolic metric on \(X\) with compact boundary components of length one such that the first return \(f^{k}\) is isotopic to a periodic isometry of \(X\)._
Proof.: Since \(X\) is connected and every curve in \(X\) is \(f\)-periodic, \(f\) must return to \(X\). Up to replacing \(f\) by a power we may assume \(f(X)\) is isotopic to \(X\). If \(X\) has bounded topology, i.e. \(X\) is finite-type or the double of \(X\) has finite type, then \(f\) is isotopic to a periodic map of \(X\). Otherwise, choose a finite collection \(\mathcal{C}\) of orbits of curves in \(X\) and let \(F=F(\mathcal{C})\) be the subsurface spanned by \(\mathcal{C}\). After possibly enlarging \(\mathcal{C}\), we can assume \(F\) is connected and has negative Euler characteristic. Since \(\mathcal{C}\) is \(f\)-invariant so is \(F\), and since all the curves in \(\mathcal{C}\) are \(f\)-periodic, \(f|_{F}\) is isotopic to a periodic map of \(F\) by Lemma 2.2. Let \(n>0\) be the period of \(f|_{F}\). For any curve \(\alpha\) not in \(F\), we can find a larger connected \(f\)-invariant finite-type subsurface \(F^{\prime}\) containing both \(F\) and \(\alpha\). By the same reasoning, \(f\) is also isotopic to a periodic map on \(F^{\prime}\), but since \(F^{\prime}\supset F\), \(f|_{F^{\prime}}\) also has period \(n\) (by the classical Nielsen-Thurston theory). In particular, \(f^{n}(\alpha)\) is isotopic to \(\alpha\) for all \(\alpha\). By the Alexander method ([15]), this implies that \(f|_{X}\) is isotopic to a periodic map of \(X\).
The fact that we can realize the first return \(f^{k}\) to \(X\) by a periodic isometry is a direct consequence of Proposition 2.4 (see [3]).
As a consequence, we can describe more precisely what changes when modifying the decomposition from \(S_{\infty}\), \(\tilde{S}_{\mathrm{per}}\) and \(\tilde{S}_{0}\) to \(S_{\infty}\), \(S_{\mathrm{per}}\) and \(S_{0}\).
**Corollary 8.6**.: _Each pair of pants component of \(\tilde{S}_{0}\) yields a pair of pants component of \(S_{per}\), which contains at least one annular component of \(\tilde{S}_{per}\). Furthermore, no component of \(\tilde{S}_{0}\) is self-adjacent._
Proof.: Since a pair of pants component \(X\) of \(\tilde{S}_{0}\) has only compact boundary components, it must border at least one component \(Y\) of \(\tilde{S}_{\mathrm{per}}\). If \(Y\) is not an annulus, we can construct a curve \(\alpha\subset X\cup Y\) intersecting the boundary of \(X\) essentially. As some power of \(f\) is periodic
on both \(X\) and \(Y\), and by tameness there is no twisting allowed along the boundary, \(\alpha\) is periodic, a contradiction.
If a component \(X\) of \(\tilde{S}_{0}\) were self-adjacent along a boundary component \(\alpha\), then \(\alpha\) must be a curve: otherwise there is a component \(Y\) which is a strip with both of its boundary components shared with \(X\). As \(Y\) contains no essential curves, it is contained in \(\tilde{S}_{0}\), but then by construction \(X\) and \(Y\) would be part of the same component of \(\tilde{S}_{0}\). But we know by Lemma 8.4 that \(\alpha\) is periodic, so the annulus \(Z\) about \(\alpha\) is a component of \(\tilde{S}_{\mathrm{per}}\). Since \(X\) has at least two compact boundary components, it is a pair of pants, but then as before we could construct a curve \(\beta\) contained in the genus-one surface \(X\cup Z\) which intersects \(\alpha\) essentially and is periodic, a contradiction.
Our next step is to exclude (self-)adjacency of components of \(S_{\mathrm{per}}\).
**Lemma 8.7**.: _No two components of \(S_{\mathrm{per}}\) can be adjacent, neither can a component of \(S_{\mathrm{per}}\) be self-adjacent._
Proof.: First suppose \(X\) and \(Y\) are essential components of \(S_{\mathrm{per}}\) and let \(\alpha\) be a common boundary component. Choose a regular neighborhood \(N(\alpha)\) about \(\alpha\) contained in \(X\cup Y\). There exists \(n\) such that \(f^{n}\) is isotopic to the identity on \(X\) and \(Y\). Isotope \(f^{n}\) so that it takes \(N(\alpha)\) to \(N(\alpha)\) fixing \(\partial N(\alpha)\) pointwise, and further isotope \(f^{n}\) so that it is the identity on the truncated subsurfaces \(X\setminus N(\alpha)\) and \(Y\setminus N(\alpha)\). If \(N(\alpha)\) is a strip, then we can further isotope \(f^{n}\) (rel boundary) inside of \(N(\alpha)\) to the identity map. If \(\alpha\) is a curve, then since \(f\) is (extra) tame, it cannot twist about \(\alpha\), so \(f^{n}\) is also isotopic (rel boundary) to the identity map on \(N(\alpha)\). But this implies \(f^{n}\) is isotopic to the identity on \(X\cup Y\), and in particular, we can find an essential curve \(\beta\) in \(X\cup Y\) crossing \(\alpha\) which is \(f\)-periodic, a contradiction.
The proof in the case that a component \(X\) is self-adjacent along a periodic annular component \(N(\alpha)\) is similar. In this case, we can isotope \(f^{n}\) to be the identity on \(X\cup N(\alpha)\), which contradicts that \(X\) and \(N(\alpha)\) are distinct components of \(S_{\mathrm{per}}\).
With all these results at hand, we can prove the wanted properties of the first return map of components of \(S_{\infty}\).
**Lemma 8.8**.: _No component \(X\) of \(S_{\infty}\) has a wandering boundary component. In particular, \(f\) returns to \(X\), and there is a hyperbolic metric on \(X\) such that the first return \(f^{n}\) is isotopic to an isometric translation on \(X\)._
Proof.: Let \(\alpha\) be a wandering boundary component of \(X\). Since \(f\) returns to each component of \(S_{\mathrm{per}}\) and \(S_{0}\) and the first return maps are periodic, the neighbor \(Y\) of \(X\) along \(\alpha\) must belong to \(S_{\infty}\), but this contradicts Lemma 8.1. Therefore \(X\) has no wandering boundary component, so \(f\) must return to \(X\). By Theorem 7.8, the first return \(f^{n}\) up to isotopy preserves a hyperbolic metric on \(X\) on which it acts as an isometric translation.
Our next goal is to show that there is no self-adjacent component of the decomposition. We need a preliminary result.
**Lemma 8.9**.: _A component \(X\) of \(S_{0}\) can share at most one boundary with \(S_{\mathrm{per}}\), and if \(X\) shares a boundary \(\alpha\) with \(S_{\mathrm{per}}\), then every non-contractible curve in \(X\) is homotopic to \(\alpha\)._
Proof.: We now know that \(X\) is \(f\)-periodic, so the proof is similar to that of Lemma 8.7: if either conclusion failed, we could construct, as there, a periodic curve \(\beta\) crossing a common boundary of \(X\) and \(S_{\mathrm{per}}\) essentially, which is impossible.
We can now show the no self-adjacency result.
**Lemma 8.10**.: _No component of the canonical decomposition is self-adjacent._
Proof.: Suppose \(X\) is a self-adjacent component. By Corollary 8.6 and Lemma 8.7, \(X\) is a component of \(S_{\infty}\). Since \(S_{\infty}\) has no compact boundary components by Lemma 8.3, there is a strip component \(Y\) of \(S_{0}\), with core line \(\ell\) and boundary lines \(\ell_{1}\) and \(\ell_{2}\), so that \(X\) is the only neighbor of \(Y\). By Lemma 8.8, there is some \(n\geq 1\) so that \(f^{n}\) fixes \(\ell_{1}\) (and therefore \(\ell_{2}\), since they are homotopic). Since \(Y\) is a strip, an orientation on \(\ell\) will induce orientations on \(\ell_{1}\) and \(\ell_{2}\). Now pick a curve \(\alpha\) contained in \(X\cup Y\) and intersecting both \(\ell_{1}\) and \(\ell_{2}\) once. Then \(\alpha\cap X\) is an arc from \(\partial X\) to \(\partial X\), and by Proposition 7.4 it is wandering. This implies that \(f^{n}\) acts as a positive translation on \(\ell_{1}\) (with respect to the chosen orientation) if and only if it acts as a positive translation on \(\ell_{2}\). But then we can homotope \(f^{n}\) on the strip, leaving it invariant on its boundary, so that it is a translation, and therefore \(\alpha\) is wandering, a contradiction.
An easy consequence of the previous lemma is the fact that there are no components of the decomposition which are strips:
**Corollary 8.11**.: _No component of \(S_{0}\), \(S_{per}\) or \(S_{\infty}\) is a strip._
Proof.: Since by construction every component of \(S_{per}\) and \(S_{\infty}\) contains an essential (possibly peripheral) curve, if there is a strip, it must be a component of \(S_{0}\). But by construction, a strip can arise only if a component of \(S_{per}\) or \(S_{\infty}\) is self-adjacent, which is impossible by Lemma 8.10.
The next result is a description of the possible components of \(S_{0}\).
**Lemma 8.12**.: _The following are the topological possibilities for a component \(X\) of \(S_{0}\):_
1. _A closed disk with at least three points removed from the boundary. In this case, at most one neighbor of_ \(X\) _is in_ \(S_{per}\)_._
2. _A once-punctured closed disk with at least one point removed from the boundary. In this case, all neighbors of_ \(X\) _are in_ \(S_{\infty}\)_._
3. _A cylinder with one compact boundary and at least one point removed from the other boundary (a crown). In this case,_ \(X\) _has exactly one neighbor in_ \(S_{per}\) _along its compact boundary._
Proof.: Note first that \(X\) is not a sphere, being a subsurface of \(S\). It also cannot be a disk or a once-punctured disk, by the construction of \(S_{\infty}\) and \(S_{per}\). Moreover by construction \(X\) has non-negative Euler characteristic, thus \(\chi(X)\) is either \(1\) or \(0\).
If \(\chi(X)=1\), then \(X\) is a disk with \(n\) points removed from its boundary. If \(n\) were one, the boundary of \(X\) would be homotopically trivial, which is impossible. Moreover, \(n\neq 2\) by Corollary 8.11, so \(n\geq 3\). Since the boundary components of \(X\) are non-compact, \(X\) cannot border annular components of \(S_{per}\), so it has at most one neighbor in \(S_{per}\).
If \(\chi(X)=0\), then the interior of \(X\) is an open annulus. We already know that \(X\) cannot be the once-punctured closed disk, and it cannot be a closed annulus. Thus \(X\) is either a once-punctured disk or a closed annulus, in each case with at least one point removed from its boundary. If it is an annulus with points removed from both boundary components, then it would contain an essential non-peripheral curve, contradicting Proposition 6.5. If \(X\) is a once-punctured disk with points removed from its boundary, then \(X\) has a non-contractible curve which is non-peripheral, so it cannot border \(S_{\mathrm{per}}\). If \(X\) is an annulus with points removed from one of its boundary components, then the core curve of \(X\) is not homotopic to the non-compact boundary components of \(X\), so \(X\) has at most one neighbor in \(S_{\mathrm{per}}\), joined along its only compact boundary.

Figure 17. The topological possibilities for the components of \(S_{0}\)
We end this section with some examples showing the optimality of some of our results. First, we note that all topological possibilities from Lemma 8.12 occur for components of \(S_{0}\), and that there can be pairs of pants components of \(\tilde{S}_{0}\). Indeed, Figure 18 exhibits examples of tame mapping classes with different topological types for the components of \(\tilde{S}_{0}\) and \(S_{0}\). In each case, the map is a puncture shift in each shaded strip, in the direction indicated by the arrow, and it's the identity outside the strips.
Since an extra tame map returns to each component of \(S_{0}\), \(S_{\mathrm{per}}\) and \(S_{\infty}\), one might wonder if there is a uniform returning time for all components. The answer is negative, as shown by the following lemma. Moreover, given a component of \(S_{\infty}\), there is not always a uniform power of \(f\) fixing all of its boundary components at once.
**Lemma 8.13**.: _There is an extra tame map for which there is no uniform returning time for all components of the canonical decomposition. Furthermore, the map can be chosen so that some component of \(S_{\infty}\) has no uniform returning time for its boundary components._
Proof.: Consider the bordered surface
\[P=(\mathbb{R}_{\geq 0}\times\mathbb{R})\smallsetminus\bigcup_{n\geq 1}B_{n},\]
where \(B_{n}\) is the open disk of center \((n,0)\) and radius \(\frac{1}{4}\). Let \(a_{n}\) be the oriented segment of the \(x\)-axis from \(B_{n}\) to \(B_{n+1}\) and \(a_{0}\) the oriented segment of the \(x\)-axis from the \(y\)-axis to \(B_{1}\).
Figure 18. Examples of extra tame mapping classes
Define the map \(\psi:\pi_{1}(P)\to\mathbb{Z}\) by
\[\psi(\alpha)=\sum_{n\geq 0}2^{n}\hat{\iota}(\alpha,a_{n}),\]
where \(\hat{\iota}(\cdot,\cdot)\) is the algebraic intersection form.
Let \(p:\Sigma\to P\) be the associated cover and \(f\) a generator of the deck group. Then \(\Sigma\) is a bordered surface whose boundary is a union of lines, given by lifts of the \(y\)-axis and of the curves \(\partial B_{n}\). Note that, for any \(n\), the number of lifts of \(\partial B_{n}\) is \(2^{n}-2^{n-1}\) and they are cyclically permuted by \(f\). So there is no uniform power of \(f\) fixing all boundary components of \(\Sigma\).
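For concreteness, the count of lifts can be spelled out as follows (under one fixed choice of orientations, which we assume; the signs depend on the convention but the conclusion does not). The curve \(\partial B_{n}\) is disjoint from every arc \(a_{m}\) with \(m\neq n-1,n\) and crosses \(a_{n-1}\) and \(a_{n}\) once each, with opposite signs, so

\[\psi(\partial B_{n})=\pm\big(2^{n}-2^{n-1}\big)=\pm 2^{n-1}.\]

The components of \(p^{-1}(\partial B_{n})\) then correspond to the cosets of \(\psi(\pi_{1}(\partial B_{n}))=2^{n-1}\mathbb{Z}\) in \(\mathbb{Z}\), which gives the \(2^{n}-2^{n-1}\) lifts above, cyclically permuted by the deck transformation \(f\).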
To turn this into an example of an extra tame map on a (borderless) surface, glue to each boundary component of \(\Sigma\) a copy of the thrice-punctured half-plane \(H\). We get a surface \(S\) and a homeomorphism \(\bar{f}\) of \(S\) so that \(\bar{f}|_{\Sigma}\) is \(f\) and for every copy of \(H\), the first return map is the identity.
The map \(\bar{f}\) is extra tame and:
* \(S_{\infty}=\Sigma\) and \(\bar{f}|_{S_{\infty}}=f\), so there is no bound on the first returning time of the boundary components of \(S_{\infty}\);
* \(S_{\text{per}}\) is the union, for every copy of \(H\), of a disk with three punctures. The first returning time of a component \(X\) is the same as the first returning time of the boundary component of the copy of \(H\) in which \(X\) is contained, so these are also unbounded;
* \(S_{0}\) is the union, for every copy of \(H\), of an annulus with a point removed from the boundary, and again there is no uniform bound on the first returning times of components of \(S_{0}\).
### Proof of main theorem
Our main theorem is now an easy consequence of the results proved so far.
**Theorem 8.14**.: _Let \(f\) be an extra tame map of a surface \(S\) of infinite type. There is a canonical decomposition of \(S\) into three \(f\)-invariant subsurfaces \(S_{\text{per}}\), \(S_{\infty}\) and \(S_{0}\) and a hyperbolic metric on \(S\) such that for every component \(X\) of \(S_{\text{per}}\), \(S_{\infty}\) and \(S_{0}\), \(f\) returns to \(X\) and is isotopic to:_
* _a periodic isometry, if_ \(X\subset S_{\text{per}}\cup S_{0}\)_,_
* _an isometric translation, if_ \(X\subset S_{\infty}\)_._
Figure 19. The surface \(P\) with the arcs \(a_{n}\) in the proof of Lemma 8.13
_Furthermore, components of \(S_{0}\) contain no essential non-peripheral curves and at most one essential (peripheral) curve._
Proof.: Let \(S_{0}\), \(S_{\mathrm{per}}\) and \(S_{\infty}\) be given by Proposition 6.1. The hyperbolic structure on \(S\) is given by gluing the hyperbolic structures provided by Lemmas 8.5 and 8.4 and Theorem 7.8: since there are no strips (Corollary 8.11), the hyperbolic structure contains no funnels or half-planes. Moreover, the same results guarantee that for every component of the decomposition the first return map of \(f\) exists and is isotopic to a periodic isometry or an isometric translation as required.
### Constructing extra tame maps
Theorem 8.14 shows that every extra tame map is obtained by gluing together translations and periodic maps supported on disjoint subsurfaces, not accumulating anywhere. If we want to construct extra tame maps by following this procedure, we need to be careful when two supporting subsurfaces share a compact boundary component: we have to make sure that the gluing doesn't create a Dehn twist.
On the other hand, there is no need for special care when gluing along lines. More precisely:
**Lemma 8.15**.: _Let \(\{\Sigma_{i}\}_{i\in I}\) be a decomposition of a surface \(S\) into essential subsurfaces without self-adjacency, not accumulating anywhere and without compact boundary components. For every \(i\), let \(f_{i}\) be either a periodic map or a translation of \(\Sigma_{i}\), and assume that for every line \(\ell\subset\partial\Sigma_{i}\cap\partial\Sigma_{j}\), \(f_{i}(\ell)=f_{j}(\ell)\). Then there is an extra tame mapping class \(f\) on \(S\) such that the restriction of \(f\) on each \(\Sigma_{i}\) is properly homotopic to \(f_{i}\)._
Proof.: Define \(S^{\prime}\) to be the surface, homeomorphic to \(S\), given by inserting strips at each line in the boundary of the decomposition. More precisely, we take the disjoint union of the \(\Sigma_{i}\) and of a strip \(S_{\ell}=\ell\times[0,1]\) for every \(\ell\) in the boundary, and if \(\ell\subset\partial\Sigma_{i}\cap\partial\Sigma_{j}\), we glue \(\ell\times\{0\}\) with the copy of \(\ell\) in \(\Sigma_{i}\) and \(\ell\times\{1\}\) with the copy of \(\ell\) in \(\Sigma_{j}\). Define then \(f_{\ell}\) on \(S_{\ell}\) by linearly interpolating \(f_{i}\) and \(f_{j}\):
\[f_{\ell}(p,t)=(1-t)f_{i}(p)+tf_{j}(p)\]
where we have fixed an identification of \(\ell\) with \(\mathbb{R}\). Then we obtain a homeomorphism \(f\) of \(S^{\prime}\) by gluing the \(f_{i}\) and the \(f_{\ell}\). We claim that \(f\) is extra tame.
Indeed, let \(\alpha\) and \(\beta\) be two curves in \(S^{\prime}\). By the properties of the decomposition, and up to modifying the curves by a homotopy, we know that there are finitely many indices \(j\in I\) and lines \(\ell\) in the boundary of the decomposition so that \(\alpha\cap\Sigma_{j}\), \(\beta\cap\Sigma_{j}\), \(\alpha\cap S_{\ell}\) or \(\beta\cap S_{\ell}\) are not empty. By tameness of the \(f_{j}\),
\[i(f_{j}^{n}(\alpha\cap\Sigma_{j}),\beta\cap\Sigma_{j})\]
is uniformly bounded and
\[i(f_{\ell}^{n}(\alpha\cap S_{\ell}),\beta\cap S_{\ell})\]
is bounded by the number of intersections of \(\alpha\) and \(\beta\) with the boundary of the decomposition, so it is also uniformly bounded. Therefore \(i(f^{n}(\alpha),\beta)\) is uniformly bounded and hence \(f\) is tame. A similar argument shows the finiteness of the limit set of \(\alpha\).
Finally, note that if we glue periodic maps and translations and we obtain an extra tame map, the canonical decomposition might not coincide with the collection of supporting subsurfaces of the maps we are gluing. For instance, let \(S=\mathbb{R}^{2}\smallsetminus\mathbb{Z}^{2}\), \(\Sigma_{1}=S\cap\{y\geq\frac{1}{2}\}\) and \(\Sigma_{2}=S\cap\{y\leq\frac{1}{2}\}\). Let \(f_{1}\) be the map \((x,y)\mapsto(x+1,y)\) and \(f_{2}\) a map homotopic to \((x,y)\mapsto(x+2,y)\) and agreeing with \(f_{1}\) on \(\partial\Sigma_{1}=\partial\Sigma_{2}\). Then the map obtained by gluing \(f_{1}\) and \(f_{2}\) is a translation on \(S\). In particular, \(S=S_{\infty}\).
2301.04408 | GPT as Knowledge Worker: A Zero-Shot Evaluation of (AI)CPA Capabilities | The global economy is increasingly dependent on knowledge workers to meet the
needs of public and private organizations. While there is no single definition
of knowledge work, organizations and industry groups still attempt to measure
individuals' capability to engage in it. The most comprehensive assessment of
capability readiness for professional knowledge workers is the Uniform CPA
Examination developed by the American Institute of Certified Public Accountants
(AICPA). In this paper, we experimentally evaluate OpenAI's `text-davinci-003`
and prior versions of GPT on both a sample Regulation (REG) exam and an
assessment of over 200 multiple-choice questions based on the AICPA Blueprints
for legal, financial, accounting, technology, and ethical tasks. First, we find
that `text-davinci-003` achieves a correct rate of 14.4% on a sample REG exam
section, significantly underperforming human capabilities on quantitative
reasoning in zero-shot prompts. Second, `text-davinci-003` appears to be
approaching human-level performance on the Remembering & Understanding and
Application skill levels in the Exam absent calculation. For best prompt and
parameters, the model answers 57.6% of questions correctly, significantly
better than the 25% guessing rate, and its top two answers are correct 82.1% of
the time, indicating strong non-entailment. Finally, we find that recent
generations of GPT-3 demonstrate material improvements on this assessment,
rising from 30% for `text-davinci-001` to 57% for `text-davinci-003`. These
findings strongly suggest that large language models have the potential to
transform the quality and efficiency of future knowledge work. | Jillian Bommarito, Michael Bommarito, Daniel Martin Katz, Jessica Katz | 2023-01-11T11:30:42Z | http://arxiv.org/abs/2301.04408v1 | # GPT as Knowledge Worker:
###### Abstract
The global economy is increasingly dependent on knowledge workers to meet the needs of public and private organizations. While there is no single definition of knowledge work, organizations and industry groups still attempt to measure individuals' capability to engage in it. The most comprehensive assessment of capability readiness for professional knowledge workers is the Uniform CPA Examination developed by the American Institute of Certified Public Accountants (AICPA). In this paper, we experimentally evaluate OpenAI's text-davinci-003 and prior versions of GPT on both a sample Regulation (REG) exam and an assessment of over 200 multiple-choice questions based on the AICPA Blueprints for legal, financial, accounting, technology, and ethical tasks. First, we find that text-davinci-003 achieves a correct rate of 14.4% on a sample REG exam section, significantly underperforming human capabilities on quantitative reasoning in zero-shot prompts. Second, text-davinci-003 appears to be approaching human-level performance on the Remembering & Understanding and Application skill levels in the Exam absent calculation. For best prompt and parameters, the model answers 57.6% of questions correctly, significantly better than the 25% guessing rate, and its top two answers are correct 82.1% of the time, indicating strong non-entailment. Finally, we find that recent generations of GPT-3 demonstrate material improvements on this assessment, rising from 30% for text-davinci-001 to 57% for text-davinci-003. These findings strongly suggest that large language models have the potential to transform the quality and efficiency of future knowledge work.
keywords: knowledge work, artificial intelligence, natural language processing, accounting, finance, law
## Introduction
Knowledge work is an increasingly important segment of the global economy, with qualified professionals providing services in areas such as law, finance, accounting, economics, and technology. Leading management theorists began exploring definitions of "knowledge workers" and approaches for their training nearly seven decades ago [1; 2; 3]. Since then, the percentage of the population that "thinks for a living" has grown dramatically. As of 2021, the Big 4 - Deloitte, EY, PWC, and KPMG - alone employ over one million people [4]; some definitions of knowledge work suggest that the true number of knowledge workers is in the hundreds of millions or even billions [5].
As their roles and activities may generate substantial value - and liability - many organizations require these knowledge workers to demonstrate their preparedness through comprehensive assessments, such as the so-called CPA, CFA, or Bar exams. While there is no universally-accepted definition of knowledge work [6], public accounting is a multidisciplinary practice that requires legal, financial, accounting, auditing, technology, and ethical knowledge and skills - all domains clearly within the scope of knowledge work. As the test used to assess the readiness of candidates for this profession, the American Institute of Certified Public Accountants (AICPA) Uniform CPA Examination ("CPA Exam" or "Exam") is the most comprehensive, well-known assessment of knowledge work readiness [7]. As compared to other assessments or examinations, the CPA Exam is broader, more practice-based, and more regularly updated to meet the changing landscape. This trend is perhaps best demonstrated by the fact that the commercial organizations most associated with the AICPA - the Big 4 - have accumulated practically every type of knowledge work under their umbrella, including even cybersecurity and traditional legal services [8; 9; 10].
The AICPA and the National Association of State Boards of Accountancy have undertaken a joint effort to ensure that the CPA licensure model reflects the "rapidly changing skills and competencies the practice of accounting requires today and will require in the future" [11]. The Exam is produced by the AICPA based on input from stakeholders in the professional services industry, academia, and governmental agencies. The Exam has been continually updated to meet changing regulations, standards, technology, and market expectations for over 100 years [7; 12]. While the Exam continues to evolve [12; 13], it was historically adapted from the best-known educational framework,
Bloom's cognitive taxonomy [2], to organize the assessment of practical, professional requirements into four skill levels [14]. Though the exam will undergo significant structural changes in 2024, the current implementation of the exam has been divided into four sections: Auditing and Attestation (AUD), Business Environment and Concepts (BEC), Financial Accounting and Reporting (FAR), and Regulation (REG). These four sections cover concepts, laws, rules, and relationships in legal, financial, accounting, and technology domains, common denominators among many knowledge professions.1
Footnote 1: Interested readers should review Table 4 for the list of all concept areas.
Previous decades of research into artificial intelligence (AI) have not yielded general models capable of performing knowledge work. While point solutions in many legal, financial, or accounting domains have shown value or reached adoption, there has been no demonstration of AI that can span multiple task types in professional services. This gap can likely be attributed to multiple reasons, including the breadth and depth of knowledge required to be indexed and recalled, as well as the complexity of translating this knowledge into work product in the context of realistic client engagements. To make matters more difficult, professional services like accounting, finance, and law also often require a combination of quantitative and qualitative skills.
Recent research has, however, shown potential to address at least some of these capability gaps. Advances in natural language processing (NLP), machine learning (ML), and computing over the last decade have produced material improvements in state-of-the-art performance on linguistic tasks that require deeper semantic understanding or feature more complex syntax [15][16][17]. More importantly, some types of models have begun to demonstrate the ability to address dramatically different task types, sometimes even in zero-shot use cases where there is no additional fine-tuning or customization. While neural network research is not new [18][19], the rate of progress has increased dramatically since 2013, and, in particular, transformer-based architectures [20] have been shown to produce previously-unseen capabilities to generalize across tasks [21][22][23][24][25].
The most accessible and well-known of these transformer-based models is OpenAI's family of large language models known as Generative Pre-trained Transformer or "GPT" [22][26]. The latest versions of GPT, often referred to as GPT-3 or GPT-3.5, are proprietary large language models, and these models are only available to OpenAI customers. One benefit of this approach is that it provides an important layer of legal and ethical moderation, as well as simplifying the user experience, such as by preprocessing input text or images. As of this publication, OpenAI provides API endpoints for text completion, code completion, image generation, and embedding generation tasks. OpenAI has also recently unveiled ChatGPT, a public-facing "chatbot" built on GPT-3.5, which reportedly generated over 1M user sign-ups within just a few days of release.
As GPT-3 and its derivatives are proprietary machine learning models in production within a reinforcement learning platform, we cannot precisely describe them. However, based on GPT-3's original publication in July 2020 and subsequent material, these models are likely derived from an autoregressive language model with 175 billion parameters, 96 layers, and a batch size of 3.2M. OpenAI has launched or published a number of GPT-3 derivative models, most notably InstructGPT-3 and Codex 12B, which are colloquially referred to as GPT-3.5. The most advanced model in production in its API is text-davinci-003, an improvement on text-davinci-002, which is an InstructGPT model based on code-davinci-002, a base model for pure code-completion tasks, per OpenAI documentation. Our results in this paper are primarily based on text-davinci-003, as detailed in Section 4, though we also include results from older models for comparison and forecasting.
While text-davinci-003 and ChatGPT have demonstrated state-of-the-art performance on a wide range of tasks in zero-shot and few-shot contexts, there was previously little reason to believe that these models could perform even reasonably well in general assessments across the domains of finance, law, and accounting. However, in recent prior work on the Bar Exam [27], the authors have shown that text-davinci-003 could achieve near-parity with human test-takers in two of seven sections of the Multistate Bar Exam (MBE); more strikingly, generation-over-generation model performance suggests that an LLM like GPT-3.5 may be capable of passing the Bar Exam in the near future.
While the Bar Exam offered one measure of performance for GPT-3.5, it is arguably not the ideal instrument to evaluate readiness for multidisciplinary knowledge work. As noted, the CPA Exam requires a wider range of knowledge, including not only law, but also finance, accounting, technology, and ethics. Therefore, in order to evaluate whether and how current state-of-the-art models in AI might be applied to knowledge work, we experimentally evaluate the performance of "GPT as knowledge worker" through the skills and concepts outlined in the CPA Exam. Our analysis suggests both areas where GPT-3.5 may be useful today and areas where substantial research and development is still required.
### AICPA Exam
The Uniform CPA Examination is a modern, computerized assessment based on psychometric and statistical techniques. While prior paper-based generations of the Exam might have been compared to traditional linear exams, the current Exam is a dynamic, adaptive exam [28], best compared to exams like the current GRE or GMAT. Linear exams present the test-taker with a preset sequence of test questions, while dynamic exams adapt to each test-taker in response to the answers provided in prior questions.
| **Section** | **Student Pass Rate** |
| --- | --- |
| AUD | 48.7% |
| BEC | 59.7% |
| FAR | 44.9% |
| REG | 61.1% |

Table 1: Passage rates of students in 2022 as reported by the AICPA [29].
| **Skill Level** | **Description** |
| --- | --- |
| Evaluation | The examination or assessment of problems, and use of judgment to draw conclusions. |
| Analysis | The examination and study of the interrelationships of separate areas in order to identify causes and find evidence to support inferences. |
| Application | The use or demonstration of knowledge, concepts, or techniques. |
| Remembering & Understanding | The perception and comprehension of the significance of an area utilizing knowledge gained. |

Table 2: AICPA Uniform CPA Examination Skill Levels
| **Section** | **Content Areas** |
| --- | --- |
| Auditing and Attestation (AUD) | Ethics, Professional Responsibilities and General Principles; Assessing Risk and Developing a Planned Response; Performing Further Procedures and Obtaining Evidence; Forming Conclusions and Reporting |
| Business Environment and Concepts (BEC) | Enterprise Risk Management, Internal Controls and Business Processes; Economics; Financial Management; Information Technology; Operations Management |
| Financial Accounting and Reporting (FAR) | Conceptual Framework, Standard-Setting and Financial Reporting; Select Financial Statement Accounts; Select Transactions; State and Local Governments |
| Regulation (REG) | Ethics, Professional Responsibilities and Federal Tax Procedures; Business Law; Federal Taxation of Property Transactions; Federal Taxation of Individuals; Federal Taxation of Entities |

Table 4: Uniform CPA Examination Blueprints - Content Areas
| **Skill** | **Area** | **Content** | **Task** |
| --- | --- | --- | --- |
| Remembering & Understanding | Internal Controls | Sarbanes-Oxley Act of 2002 | Identify and define key corporate governance provisions of the Sarbanes-Oxley Act of 2002. |
| Application | Internal Controls | Sarbanes-Oxley Act of 2002 | Identify regulatory deficiencies within an entity by using the requirements associated with the Sarbanes-Oxley Act of 2002. |

Table 3: Example AICPA Uniform CPA Examination Tasks
The Examination is divided into four sections that test-takers sit for independently: Auditing and Attestation (AUD), Business Environment and Concepts (BEC), Financial Accounting and Reporting (FAR), and Regulation (REG). Each section of the Exam is divided up into at least four testlets that feature scenarios, multiple choice questions, calculated amounts, short answer, and related evidence and research material. The passage rates of Exam sections are presented in Table 1; the AICPA does not publish statistics related to per-question or per-section test-taker accuracy.
By its very design, the Exam is meant to be a practical assessment of real-world tasks and requisite skills [11, 28]. It rigorously assesses candidates on their readiness across a broad range of concepts and skill levels progressing through (i) Remembering & Understanding, (ii) Application, (iii) Analysis, and (iv) Evaluation.
The overall design of the Exam is best viewed through the Uniform CPA Examination Blueprints ("Blueprints") [14], which document how concepts and tasks are adapted from Bloom's taxonomy of the cognitive domain [2]. An overview of the Exam and sample skills and tasks are provided in Tables 2, 3, and 4. The Blueprints are regularly updated by the AICPA and are the most detailed, representative outline of the test's construction.
Importantly, many of the tasks detailed in the Blueprints include an element of arithmetic. For example, many questions that include workpapers or sample financial statements expect the test-taker to first determine which numbers to include or exclude in arithmetic expressions, then to evaluate the resulting expression to calculate a specific amount. Sometimes, these expressions are as simple as \(A=L+E\), but in many cases, they involve more complex expressions based on tables with dozens of numbers and related materials. Based on prior research and experience with LLMs, we strongly suspected that GPT-3.5 would struggle with zero-shot quantitative reasoning in this context.
### Data
While there is an active body of research on quantitative reasoning with fine-tuning or few-shot contexts [30, 31, 32, 33], we constrain our results in this study to zero-shot prompts to better assess the "intrinsic" capability of these models. Therefore, we prepared two separate assessments to allow us to isolate the arithmetic or quantitative capabilities from other elements of the Exam.
#### Assessment 1: Sample Exam - Regulation
The first assessment is intended to approximate the real Uniform CPA Examination using the AICPA's online, publicly-available sample exams. These tests "include two multiple-choice testlets and three task-based simulation testlets for [...] Auditing and Attestation (AUD), Financial Accounting and Reporting (FAR) and Regulation (REG);" the fourth section, BEC, is shorter. Between AUD, FAR, and REG, we utilize the REG section as it contains the most balanced distribution of skill types and quantitative and qualitative reasoning. Therefore, a test session of the REG exam as provided on the AICPA's site was transcribed on January 3rd, 2023, including correct answers. All questions are formatted as simple text or, where evidence or workpapers are formatted in tables or lists, as Markdown.
This process results in 40 test questions across five testlets. Two of these five testlets consist of multiple-choice questions, with a total of 15 questions ranging from four to six options each. Of the remaining 25 questions, 24 require the test-taker to indicate the correct financial amount and one requires the test-taker to research authoritative material made available within the exam. While we cannot redistribute these test questions directly, interested readers can directly access and take the AICPA's online sample exams at no cost.
A partially-redacted sample question from this assessment is provided for reference below:
### Assessment 1: Sample Question
Question: All taxpayers file their Form 1040 using the tax filing status of single. Assume that [...].
Situation:
$6,000 - Loss on sale of [...]
$10,000 - Contribution to the capital [...]
$3,000 - Write-off of a worthless [...]
What is the taxpayer's adjusted gross income? Answer: $65,000
#### Assessment 2: Synthetic MCQ Assessment
As noted above, the Uniform CPA Examination is organized around Bloom's cognitive taxonomy [2], which is a widely-adopted framework for structuring learning objectives and capabilities. The taxonomy is generally conceptualized as a pyramid divided into six levels: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation or Creation. As noted above in Table 2, the AICPA has adapted these skill levels into four simpler groups. The top two levels - Evaluation and Analysis - not only most frequently feature arithmetic, but in practice, are also frequently the most nuanced, contextual tasks that real professionals address.
As an example, tasks like "Evaluate the reasonableness of significant accounting estimates [...]" are ones for which, for legal and ethical reasons, human oversight will likely remain necessary.
Therefore, we focused this second assessment on the foundational levels of the AICPA's skill pyramid - Remembering & Understanding and Application. To do so, we reviewed every task in the AICPA's Blueprints, dated October 18, 2021, to identify all relevant tasks. For each task, the lead author, a CPA, prepared at least one question to address each task and skill level identified. In sections where there were fewer than 50 relevant Blueprint tasks, we randomly sampled tasks and added additional questions to ensure that all sections had at least 50 samples. While this means that the calculation of overall accuracy
rate overweights sections such as BEC, we are not focused on test passage _per se_ in this research and therefore prefer breadth and power.
These questions have been prepared, to the best of our abilities, to mimic the nature and difficulty of real questions on the Exam. In addition to reviewing material provided by the AICPA itself, the authors also reviewed material and sample questions prepared by McGraw-Hill Education and Becker Professional Education to ensure that our test questions were at least as difficult and broad as theirs. All questions were drafted solely by the authors, and a sample question from each section of this assessment is provided for reference below.
### Assessment 2: Synthetic REG Question
Question: Which of the following types of contract does not require a written element in order to be enforceable?
A. Contracts for the sale of goods for $500 or more
B. Contracts to act as surety
C. Contracts for the sale of a house
D. Contracts for leases of land for less than one year
Answer: D
### Assessment 2: Synthetic BEC Question
Question: Which of the following elements is not part of the formula for calculating the cost of retained earnings using the Capital Asset Pricing Model?
A. The risk-free rate
B. The pre-tax cost of long-term debt
C. The company's beta coefficient
D. The market risk premium
Answer: B
### Assessment 2: Synthetic FAR Question
Question: Which of the following investment types is eligible to be reported in the financial statements at amortized cost?
A. Available-for-sale equity securities
B. Available-for-sale debt securities
C. Held-to-maturity debt securities
D. Trading equity securities
Answer: C
### Assessment 2: Synthetic AUD Question
Question: Which of the following disclosures related to the fair value of investments in securities is required for a nonissuer?
A. Purchases and issuances for each class of investments
B. Rollforward of recurring level 3 fair value measurements
C. Disclosures for financial instruments not measured at fair value
D. The range and weighted average of significant unobservable inputs
Answer: A
These questions, like natural language in the law itself, can be subject to pedantic interpretation; for example, in the Auditing and Attestation (**AUD**) question above, an experienced practitioner might qualify choice B by stating that it depends on whether it's a "full rollforward" or a limited number of separate elements of the rollforward. Similar to the actual CPA Exam, some of our questions may require the selection of the "best" option.
In total, we produced 208 questions across the four sections of the Exam. The distribution of these questions is detailed in Table 5 below. All questions are available in the online SI on GitHub. Like the AICPA's exam designers themselves, we expect that there will be issues with the design or scoring of our questions, and we encourage readers to submit additional questions or suggested clarifications via corresponding email or GitHub. As errata may be detected or new questions accepted, updated results may be available in the online SI.
### Methods
In prior work on the Bar Exam [27], we outlined a method for experimentally evaluating OpenAI's models. For multiple choice question (MCQ) assessments in this paper, we follow this approach as closely as possible; calculated amounts and short answers are compared to the correct answer after stripping and reformatting answers. For example, \((10,000)\), \((10000)\), and \(-10,000\) are identical in the automated scoring of the model's responses.2
Footnote 2: Parentheses are used as shorthand in the accounting industry for negative amounts.
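To illustrate the normalization just described, here is a minimal sketch of how a calculated-amount response might be canonicalized before comparison; the function name and the exact rules are our own assumptions rather than the authors' scoring code.

```python
import re


def normalize_amount(raw: str) -> str:
    """Canonicalize a monetary answer so that '(10,000)', '(10000)' and '-10,000' compare equal.

    A sketch only: accounting-style parentheses denote negatives, currency symbols
    and thousands separators are stripped, and a trailing '.00' is dropped.
    """
    text = raw.strip().replace("$", "").replace(",", "")
    negative = text.startswith("(") and text.endswith(")")
    if negative:
        text = text[1:-1]
    text = text.lstrip("+")
    text = re.sub(r"\.0+$", "", text)  # "65,000.00" and "65000" compare equal
    if negative and not text.startswith("-"):
        text = "-" + text
    return text


assert normalize_amount("(10,000)") == normalize_amount("(10000)") == normalize_amount("-10,000")
```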
| **Assessment** | **Section** | **Number of Questions** |
| --- | --- | --- |
| 1 | REG | 40 |
| 2 | AUD | 54 |
| 2 | BEC | 50 |
| 2 | FAR | 51 |
| 2 | REG | 53 |

Table 5: Number of AICPA and author-prepared questions per section.
As in prior research, our evaluation is based on generating zero-shot prompts for the text-davinci-003 text completion API. Unlike in our prior research [27], we are able to fully open-source the source code and questions created in Assessment 2. While replication of results requires an OpenAI account and accepting the AICPA's terms of use, we have again attempted to provide researchers with as much replication detail as is possible under the circumstances.
#### Prompt Engineering and Responses
Our ability to understand these large language models is constrained both by our limited scientific understanding and the proprietary nature of OpenAI's models [27]. Despite this gap, many have documented that such models are unexpectedly sensitive to the specific prompts they are provided. The practice of writing such prompts is typically referred to as "prompt engineering," and details of prompt engineering are critical to replication of studies involving LLMs.
In this research, we experimented with answer types, contextualization, and justification in prompt engineering [34]. The following prompt variations were tested in at least one sample, although variations between Assessment 1 and Assessment 2 are required due to question types. For Assessment 1, the prompts define entailment or recall tasks, i.e., where the model must select the correct or most correct answer, as well as open-ended problems where the model must calculate the correct monetary amount. For Assessment 2, all questions are designed to evaluate traditional entailment tasks. Complete details are available in the source and data in the online SI.
1. Answer. Ask the model to answer with:
   * its best choice only.
   * its best and worst choices.
   * its top three rank-ordered choices.
2. Contextualization. Ask the model to imagine it is:
   * taking the CPA exam.
   * designing the CPA exam.
   * an accountant in the United States.
   * a tax professional in the United States.
   * a legal professional in the United States.
   * a Big 4 accountant in the United States.
3. Justification. Require the model to provide:
   * an explanation of its choices.
   * an explanation and citation to authority or source.
   * an explanation and citation within a specific list of authorities or sources.
Generated prompts are combined with questions and sent to the OpenAI API endpoint. The prompt and complete JSON response, including the OpenAI API request ID, are logged for all questions for all assessments. The API response is parsed and stored for scoring, qualitative analysis, and open source release. For scoring, no responses were manually altered or evaluated by humans.
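To make this step concrete, the following is a minimal sketch of sending a single zero-shot question to the legacy `openai` Python completions interface that served text-davinci-003 and logging the prompt together with the full JSON response. The prompt template, the `ask_question` helper, the `responses.jsonl` log file, and the placeholder API key are illustrative assumptions rather than the exact code released in the online SI.

```python
import json

import openai  # legacy (pre-1.0) OpenAI client exposing the Completions API

openai.api_key = "sk-..."  # placeholder; in practice read from the environment


def ask_question(question: str, choices: list[str],
                 temperature: float = 0.0, best_of: int = 1):
    """Send one zero-shot multiple-choice prompt and log the complete response."""
    # Illustrative zero-shot prompt: a contextualization line plus an answer instruction.
    prompt = (
        "You are taking the Uniform CPA Examination. Answer the following "
        "multiple-choice question with the letter of the best choice.\n\n"
        f"Question: {question}\n" + "\n".join(choices) + "\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=temperature,
        best_of=best_of,
        max_tokens=16,
    )
    # Log the prompt and the full JSON response (including the request id) for later scoring.
    with open("responses.jsonl", "a") as log:
        log.write(json.dumps({"prompt": prompt, "response": dict(response)}) + "\n")
    return response


answer = ask_question(
    "Which of the following investment types is eligible to be reported in the "
    "financial statements at amortized cost?",
    ["A. Available-for-sale equity securities", "B. Available-for-sale debt securities",
     "C. Held-to-maturity debt securities", "D. Trading equity securities"],
)["choices"][0]["text"].strip()
```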
In general, most prompts produced similar performance, clustering near the central tendency of 55% noted in Table 8. In a number of cases, contextualization or justification resulted in models that performed better on one section but worse on another section. Contextual variations suggest differences in the nature of advice between professions. Justification variations suggest differences in the complexity or state of codification across subject areas. Additional details, complete responses, and details regarding phenomena such as hallucination are provided in the SI.
#### Model (hyper)parameters
As the AICPA curriculum itself notes, many models are sensitive to small changes in their inputs, and LLMs are no different. In addition to prompt sensitivity, they are often highly sensitive to the parameters set in training and inference. While our ability to interpret results or identify all (hyper)parameters is limited by the proprietary nature of GPT, we did evaluate how altering some model parameters impacts the performance of the model. We do not vary the maximum token output or attempt nucleus sampling; however, we do evaluate the following parameters for at least one prompt:
1. temperature: Sampling temperature; 0.0 is deterministic, higher is more "random." We tested values in {0.0, 0.5, 1.0}.
2. best_of: "Generates [N] completions server-side and returns the "best" (the one with the highest log probability per token)." We tested values in {1, 2, 4}.
#### Fine-tuning and Historical Models
While OpenAI does provide an API for fine-tuning models including text-davinci-003, this publication is focused on the zero-shot performance of the model itself. Furthermore, based on prior experience in similar problems [27], we do not believe that fine-tuning a text completion model at small sample sizes would improve the models' performance. In some circumstances, others have found success in subsequent supervised or unsupervised re-training of some or all layers of an LLM [35][36], while others have documented circumstances in which fine-tuning results in unexplained model degradation. In our prior work [27], we noted a significant decrease in fine-tuned text-davinci-003 performance at the scale of our training data. While it is possible that this performance decrease is explained by the 50% head layer contraction required by OpenAI's API, we are unable to test further without access to details of fine-tuning or resulting weights.
In addition to text-davinci-003, OpenAI also makes a number of other models available through its API, including smaller and older iterations of the GPT family. We repeated our testing with the text-davinci-001, text-curie-001, text-babbage-001, and text-ada-001 models provided through the OpenAI API.
## Results
In total, across all prompts and parameters tested, we asked text-davinci-003 to answer over 50,000 questions in more than 700 independent assessment sessions. Details of the number of sessions and parameter values tested are described below in each assessment and in the online SI. The range of performance values observed over all experiments is summarized in Table 6.
### Assessment 1
As expected, the quantitative reasoning and arithmetic required in Assessment 1 resulted in substantially lower zero-shot performance than observed in Assessment 2. Out of 24 questions that required the test-taker to provide a numeric answer based on facts and work papers, GPT-3.5 frequently answered only one, two, or three questions correctly, resulting in mean correct rates across all parameters and prompts ranging from 5.7% to 9.4%. While it is arguable whether 0% is the true baseline for this task, it is clear that such zero-shot performance is not on par with human test-takers.
GPT-3.5 also struggled with arithmetic on the 15 MCQs on Assessment 1, scoring above random chance for some, but not all, prompts and parameters. As a number of questions include more than four choices, the true baseline rate of guessing is 22.67%, not 25%, but despite this, the best prompts and parameters were only 4-6% above the baseline rate.
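For reference, the baseline for uniform random guessing on such a mixed-option testlet is the mean of the reciprocal option counts,

\[\text{baseline}=\frac{1}{15}\sum_{i=1}^{15}\frac{1}{c_{i}},\qquad c_{i}\in\{4,5,6\},\]

which for the sample exam's particular mix of option counts (not reproduced here) evaluates to the 22.67% quoted above.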
Based on a qualitative review of these questions and the model's responses, we believe that performance could be improved somewhat in few-shot evaluations. Further, we believe that even some zero-shot performance improvements could be achieved by expanding the prompt to include "scratchpads" for common relationships or equations [37], as might be seen on problems that feature common workpapers like a statement of cash flows; however, in this paper, we focus on a zero-shot, "out-of-the-box" evaluation, and so these improvements are left for future research.
### Assessment 2
As discussed above, we created 208 MCQs for Assessment 2 to evaluate GPT-3.5's capabilities at the foundation of knowledge work. Each of these 208 questions has four options, and therefore, the baseline guessing rate for the model is exactly 25%. We assessed GPT-3.5 on the 208-question assessment exactly 180 times - three samples for each combination of 10 prompts, three temperature (\(T\)) values, and two best_of (\(n\)) parameter values (\(3\cdot 10\cdot 3\cdot 2\)). Across these 10 prompts, mean performance ranged between 51.1% and 56.9%, with a worst run of 50.0% (Prompt 13, \(T=1.0\)) and a best run of 57.6% (Prompt 16, \(T=0.0\)). We did not find significant differences between \(n\) parameter values in this assessment.
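For concreteness, the 180 sessions correspond to the simple grid sketched below; the prompt identifiers, the specific pair of best_of values, and the `run_assessment` helper are hypothetical stand-ins for the actual code and prompts in the online SI.

```python
import itertools

PROMPTS = [f"prompt_{i:02d}" for i in range(10)]  # 10 prompt templates (hypothetical identifiers)
TEMPERATURES = [0.0, 0.5, 1.0]                    # three temperature values
BEST_OF = [1, 2]                                  # two best_of values; the exact pair is an assumption
SAMPLES_PER_CELL = 3                              # three samples per combination

sessions = list(itertools.product(PROMPTS, TEMPERATURES, BEST_OF, range(SAMPLES_PER_CELL)))
assert len(sessions) == 180  # 10 * 3 * 2 * 3 assessment sessions, each covering all 208 questions

# for prompt_id, temperature, best_of, _ in sessions:
#     run_assessment(prompt_id, temperature, best_of)  # hypothetical runner over the 208 MCQs
```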
Table 7, Table 1, and Figure 1 show the performance of this best prompt and parameter value, including the average percentage of correct questions by section and the average passage rate for test-takers in 2022 as reported by [29]. Overall, GPT-3.5 demonstrates performance significantly in excess of guessing, achieving approximately 70% on questions about Business Environment and Concepts (BEC), 57% for Auditing and Attestation (AUD), 53% for Regulation (REG), and 51% for Financial Accounting and Reporting (FAR). Furthermore, as seen in prior research [27], GPT-3.5 demonstrates strong non-entailment performance as represented by its rank ordering of choices. The model's top two answers are correct over 82% of the time, significantly in excess of the 50% baseline.
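The "top two" figure is computed from the model's ranked choices; a helper along the following lines suffices. The data layout here is purely illustrative and does not reflect the structure of our result files in the SI.

```python
def accuracy_at_k(ranked_choices, correct_answers, k):
    """Fraction of questions whose correct answer appears among the top-k ranked choices."""
    hits = sum(1 for ranked, gold in zip(ranked_choices, correct_answers) if gold in ranked[:k])
    return hits / len(correct_answers)


# Toy example with two questions, each with four choices ranked best-first.
ranked = [["B", "C", "A", "D"], ["A", "D", "B", "C"]]
gold = ["C", "A"]
print(accuracy_at_k(ranked, gold, 1))  # strict accuracy: 0.5
print(accuracy_at_k(ranked, gold, 2))  # "top two" accuracy: 1.0
```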
While we did not qualitatively code all 208 questions for the applicable AICPA skill level, we did review all 53 questions from the Regulation section in Assessment 2. We found that at least 23 of the 53 questions (\(\approx\)43%) require some degree of Application or Analysis. While these skill levels may be subjective in the context of realistic questions, we encourage readers to examine the complete set of 208 questions in the SI for themselves and to self-assess their own performance to set expectations regarding task type and difficulty.
We do not have a head-to-head comparison between real test-takers and GPT-3.5 for Assessment 2. Based on our experience, however, we believe that these questions are at least as difficult as the real Remembering & Understanding and Application questions on the Exam. Further, the tasks tested in Assessment 2 also account for the vast majority of tasks and types of tasks covered in the AICPA Blueprints. In addition to reviewing models for single correct answers, some prompts also required models to provide explanations or justifications. We performed a qualitative review of explanations and justifications for a sample of sessions, and found that more than half of the model's correct answers were also correctly explained with the correct reference or authority. Interested readers are directed to the online SI for thousands of examples of responses from the model. Out of all explanations, including incorrect ones, explanations included at least one hallucinated reference or authority approximately 37% of the time. Research is ongoing on the optimal degree of hallucination and techniques for mitigating unwanted hallucination [38], and we will continue to explore these questions and applications in future work.
\begin{table}
\begin{tabular}{|c|c|c|c|} \multicolumn{4}{c}{Correct Rates by Question Type and Assessment} \\ \hline
**Assessment** & **Amount** & **MCQ** & **Short Answer** \\ \hline Assessment 1 & 5.7 - 9.4\% & 22.3 - 28.1\% & 0\% \\ \hline Assessment 2 & N/A & 50.0 - 57.6\% & N/A \\ \hline \end{tabular}
\end{table}
Table 6: Correct rates by question type and assessment as measured by all-experiment range of mean prompt performance between Assessment 1 and Assessment 2. Baseline for Multiple Choice is 22.67% for Assessment 1, 25% for Assessment 2. Description of best prompts and parameters is provided below and prompt details are available in SI.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Section** & **Accuracy** & **Accuracy - Top Two** \\ \hline AUD & 57.1\% & 84.9\% \\ \hline BEC & 69.7\% & 85.7\% \\ \hline FAR & 51.0\% & 82.4\% \\ \hline REG & 53.1\% & 75.8\% \\ \hline \end{tabular}
\end{table}
Table 7: Accuracy of GPT-3.5 by section of AICPA Exam Blueprints for best prompt and parameter, with the correct rate including the second-best answer shown in the second column. Passage rates are provided in Table 1 below for reference, but should not be directly compared with model accuracy rates for the reasons discussed above.
Figure 1: Performance of GPT-3.5 by section of AICPA Exam Blueprints for best prompt and parameter, with correct rate including second-best answer in dashed region. Error bars are \(\pm\)1 standard error of the mean. Note that GPT-3.5 is not assessed on Analysis or Evaluation tasks, unlike human test-takers, and that the percentage of questions correct does not scale linearly with score or passage.
Figure 2: Comparison of model performance across GPT-3 generations. For text-davinci-003, the average is reported across all runs; for other models, a subset of representative prompts and parameters were included. GPT-2 was unable to reliably respond to the prompt as instructed and questions were larger than its maximum input token length. More details are available in source and data in the online SI.
### GPT Model Progression
In prior work [27], we noted that text-davinci-003 demonstrated material improvements over prior generations of GPT models. In this work, we also compare our results against older or smaller GPT-3 models. Table 8 and Figure 2 summarize these findings, demonstrating a qualitatively identical story to our work on the Bar Exam. Of the older models, only text-davinci-001 exhibits the ability to follow instructions and answer above random chance, and between 001 and 003, the margin over random guessing has increased from less than 5% to over 30%.
## Conclusion and Future Work
In this paper, we document and develop two assessments of knowledge worker readiness based on the AICPA's Uniform CPA Examination Blueprints. Assessment 1 is a sample Regulation test as provided by the AICPA, including quantitative reasoning and calculations; Assessment 2 covers foundational skill levels, excluding quantitative reasoning and calculations, for all four sections of the Blueprints. In total, these assessments cover a broad, practical curriculum including law, finance, accounting, and technology. We then experimentally evaluate GPT-3.5 on these two assessments, including detailed steps to replicate this evaluation, and share source code and data for all questions not covered by copyright.
First, we find that text-davinci-003 achieves a correct rate of 14.4% on Assessment 1, significantly underperforming test-takers. As many authors have documented in research on large language models [31, 32, 33], arithmetic and quantitative reasoning are often outside the scope of zero-shot use cases, and these results are consistent with these prior findings.
As arithmetic and quantitative reasoning are the subjects of substantial active research, we look forward to exploring zero-shot approaches as new models or techniques become available. Further, as many industrial applications will support iterative or few-shot approaches, we are continuing to investigate applied use cases like the calculation of financial or operational metrics or the analysis of specific financial statements using more mature techniques like [39].
Second, we find that text-davinci-003 can achieve an accuracy of 57% on Assessment 2, significantly better than a 25% guessing rate, and approaching or on par with anecdotal test-taker performance. It also demonstrates strong non-entailment capabilities and improving explanation capabilities, as its top two answers are correct 82% of the time and explanations are correct more often than not. While this assessment is not identical to the CPA Exam and the AICPA does not publish directly comparable statistics, approximately 45-55% of test-takers fail the exams annually, as an indication of general difficulty. All questions in this assessment are available for readers to review and self-assess, and we encourage others to suggest improvements or perform their own assessment on this material.
Finally, as in prior research, we find that recent generations of GPT-3 demonstrate material improvements on this assessment. While text-ada-001 could barely follow instructions and text-davinci-001 only exceeded random chance by 5%, text-davinci-003 is now approaching human performance on this assessment.
As organizations and institutions around the world depend on knowledge workers to navigate an increasingly complex legal and financial landscape [40, 41], it is critical that we develop tools that can help safely and effectively meet this demand for knowledge work. Our findings strongly suggest that future large language models have the potential to transform the quality and efficiency of knowledge work at least as much as search engines did at the turn of the 21st century.
## Acknowledgments
Although the original draft of this paper was written by the authors, portions of this paper were fine-tuned by text-davinci-003 for formatting and clarity.
## Supplementary Information
Almost all of the material used in the creation and presentation of this research is available in the online Supplementary Information (SI) at the following URL:
[https://github.com/mjbommar/gpt-as-knowledge-worker](https://github.com/mjbommar/gpt-as-knowledge-worker).
|
2310.13621 | Principal 2-blocks with wreathed defect groups up to splendid Morita
equivalence | We classify principal $2$-blocks of finite groups $G$ with Sylow
$2$-subgroups isomorphic to a wreathed $2$-group $C_{2^n}\wr C_2$ with $n\geq
2$ up to Morita equivalence and up to splendid Morita equivalence. As a
consequence, we obtain that Puig's Finiteness Conjecture holds for such blocks.
Furthermore, we obtain a classification of such groups modulo $O_{2'}(G)$,
which is a pure group theoretical result and of independent interest. Methods
previously applied to blocks of tame representation type are used. They are,
however, further developed in order to deal with blocks of wild representation
type. | Shigeo Koshitani, Caroline Lassueur, Benjamin Sambale | 2023-10-20T16:12:43Z | http://arxiv.org/abs/2310.13621v5 | # Principal \(2\)-blocks with Wreathed defect groups up to Splendid Morita equivalence
###### Abstract.
We classify principal \(2\)-blocks of finite groups \(G\) with Sylow \(2\)-subgroups isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) with \(n\geq 2\) up to Morita equivalence and up to splendid Morita equivalence. As a consequence, we obtain that Puig's Finiteness Conjecture holds for such blocks. Furthermore, we obtain a classification of such groups modulo \(O_{2^{\prime}}(G)\), which is a purely group theoretical result and of independent interest. Methods previously applied to blocks of tame representation type are used; however, they are further developed in order to treat blocks of wild representation type in the present case.
Key words and phrases: wreathed \(2\)-group, Morita equivalence, splendid Morita equivalence, Puig's Finiteness Conjecture, principal block, trivial source module, \(p\)-permutation module, Scott module, Brauer indecomposability, decomposition matrix. 2010 Mathematics Subject Classification: 20C05, 20C20, 20C15, 20C33, 16D90. The first author was partially supported by the Japan Society for Promotion of Science (JSPS), Grant-in-Aid for Scientific Research (C)19K03416, 2019-2021. The second author was supported by the DFG SFB/TRR 195.
groups with Sylow \(2\)-subgroups isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) with \(n\geq 2\). We choose this defect group for its many similarities with the tame cases. In this respect, from the group theory point of view, we strongly rely on the facts that the wreathed \(2\)-groups \(C_{2^{n}}\wr C_{2}\) have \(2\)-rank \(2\) and an automorphism group which is a \(2\)-group, whereas from the modular representation theory point of view we rely on the Brauer indecomposability of Scott modules with wreathed vertices proved by the first author and Tuvay in [11].
In order to state our main results, we first need to introduce some notation. Given a finite group \(G\) and \(H\leq G\), we set \(\Delta H:=\{(h,h)\in G\times G\mid h\in H\}\) and we recall that the _Scott module_ of \(kG\) with respect to \(H\), denoted by \(\operatorname{Sc}(G,H)\), is, up to isomorphism, the unique indecomposable direct summand of the trivial \(kH\)-module induced from \(H\) to \(G\) with the property that the trivial \(kG\)-module is a constituent of its head (or equivalently of its socle). Furthermore, given an integer \(t\geq 0\) and a positive power \(q\) of a prime number, we let
\[\operatorname{SL}_{2}^{t}(q):=\{A\in\operatorname{GL}_{2}(q)\,|\,\det(A)^{2^ {t}}=1\}\ \ \text{and}\ \ \operatorname{SU}_{2}^{t}(q):=\{A\in\operatorname{GU}_{2}(q)\,|\,\det(A)^{2^ {t}}=1\}\,.\]
Now, in order to apply the previously developed methods, our first main result provides a classification of the finite groups \(G\) with a wreathed Sylow \(2\)-subgroup \(C_{2^{n}}\wr C_{2}\) (\(n\geq 2\)) modulo \(O_{2^{\prime}}(G)\), which is of independent interest.
**Theorem 1.1**.: _Let \(G\) be a finite group with a Sylow \(2\)-subgroup isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) for an integer \(n\geq 2\) such that \(O_{2^{\prime}}(G)=1\). Let \(q:=r^{f}\) denote a power of a prime number \(r\) for a positive integer \(f\geq 1\). Then one of the following holds:_
1. \(G\cong C_{2^{n}}\wr C_{2}\) _;_
2. \(G\cong(C_{2^{n}}\times C_{2^{n}})\rtimes\mathfrak{S}_{3}\) _;_
3. \(G\cong\operatorname{SL}_{2}^{n}(q)\rtimes C_{d}\) _where_ \((q-1)_{2}=2^{n}\) _and_ \(d\mid f\) _is odd;_
4. \(G\cong\operatorname{SU}_{2}^{n}(q)\rtimes C_{d}\) _where_ \((q+1)_{2}=2^{n}\) _and_ \(d\mid f\) _is odd;_
5. \(G\cong\operatorname{PSL}_{3}(q).H\) _where_ \((q-1)_{2}=2^{n}\)_,_ \(H\leq C_{(q-1,3)}\times C_{d}\) _and_ \(d\,|\,f\) _is odd;_
6. \(G\cong\operatorname{PSU}_{3}(q).H\) _where_ \((q+1)_{2}=2^{n}\)_,_ \(H\leq C_{(q+1,3)}\times C_{d}\) _and_ \(d\,|\,f\) _is odd._
This theorem, which we prove in Section 3, is a byproduct of Alperin-Brauer-Gorenstein's work [1] on finite groups with quasi-dihedral and wreathed Sylow \(2\)-subgroups.
Our second main result is then a classification of principal blocks with defect groups isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) with \(n\geq 2\).
**Theorem 1.2**.: _Let \(k\) be an algebraically closed field of characteristic \(2\) and let \(G\) be a finite group with a Sylow \(2\)-subgroup \(P\) isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) for a fixed integer \(n\geq 2\). Then, the following assertions hold._
1. _The principal_ \(2\)_-block_ \(B_{0}(kG)\) _of_ \(G\) _is splendidly Morita equivalent to the principal_ \(2\)_-block_ \(B_{0}(kG^{\prime})\) _of a finite group_ \(G^{\prime}\) _belonging to precisely one of the following families of finite groups:_
   * (W1(n)) \(C_{2^{n}}\wr C_{2}\) _;_
   * (W2(n)) \((C_{2^{n}}\times C_{2^{n}})\rtimes\mathfrak{S}_{3}\) _;_
   * (W3(n)) \(\operatorname{SL}_{2}^{n}(q)\) _where_ \(q\) _is a power of a prime number such that_ \((q-1)_{2}=2^{n}\)_;_
   * (W4(n)) \(\operatorname{SU}_{2}^{n}(q)\) _where_ \(q\) _is a power of a prime number such that_ \((q+1)_{2}=2^{n}\)_;_
   * (W5(n)) \(\operatorname{PSL}_{3}(q)\) _where_ \(q\) _is a power of a prime number such that_ \((q-1)_{2}=2^{n}\)_;_
   * (W6(n)) \(\operatorname{PSU}_{3}(q)\) _where_ \(q\) _is a power of a prime number such that_ \((q+1)_{2}=2^{n}\)_._
   _Moreover, in all cases, the splendid Morita equivalence is induced by the Scott module \(\operatorname{Sc}(G\times G^{\prime},\Delta P)\), where \(P\) is also seen as a Sylow \(2\)-subgroup of \(G^{\prime}\)._
2. _In_ (a)_, more accurately, if_ \(G_{1}\) _and_ \(G_{2}\) _are two finite groups belonging to the same infinite family of finite groups_ (Wj(n)) _with_ \(\mathsf{j}\in\{\mathsf{3},\mathsf{4},\mathsf{5},\mathsf{6}\}\)_, then_ \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(kG_{2})\)_._
We emphasize that in the case of principal blocks of tame representation type, treated in [11, 12, 13], a classification of these blocks up to Morita equivalence was known by Erdmann's work on tame algebras [1]. A major difference in the case of wreathed Sylow \(2\)-subgroups lies in the fact that a classification of these blocks up to Morita equivalence was, to our knowledge, not known. However, it follows from our methods that the classification up to splendid Morita equivalence, which we have obtained, coincides with the classification up to Morita equivalence.
**Theorem 1.3**.: _Let \(k\) be an algebraically closed field of characteristic \(2\) and let \(G\) be a finite group with a Sylow \(2\)-subgroup isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) for a fixed integer \(n\geq 2\). Then \(B_{0}(kG)\) is Morita equivalent to the principal block of precisely one of the families of groups_ (W1(n))_,_ (W2(n))_,_ (W3(n))_,_ (W4(n))_,_ (W5(n))_, or_ (W6(n)) _as in Theorem_ 1.2_(a)._
As an immediate consequence of Theorem 1.2 we also obtain that Puig's Finiteness Conjecture holds if we restrict our attention to principal blocks with a defect group isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\).
**Corollary 1.4**.: _For each integer \(n\geq 2\) there are only finitely many splendid Morita equivalence classes of principal \(2\)-blocks with defect groups isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\)._
This paper is organised as follows. In Section 2 the notation is introduced. In Section 3 we state and prove the classification of finite groups \(G\) with a wreathed Sylow \(2\)-subgroup and \(O_{2^{\prime}}(G)=1\). In Section 4 we recall, state and prove preliminary results on splendid Morita equivalences and on module theory over finite-dimensional algebras. In Sections 5, 6 and 7 we prove part (b) of Theorem 1.2. Section 8 contains the proof of Theorem 1.2 and Theorem 1.3. Finally, Appendix A fixes a gap in the proof of [11, Proposition 3.3(b)].
## 2. Notation
Throughout this paper, unless otherwise stated, we adopt the following notation and conventions. We let \(k\) be an algebraically closed field of characteristic \(p>0\). All groups considered are finite, all \(k\)-algebras are finite-dimensional and all modules over finite-dimensional algebras considered are finitely generated right modules. The symbols \(G\), \(G^{\prime}\), \(G_{1}\), \(G_{2}\), \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) always denote finite groups of order divisible by \(p\).
Furthermore, we denote by \(\operatorname{Syl}_{p}(G)\) the set of all Sylow \(p\)-subgroups of \(G\), and for \(P\in\operatorname{Syl}_{p}(G)\), we let \(\mathcal{F}_{P}(G)\) be the fusion system of \(G\) on \(P\). If \(H\leq G\), we let \(\Delta H:=\{(h,h)\in G\times G\,|\,h\in H\}\) denote the diagonal embedding of \(H\) in \(G\times G\). Given an integer \(m\geq 2\), we let \(D_{2^{m}}\) denote the dihedral group of order \(2^{m}\), \(C_{m}\) denote the cyclic group of order \(m\), and \(C_{2^{m}}\wr C_{2}\) denote the wreathed product of \(C_{2^{m}}\) by \(C_{2}\). Given an integer \(t\geq 0\) and a positive prime power \(q\), we let
\[\operatorname{SL}_{2}^{t}(q):=\{A\in\operatorname{GL}_{2}(q)\,|\,\det(A)^{2^{ t}}=1\}\ \ \text{and}\ \ \operatorname{SU}_{2}^{t}(q):=\{A\in\operatorname{GU}_{2}(q)\,|\,\det(A)^{2^{t} }=1\}\,,\]
as already defined in the introduction.
Given a finite-dimensional \(k\)-algebra \(A\), we denote by \(\operatorname{rad}(A)\) the Jacobson radical of \(A\) and by \(1_{A}\) the unit element of \(A\), respectively. Furthermore, if \(X\) is an \(A\)-module and \(m\geq 0\) is an integer, then we denote by \(\operatorname{soc}^{m}(X):=\{x\in X\mid x\cdot\operatorname{rad}(A)^{m}=0\}\) the \(m\)-th socle of \(X\), where \(\operatorname{soc}(X):=\operatorname{soc}^{1}(X)\) is the socle of \(X\), and for \(1\leq i\leq\ell\), where \(\ell\) is the Loewy (or radical) length of \(X\), we set
\[S_{i}(X):=\operatorname{soc}^{i}(X)/\operatorname{soc}^{i-1}(X)\quad\text{ and }\quad L_{i}(X):=X\operatorname{rad}(A)^{i-1}/X\operatorname{rad}(A)^{i}\]
and we write \(\operatorname{hd}(X)\) for the head of \(X\). We then talk about the _radical (Loewy) series_ and about the _socle series_ of \(X\) as defined in [13, Chap. I SS8]. We describe a uniserial \(A\)-module \(X\) with simple composition factors \(L_{i}(X)\cong S_{i}\) for simple \(A\)-modules \(S_{1},\cdots,S_{\ell}\) via the diagram
\[X=\begin{array}{c}\framebox{$S_{1}$}\\ \vdots\\ S_{\ell}\end{array}.\]
We denote by \(P(X)\) the projective cover of an \(A\)-module \(X\) and by \(\Omega(X)\) the kernel of the canonical morphism \(P(X)\twoheadrightarrow X\). Dually, we let \(\Omega^{-1}(X):=I(X)/X\) where \(I(X)\) is an injective envelope of \(X\), and we denote by \(X^{*}\) the \(k\)-dual of \(X\) (which is a left \(A\)-module). Given a simple \(A\)-module \(S\), we denote by \(c_{X}(S)\) the multiplicity of \(S\) as a composition factor of \(X\) and if \(S_{1},\cdots,S_{n}\) are all the pairwise non-isomorphic composition factors of \(X\) with multiplicities \(m_{1},\ldots,m_{n}\), respectively, then we write \(X=m_{1}\times S_{1}+\cdots+m_{n}\times S_{n}\) (as composition factors). If \(Y\) is another \(A\)-module, then \(Y\mid X\) (resp. \(Y\nmid X\)) means that \(Y\) is isomorphic (resp. not isomorphic) to a direct summand of \(X\), and (proj) denotes a projective \(A\)-module (which we do not need to specify).
We write \(B_{0}(kG)\) for the principal block of the group algebra \(kG\). Given a block \(B\) of \(kG\), we write \(1_{B}\) for the block idempotent of \(B\) and \(C_{B}\) for the Cartan matrix of \(B\). We denote by \(\operatorname{Irr}(B)\) and \(\operatorname{IBr}(B)\), respectively, the sets of all irreducible ordinary and Brauer characters of \(G\) belonging to \(B\). We write \(k(B):=|\operatorname{Irr}(B)|\) and \(\ell(B):=|\operatorname{IBr}(B)|\) and \(k_{i}(B):=|\{\chi\in\operatorname{Irr}(B)\mid\operatorname{ht}(\chi)=i\}|\) where \(\operatorname{ht}(\chi)\) is the height of \(\chi\). We denote by \(k_{G}\) the trivial \(kG\)-module. Given a \(kG\)-module \(M\) and a \(p\)-subgroup \(Q\leq G\) we denote by \(M(Q)\) the Brauer construction of \(M\) with respect to \(Q\). When \(H\leq G\), \(N\) is a \(kH\)-module and \(M\) is a \(kG\)-module, we write \(N\!\!\!\uparrow^{G}\) and \(M\!\!\!\!\downarrow_{H}\) respectively for the induction of \(N\) to \(G\) and the restriction of \(M\) to \(H\). For a subgroup \(H\leq G\) we denote by \(\operatorname{Sc}(G,H)\) the Scott module of \(kG\) with respect to \(H\), which by definition is the unique indecomposable direct summand of \(k_{H}\!\!\!\uparrow^{G}\) (up to isomorphism) that has the trivial module \(k_{G}\) as a constituent of its head (or equivalently of its socle). This is a \(p\)-permutation module (see [14, Chapter 4, SS8.4]).
If \(B_{1}\) and \(B_{2}\) are two finite-dimensional \(k\)-algebras and \(M\) is a \((B_{1},B_{2})\)-bimodule, we also write \({}_{B_{1}}\!M_{B_{2}}\) to emphasize the \((B_{1},B_{2})\)-bimodule structure on \(M\). Now, if \(B_{1}\) and \(B_{2}\) are blocks of \(kG_{1}\) and \(kG_{2}\), respectively, then we can view every \((B_{1},B_{2})\)-bimodule \(M\) as a right \(k(G_{1}\times G_{2})\)-module via the right \((G_{1}\times G_{2})\)-action defined by \(m\cdot(g_{1},g_{2}):={g_{1}}^{-1}mg_{2}\) for every \(m\in M\), \(g_{1}\in G_{1}\), \(g_{2}\in G_{2}\). Furthermore, the blocks \(B_{1}\) and \(B_{2}\) are called _splendidly Morita equivalent_ (or _source-algebra equivalent_, or _Puig equivalent_), if there is a Morita equivalence between \(B_{1}\) and \(B_{2}\) induced by a \((B_{1},B_{2})\)-bimodule \(M\) which is a \(p\)-permutation module when viewed as a right \(k(G_{1}\times G_{2})\)-module. In this case, we write \(B_{1}\sim_{SM}B_{2}\). By a result of Puig and Scott, this definition is equivalent to the condition that \(B_{1}\) and \(B_{2}\) have source algebras which are isomorphic as interior \(P\)-algebras (see
[11, Theorem 4.1]). Also, by a result of Puig (see [11, Proposition 9.7.1]), the defect groups of \(B_{1}\) and \(B_{2}\) are isomorphic. Hence we may identify them.
## 3 Finite groups with wreathed Sylow \(2\)-subgroups
To begin with, we collect essential results about finite groups with wreathed Sylow \(2\)-subgroups. In particular, we classify such groups modulo \(O_{2^{\prime}}(G)\). This classification is a byproduct of the results of Alperin-Brauer-Gorenstein in [1].
**Lemma 3.1**.: _Let \(P:=C_{2^{n}}\wr C_{2}\) with \(n\geq 2\). Then the \(2\)-rank of \(P\) is \(2\) and \(\operatorname{Aut}(P)\) is a \(2\)-group._
Proof.: See e.g. [1, p. 5956].
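For a concrete feel for these groups, the statement can also be checked by brute force for small \(n\). The following Python snippet (an illustrative sanity check using SymPy's permutation groups, not part of the proof) realises \(C_{4}\wr C_{2}\) on \(8\) points, confirms that its order is \(2^{2n+1}=32\), and confirms that it contains no elementary abelian subgroup of order \(8\); together with the obvious Klein four subgroup inside the base group \(C_{4}\times C_{4}\), this gives \(2\)-rank exactly \(2\). The statement about \(\operatorname{Aut}(P)\) is not checked here.

```python
from itertools import combinations
from sympy.combinatorics import Permutation, PermutationGroup

n = 2                                         # check the case P = C_4 wr C_2
a = Permutation([1, 2, 3, 0, 4, 5, 6, 7])     # generates the first C_4
b = Permutation([0, 1, 2, 3, 5, 6, 7, 4])     # generates the second C_4
t = Permutation([4, 5, 6, 7, 0, 1, 2, 3])     # swaps the two C_4 factors
P = PermutationGroup([a, b, t])
assert P.order() == 2 ** (2 * n + 1)          # |P| = 32

involutions = [g for g in P.elements if g.order() == 2]
# Look for three pairwise commuting involutions generating a subgroup of order 8,
# i.e. an elementary abelian subgroup of 2-rank 3.  None should exist.
has_rank_three = any(
    x * y == y * x and x * z == z * x and y * z == z * y
    and PermutationGroup([x, y, z]).order() == 8
    for x, y, z in combinations(involutions, 3)
)
print(P.order(), len(involutions), has_rank_three)   # expected output: 32 7 False
```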
For the benefit of legibility we state again Theorem 1.1 of the introduction, before we prove it.
**Theorem 3.2**.: _Let \(G\) be a finite group with a Sylow \(2\)-subgroup isomorphic to a wreathed \(2\)-group \(C_{2^{n}}\wr C_{2}\) for an integer \(n\geq 2\) such that \(O_{2^{\prime}}(G)=1\). Let \(q:=r^{f}\) denote a power of a prime number \(r\) for a positive integer \(f\geq 1\). Then one of the following holds:_
* (WR1) \(G\cong C_{2^{n}}\wr C_{2}\) _;_
* (WR2) \(G\cong(C_{2^{n}}\times C_{2^{n}})\rtimes\mathfrak{S}_{3}\) _;_
* (WR3) \(G\cong\operatorname{SL}_{2}^{n}(q)\rtimes C_{d}\) _where_ \((q-1)_{2}=2^{n}\) _and_ \(d\mid f\) _is odd;_
* (WR4) \(G\cong\operatorname{SU}_{2}^{n}(q)\rtimes C_{d}\) _where_ \((q+1)_{2}=2^{n}\) _and_ \(d\mid f\) _is odd;_
* (WR5) \(G\cong\operatorname{PSL}_{3}(q).H\) _where_ \((q-1)_{2}=2^{n}\)_,_ \(H\leq C_{(q-1,3)}\times C_{d}\) _and_ \(d\,|\,f\) _is odd;_
* (WR6) \(G\cong\operatorname{PSU}_{3}(q).H\) _where_ \((q+1)_{2}=2^{n}\)_,_ \(H\leq C_{(q+1,3)}\times C_{d}\) _and_ \(d\,|\,f\) _is odd._
Proof.: If \(G\) is \(2\)-nilpotent, then Case (WR1) holds since \(O_{2^{\prime}}(G)=1\). In all other cases, \(G\) is a \(D\)-group, a \(Q\)-group or a \(QD\)-group with the notation of [1, Definition 2.1]. Let \(G\) be a \(D\)-group. Then there exists \(K\unlhd G\) of index \(2\) such that \(P\cap K\cong C_{2^{n}}\times C_{2^{n}}\). By [1, Theorem 1], \(K\cong(C_{2^{n}}\times C_{2^{n}})\rtimes C_{3}\) and Case (WR2) holds.
If \(G\) is a \(Q\)-group, then Case (WR3) or (WR4) occurs by Propositions 3.2 and 3.3 of [1]. Finally, let \(G\) be a \(QD\)-group. Then by [1, Proposition 2.2], \(N:=O^{2^{\prime}}(G)\) is simple and the possible isomorphism types of \(N\) are given by the main result of [1]. Since \(C_{G}(N)\cap N=Z(N)=1\) we have \(C_{G}(N)\leq O_{2^{\prime}}(G)=1\). The possibilities for \(G/N\leq\operatorname{Out}(N)\) can be deduced from [1]. Since \(|G/N|\) is odd, no graph automorphism is involved. Hence, \(G/N\leq C_{(3,q-1)}\rtimes C_{d}\) or \(G/N\leq C_{(3,q+1)}\rtimes C_{d}\). In fact, \(G/N\) must be abelian since \(|G/N|\) is odd.
**Theorem 3.3**.: _Let \(G\) be as in Theorem 3.2 and let \(B:=B_{0}(kG)\). With the same labelling of cases as in Theorem 3.2 the following holds:_
* \(\ell(B)=1\)_,_ \(k(B)=2^{2n-1}+3\cdot 2^{n-1}\)_,_ \(k_{0}(B)=2^{n+1}\)_,_ \(k_{1}(B)=2^{2n-1}-2^{n-1}\)_;_
* \(\ell(B)=2\)_,_ \(k(B)=(2^{2n-1}+9\cdot 2^{n-1}+4)/3\)_,_ \(k_{0}(B)=2^{n+1}\)_,_ \(k_{1}(B)=(2^{2n-1}-3\cdot 2^{n-1}+4)/3\)_;_
* \(\ell(B)=2\)_,_ \(k(B)=2^{2n-1}+2^{n+1}\)_,_ \(k_{0}(B)=2^{n+1}\)_,_ \(k_{1}(B)=2^{2n-1}-2^{n-1}\)_,_ \(k_{n}(B)=2^{n-1}\)_;_
* \(\ell(B)=3\)_,_ \(k(B)=(2^{2n-1}+3\cdot 2^{n+1}+4)/3\)_,_ \(k_{0}(B)=2^{n+1}\)_,_ \(k_{1}(B)=(2^{2n-1}-3\cdot 2^{n-1}+4)/3\)_,_ \(k_{n}(B)=2^{n-1}\)_._
Proof.: Cases (WR1) and (WR2) follow from elementary group theory. If Case (WR3) or Case (WR4) of Theorem 3.2 holds, then the numbers follow from [10, Proposition (7.G)]. Suppose now that Case (WR5) or Case (WR6) holds, then the number \(k(B)\) follows from [15, Theorem 1A] - here, Brauer even computed the degrees of the ordinary irreducible characters in \(B\) - whereas the number \(\ell(B)\) can be obtained with [10, Lemma 7.I] for instance.
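The numerology in Theorem 3.3 can be cross-checked mechanically: in each of the four listed cases the heights must add up to \(k(B)\), that is, \(k(B)=k_{0}(B)+k_{1}(B)+k_{n}(B)\) (with \(k_{n}(B)=0\) in the first two cases). The following short Python snippet is an illustrative verification of this arithmetic for small \(n\); it is not part of the proof.

```python
from fractions import Fraction as Fr

for n in range(2, 11):
    A = Fr(2) ** (2 * n - 1)     # 2^(2n-1)
    B = Fr(2) ** (n - 1)         # 2^(n-1)
    C = Fr(2) ** (n + 1)         # 2^(n+1)
    assert A + 3 * B == C + (A - B)                              # first listed case
    assert (A + 9 * B + 4) / 3 == C + (A - 3 * B + 4) / 3        # second listed case
    assert A + C == C + (A - B) + B                              # third listed case
    assert (A + 3 * C + 4) / 3 == C + (A - 3 * B + 4) / 3 + B    # fourth listed case
print("k(B) = k_0(B) + k_1(B) + k_n(B) holds in all four cases for n = 2,...,10")
```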
## 4. Preliminaries
We state below several results which will enable us to construct splendid Morita equivalences induced by Scott modules, but which are not restricted to characteristic \(2\). Therefore, throughout this section we may assume that \(k\) is an algebraically closed field of arbitrary characteristic \(p>0\).
Our first main tool to construct splendid Morita equivalences is given by the following Theorem which is an extended version of a well-known result due to Alperin [1] and Dade [1] restated in terms of splendid Morita equivalences.
**Theorem 4.1** (Alperin-Dade).: _Let \(\widetilde{G}_{1}\) and \(\widetilde{G}_{2}\) be finite groups and assume \(G_{1}\unlhd\widetilde{G}_{1}\), \(G_{2}\unlhd\widetilde{G}_{2}\) are normal subgroups such that \(\widetilde{G}_{1}/G_{1}\) and \(\widetilde{G}_{2}/G_{2}\) are \(p^{\prime}\)-groups, that \(G_{1}\) and \(G_{2}\) have a common Sylow \(p\)-subgroup \(P\in\operatorname{Syl}_{p}(G_{1})\cap\operatorname{Syl}_{p}(G_{2})\), and that \(\widetilde{G}_{1}=G_{1}C_{\widetilde{G}_{1}}(P)\) and \(\widetilde{G}_{2}=G_{2}C_{\widetilde{G}_{2}}(P)\). Then the following assertions hold._
1. _If_ \(\tilde{e}\) _and_ \(e\) _denote the block idempotents of_ \(B_{0}(k\widetilde{G}_{1})\) _and_ \(B_{0}(kG_{1})\)_, respectively, then the map_ \(B_{0}(kG_{1})\longrightarrow B_{0}(k\widetilde{G}_{1}),a\mapsto a\tilde{e}\) _is an isomorphism of_ \(k\)_-algebras. Moreover, the right_ \(k[\widetilde{G}_{1}\times G_{1}]\)_-module_ \(\operatorname{Sc}(\widetilde{G}_{1}\times G_{1},\Delta P)=B_{0}(k\widetilde{G}_{1})\!\downarrow^{\widetilde{G}_{1}\times\widetilde{G}_{1}}_{\widetilde{G}_{1}\times G_{1}}=\tilde{e}k\widetilde{G}_{1}\) _induces a splendid Morita equivalence between_ \(B_{0}(k\widetilde{G}_{1})\) _and_ \(B_{0}(kG_{1})\)_._
2. _The Scott module_ \(\operatorname{Sc}(\widetilde{G}_{1}\times\widetilde{G}_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(k\widetilde{G}_{1})\) _and_ \(B_{0}(k\widetilde{G}_{2})\) _if and only if the Scott module_ \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(kG_{2})\)_._
Proof.: Assertion (a) follows from [1, 1]. More precisely, the given map is an isomorphism of \(k\)-algebras by [1, Theorem] and [1, Theorems 1 and 2] proves that restriction from \(\widetilde{G}_{1}\) to \(G_{1}\) induces a splendid Morita equivalence. Assertion (b) is given by [10, Lemma 5.1].
**Lemma 4.2**.: _Let \(\widetilde{G}_{1},\widetilde{G}_{2}\) be finite groups. Assume that \(G_{1}\unlhd\widetilde{G}_{1}\) and \(G_{2}\unlhd\widetilde{G}_{2}\) are normal subgroups such that \(\widetilde{G}_{1}/G_{1}\), \(\widetilde{G}_{2}/G_{2}\) are \(p^{\prime}\)-groups and assume that \(G_{1}\) and \(G_{2}\) have a common Sylow \(p\)-subgroup \(P\) such that \(\operatorname{Aut}(P)\) is a \(p\)-group. Then, conclusions_ (a) _and_ (b) _of Theorem 4.1 hold._
Proof.: It suffices to prove that the hypotheses of Theorem 4.1 are satisfied. So, let \(i\in\{1,2\}\). Since \(\operatorname{Aut}(P)\) is a \(p\)-group we have \(N_{\widetilde{G}_{i}}(P)=PC_{\widetilde{G}_{i}}(P)\). Moreover, by Frattini's argument \(\widetilde{G}_{i}=G_{i}N_{\widetilde{G}_{i}}(P)\), thus \(\widetilde{G}_{i}=G_{i}C_{\widetilde{G}_{i}}(P)\), as required.
Next, it is well-known that inflation from the quotient by a normal \(p^{\prime}\)-subgroup induces an isomorphism of blocks as \(k\)-algebras. In fact, there is a splendid Morita equivalence induced by a Scott module and we have the following stronger result.
**Lemma 4.3**.: _Let \(G_{1},G_{2}\) be finite groups with a common Sylow \(p\)-subgroup \(P\). Let \(N_{1}\unlhd G_{1}\) and \(N_{2}\unlhd G_{2}\) be normal \(p^{\prime}\)-subgroups and write \({}^{-}:G_{1}\longrightarrow G_{1}/N_{1}=:\overline{G_{1}}\), respectively \({}^{-}:G_{2}\longrightarrow G_{2}/N_{2}=:\overline{G_{2}}\), for the quotient homomorphisms, so that, by abuse of notation, we may identify \(\overline{P}=PN_{1}/N_{1}\cong P\) with \(\overline{P}=PN_{2}/N_{2}\cong P\). Then the following assertions hold:_
1. \(\operatorname{Sc}(G_{1}{\times}\overline{G_{1}},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(k\overline{G_{1}})\)_, where_ \(\Delta P\) _is identified with_ \(\{(u,\bar{u})\,|\,u\in P\}\)_;_
2. \(\operatorname{Sc}(G_{1}{\times}G_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(kG_{2})\) _if and only if_ \(\operatorname{Sc}(\overline{G_{1}}{\times}\overline{G_{2}},\Delta\overline{P})\) _induces a splendid Morita equivalence between_ \(B_{0}(k\overline{G_{1}})\) _and_ \(B_{0}(k\overline{G_{2}})\)_._
Proof.: (a) By the assumption \(N_{1}\leq O_{p^{\prime}}(G_{1})\), hence \(N_{1}\) acts trivially on \(B_{0}(kG_{1})\). Thus, \(B_{0}(kG_{1})\) and its image \(\overline{B_{0}(kG_{1})}=B_{0}(k\overline{G_{1}})\) in \(k\overline{G_{1}}\) are isomorphic as interior \(P\)-algebras. Part (a) follows then immediately from the fact that \(\operatorname{Sc}(G_{1}\times\overline{G}_{1},\Delta P)={}_{kG_{1}}B_{0}(kG_ {1})_{k\overline{G_{1}}}\) (seen as a \((kG_{1},k\overline{G_{1}})\)-bimodule). Part (b) follows from (a) and the fact that
\[\operatorname{Sc}(G_{1}\times\overline{G_{1}},\Delta P)\otimes_{B_{0}(k \overline{G_{1}})}\operatorname{Sc}(\overline{G_{1}}{\times}\overline{G_{2}},\Delta\overline{P})\otimes_{B_{0}(k\overline{G_{2}})}\operatorname{Sc}( \overline{G_{2}}{\times}G_{2},\Delta P)\cong\operatorname{Sc}(G_{1}\times G_ {2},\Delta P)\,.\]
(See e.g. the proof of [10, Lemma 5.1] for a detailed argument proving this isomorphism.)
The following Lemma is also essential to treat central extensions.
**Lemma 4.4**.: _Let \(G_{1}\) and \(G_{2}\) be finite groups having a common Sylow \(p\)-subgroup \(P\). Assume that \(Z_{1}\leq Z(G_{1})\) and \(Z_{2}\leq Z(G_{2})\) are central subgroups such that \(P\cap Z_{1}=P\cap Z_{2}\). Set \(\overline{G_{1}}:=G_{1}/Z_{1}\), \(\overline{G_{2}}:=G_{2}/Z_{2}\) and \(\overline{P}:=PZ_{1}/Z_{1}\cong PZ_{2}/Z_{2}\), which is a common Sylow \(p\)-subgroup of \(\overline{G_{1}}\) and \(\overline{G_{2}}\). Then, \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\) if and only if \(\operatorname{Sc}(\overline{G_{1}}\times\overline{G_{2}},\Delta\overline{P})\) induces a splendid Morita equivalence between \(B_{0}(k\overline{G_{1}})\) and \(B_{0}(k\overline{G_{2}})\)._
Proof.: Let \(i\in\{1,2\}\). Clearly, we have \(Z_{i}=(P\cap Z_{i})\times O_{p^{\prime}}(Z_{i})\) and
\[\overline{G_{i}}=G_{i}/Z_{i}\cong\left(G_{i}/(P\cap Z_{i})\right)/\left(Z_{i} /(P\cap Z_{i})\right)=:\overline{\overline{G_{i}}}\,.\]
Write \(\widetilde{P}\) for the image of \(P\) in the quotients \(G_{i}/(P\cap Z_{i})\) and write \(\overline{\overline{P}}\) for the image of \(P\) in the quotients \(\overline{\overline{G_{i}}}\). Now, on the one hand, by Theorem A.1, the Scott module \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\) if and only if \(\operatorname{Sc}(G_{1}/(P\cap Z_{1})\times G_{2}/(P\cap Z_{2}),\Delta \widetilde{P})\) induces a splendid Morita equivalence between \(B_{0}(k[G_{1}/(P\cap Z_{1})])\) and \(B_{0}(k[G_{2}/(P\cap Z_{2})])\), which by Lemma 4.3(b) happens if and only if \(\operatorname{Sc}(\overline{\overline{G_{1}}}\times\overline{\overline{G_{2}}},\Delta\overline{\overline{P}})\) induces a splendid Morita equivalence between \(B_{0}(k\overline{\overline{G_{1}}})\) and \(B_{0}(k\overline{\overline{G_{2}}})\). The claim follows.
The next theorem is a standard method, called "gluing method", which was already applied in [10, 11]. It relies on gluing results, allowing us to construct stable equivalences of Morita type, which is a slight variation of different results of the same type due to Broue, Rouquier, Linckelmann and Rickard. See e.g. [1, 6.3.Theorem], [16, Theorem 5.6] and [17, Theorem 3.1].
**Theorem 4.5**.: _Let \(G_{1}\) and \(G_{2}\) be finite groups with a common Sylow \(p\)-subgroup \(P\) satisfying \(\mathcal{F}_{P}(G_{1})=\mathcal{F}_{P}(G_{2})\). Then, \(M:=\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\) provided the following two conditions are satisfied:_
1. _for every subgroup_ \(Q\leq P\) _of order_ \(p\)_, the bimodule_ \(M(\Delta Q)\) _induces a Morita equivalence between_ \(B_{0}(k\,C_{G_{1}}(Q))\) _and_ \(B_{0}(k\,C_{G_{2}}(Q))\)_; and_
2. _for every simple_ \(B_{0}(kG_{1})\)_-module_ \(S_{1}\)_, the_ \(B_{0}(kG_{2})\)_-module_ \(S_{1}\otimes_{B_{0}(kG_{1})}M\) _is again simple._
Proof.: By [14, Lemma 4.1], Condition (I) is equivalent to the fact that \(M\) induces a stable equivalence of Morita type between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\). Therefore, applying [13, Theorem 2.1], Condition (II) now implies that \(M\) induces a Morita equivalence between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\). This equivalence is necessarily splendid since \(M\) is a \(p\)-permutation module by definition.
**Lemma 4.6**.: _Let \(A\) be a finite-dimensional \(k\)-algebra. Let \(X\) be an \(A\)-module and let \(Y\) be an \(A\)-submodule such that \(X/Y\) and \(\operatorname{soc}(Y)\) are both simple. If \(Y\) is not a direct summand of \(X\), then \(\operatorname{soc}(X)=\operatorname{soc}(Y)\), and hence \(X\) is indecomposable._
Proof.: Consider the short exact sequence \(0\longrightarrow Y\stackrel{{ i}}{{\longrightarrow}}X \stackrel{{\pi}}{{\longrightarrow}}X/Y\longrightarrow 0\), where \(i\) is the canonical inclusion and \(\pi\) is the quotient morphism. By [12, I Lemma 8.5(i) and (ii)], \(\operatorname{soc}(X)=\operatorname{soc}(Y)\) or \(\operatorname{soc}(X)=\operatorname{soc}(Y)\oplus S\) where \(S\) is an \(A\)-submodule of \(X\) such that \(S\cong X/Y\). Assume now that \(\operatorname{soc}(X)=\operatorname{soc}(Y)\oplus S\). First, we claim that \(S\cap Y=0\). So suppose that \(S\cap Y\!\neq\!0\). Obviously, \(\operatorname{soc}(S\cap Y)\leq\operatorname{soc}(S)=S\), so \(\operatorname{soc}(S\cap Y)=S\neq 0\), since \(S\) is simple. Hence \(S\leq\operatorname{soc}(Y)\), which contradicts the assumption that \(\operatorname{soc}(X)\) is the direct sum of its submodules \(\operatorname{soc}(Y)\) and \(S\), proving the claim. Hence, \(S\oplus Y\) is an \(A\)-submodule of \(X\), implying that \(X=S\oplus Y\) and contradicting the assumption.
Finally, the next lemma is often called the "stripping-off method". It will be used to verify Condition (II) of Theorem 4.5 in concrete cases.
**Lemma 4.7** ([14, Lemma A.1]).: _Let \(A\) and \(B\) be self-injective finite-dimensional \(k\)-algebras. Let \(F:\operatorname{mod}\)-\(A\longrightarrow\operatorname{mod}\)-\(B\) be a covariant functor satisfying the following conditions:_
1. \(F\) _is exact;_
2. _if_ \(X\) _is a projective_ \(A\)_-module, then_ \(F(X)\) _is a projective_ \(B\)_-module;_
3. \(F\) _realises a stable equivalence from_ \(\operatorname{mod}\)_-_\(A\) _to_ \(\operatorname{mod}\)_-_\(B\)_._
_Then, the following assertions hold._
* _(Stripping-off method, case of socle)_
\(R^{\prime}\subseteq\ker(F(\pi))\)_,_ \(F(X)=Y\oplus R^{\prime}\) _and_
\[\ker\left(F(X)\overset{F(\pi)}{\twoheadrightarrow}F(X/X^{\prime}) \right)=\ker\left(Y\overset{F(\pi)|_{Y}}{\twoheadrightarrow}F(X/X^{\prime}) \right)\oplus\left(\text{proj}\right).\]
## 5 Groups of type (W3(n)) and (W4(n))
**Hypothesis 5.1**.: From now on and until the end of this manuscript we assume that the algebraically closed field \(k\) has characteristic \(p=2\). Furthermore, \(G\), \(G_{1}\), \(G_{2}\), \(\mathcal{G}_{1}\), \(\mathcal{G}_{2}\), \(\widetilde{G}_{1}\), \(\widetilde{G}_{2}\), \(\mathsf{G}\), \(\mathsf{G}_{1}\) and \(\mathsf{G}_{2}\) always denote finite groups with a common Sylow \(2\)-subgroup \(P\cong C_{2^{n}}\wr C_{2}\), where \(n\geq 2\) is a fixed integer. In other words, we choose a Sylow \(2\)-subgroup of each of these groups and we identify them for simplicity. Moreover, \(q\), \(q_{1}\) and \(q_{2}\) are (possibly different) positive powers of odd prime numbers such that \((q-1)_{2}=(q_{1}-1)_{2}=(q_{2}-1)_{2}=2^{n}\).
In this section and the next two ones, we prove Theorem 1.2(b) through a case-by-case analysis. We start with the groups of types (W3(n)) and (W4(n)), for which we reduce the problem to the classification of principal blocks with dihedral defect groups up to splendid Morita equivalence obtained in [10, Theorem 1.1]. The group theory setting to keep in mind is described in the following remark.
**Remark 5.2**.: For any positive power \(q\) of an odd prime number, [1, p.4] shows that we have the following chains of normal subgroups:
\[\operatorname{SL}_{2}(q)\;\unlhd\;\operatorname{SL}_{2}^{n}(q)\;\unlhd\;\operatorname{GL}_{2}(q)\qquad\text{and}\qquad\operatorname{SU}_{2}(q)\;\unlhd\;\operatorname{SU}_{2}^{n}(q)\;\unlhd\;\operatorname{GU}_{2}(q)\,,\]
where moreover \(\operatorname{SL}_{2}(q)\cong\operatorname{SU}_{2}(q)\). In particular, if \((q-1)_{2}=2^{n}\) (resp. \((q+1)_{2}=2^{n}\)), then \(\operatorname{SL}_{2}^{n}(q)\) has odd index \((q-1)/2^{n}\) in \(\operatorname{GL}_{2}(q)\) (resp. \(\operatorname{SU}_{2}^{n}(q)\) has odd index \((q+1)/2^{n}\) in \(\operatorname{GU}_{2}(q)\)).
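As a concrete illustration of these subgroups (not needed for the arguments below), the index computation can be verified by brute force for a small value of \(q\). The following Python snippet assumes \(q=13\) and \(t=n=2\), so that \((q-1)_{2}=4=2^{n}\), and checks that \([\operatorname{GL}_{2}(q):\operatorname{SL}_{2}^{n}(q)]=(q-1)/2^{n}=3\) is odd.

```python
from itertools import product

q, t = 13, 2                              # (q - 1)_2 = 4 = 2^t
gl2, sl2t = 0, 0
for a, b, c, d in product(range(q), repeat=4):
    det = (a * d - b * c) % q
    if det == 0:
        continue                          # matrix not invertible
    gl2 += 1
    if pow(det, 2 ** t, q) == 1:          # det(A)^(2^t) = 1 in F_q
        sl2t += 1
print(gl2, sl2t, gl2 // sl2t)             # expected: 26208 8736 3 (odd index)
```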
**Proposition 5.3**.: _For each \(i\in\{1,2\}\) let \(G_{i}:=\mathrm{SL}_{2}^{n}(q_{i})\), \(\mathcal{G}_{i}:=\mathrm{GL}_{2}(q_{i})\) and assume that \((q_{i}-1)_{2}=2^{n}\). Then, the following assertions hold:_
1. \(\mathrm{Sc}(\mathcal{G}_{1}\times\mathcal{G}_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(k\mathcal{G}_{1})\) _and_ \(B_{0}(k\mathcal{G}_{2})\)_;_
2. \(\mathrm{Sc}(G_{1}\times G_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(kG_{2})\)_._
Proof.: Elementary calculations yield \(G_{i}\lhd\mathcal{G}_{i}\) and \(|\mathcal{G}_{i}/G_{i}|=(q_{i}-1)/2^{n}\) for each \(i\in\{1,2\}\) (see Remark 5.2). In particular both indices are odd. Hence, by Lemma 3.1 and Lemma 4.2, assertion (b) follows from assertion (a), so it suffices to prove (a).
Now, \(P\cap Z(\mathcal{G}_{1})=P\cap Z(\mathcal{G}_{2})=Z(P)\), so \(\overline{P}:=(PZ(\mathcal{G}_{1}))/Z(\mathcal{G}_{1})\cong(PZ(\mathcal{G}_{2 }))/Z(\mathcal{G}_{2})\,,\) and hence, up to identification, we can consider that \(\overline{P}\in\mathrm{Syl}_{2}(\mathcal{G}_{1}/Z(\mathcal{G}_{1}))\cap \mathrm{Syl}_{2}(\mathcal{G}_{2}/Z(\mathcal{G}_{2}))\). Moreover, we have
\[\overline{P}\cong P/Z(P)\cong D_{2^{n+1}}\,,\]
see e.g. [11, (2.A) Lemma (iii)]. Since \(\mathcal{G}_{i}/Z(\mathcal{G}_{i})\cong\mathrm{PGL}_{2}(q_{i})\) for each \(i\in\{1,2\}\), assertion (a) now follows directly from Lemma 4.4 and [10, Theorem 1.1].
**Proposition 5.4**.: _For each \(i\in\{1,2\}\) let \(G_{i}:=\mathrm{SU}_{2}^{n}(q_{i})\), \(\mathcal{G}_{i}:=\mathrm{GU}_{2}(q_{i})\) and assume that \((q_{i}+1)_{2}=2^{n}\). Then, the following assertions hold:_
1. \(\operatorname{Sc}(\mathcal{G}_{1}\times\mathcal{G}_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(k\mathcal{G}_{1})\) _and_ \(B_{0}(k\mathcal{G}_{2})\)_;_
2. \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) _induces a splendid Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(kG_{2})\)_._
Proof.: In this case \(G_{i}\lhd\mathcal{G}_{i}\) and \(|\mathcal{G}_{i}/G_{i}|=(q_{i}+1)/2^{n}\) for each \(i\in\{1,2\}\) (See Remark 5.2). Thus both indices are odd. Again by Lemma 3.1 and Lemma 4.2, it suffices to prove (a).
Now, \(P\cap Z(\mathcal{G}_{1})=P\cap Z(\mathcal{G}_{2})=Z(P)\). Thus \(\overline{P}:=(PZ(\mathcal{G}_{1}))/Z(\mathcal{G}_{1})\cong(PZ(\mathcal{G}_{ 2}))/Z(\mathcal{G}_{2})\) and we can consider that \(\overline{P}\in\operatorname{Syl}_{2}(\mathcal{G}_{1}/Z(\mathcal{G}_{1})) \cap\operatorname{Syl}_{2}(\mathcal{G}_{2}/Z(\mathcal{G}_{2}))\). As in the previous proof,
\[\overline{P}\cong P/Z(P)\cong D_{2^{n+1}}\,.\]
Next, for each \(i\in\{1,2\}\) we have an isomorphism \(\operatorname{SU}_{2}(q_{i})\cong\operatorname{SL}_{2}(q_{i})\), and hence \(\operatorname{PSU}_{2}(q_{i})\cong\operatorname{PSL}_{2}(q_{i})\). Furthermore, since \(q_{i}\) is odd, \(\operatorname{PGL}_{2}(q_{i})=\operatorname{PSL}_{2}(q_{i}).2\) (where \(2\) denotes the cyclic group of order \(2\) generated by the diagonal automorphism of \(\operatorname{PSL}_{2}(q_{i})\)) by Steinberg's result (see [10, Chap.6 (8.8), p. 511 and Theorem 8.11]). In other words, we have
\[\mathcal{G}_{i}/Z(\mathcal{G}_{i})=\operatorname{PGU}_{2}(q_{i})\cong \operatorname{PGL}_{2}(q_{i})\,. \tag{1}\]
Therefore, assertion (a) follows immediately from Lemma 4.4 and [11, Theorem 1.1], proving the Proposition.
## 6. Groups of type (W5(n))
We now turn to the groups of type (W5(n)). We continue using Hypothesis 5.1.
**Notation 6.1**.: Throughout this section we let \(i\in\{1,2\}\) be arbitrary and set \(G_{i}:=\operatorname{PSL}_{3}(q_{i})\), \(\mathsf{G}_{i}:=\operatorname{SL}_{3}(q_{i})\) and \(\widetilde{G}_{i}:=\operatorname{GL}_{3}(q_{i})\) where we assume that \((q_{i}-1)_{2}=2^{n}\). Thus, after identifications, we may assume that \(G_{1}\), \(G_{2}\), \(\mathsf{G}_{1}\), \(\mathsf{G}_{2}\), \(\widetilde{G}_{1}\) and \(\widetilde{G}_{2}\) have a common Sylow \(2\)-subgroup \(P\) isomorphic to \(C_{2^{n}}\wr C_{2}\). Then,
\[B_{0}(kG_{i})\sim_{SM}B_{0}(k\mathsf{G}_{i})\sim_{SM}B_{0}(k\widetilde{G}_{i}) \tag{2}\]
where the first splendid Morita equivalence is induced by inflation (as \(Z(\operatorname{SL}_{3}(q_{i}))\cong C_{(3,q_{i}-1)}\) is a \(2^{\prime}\)-group), and the second one is given by Theorem 4.1, that is, induced by restriction from \(\operatorname{GL}_{3}(q_{i})\) to \(\operatorname{SL}_{3}(q_{i})\). This means that to any simple \(kG_{i}\)-module \(R\) belonging to \(B_{0}(kG_{i})\) corresponds a simple \(B_{0}(k\widetilde{G}_{i})\)-module, which we denote by \(\widetilde{R}\), such that
\[\operatorname{Inf}_{G_{i}}^{\mathsf{G}_{i}}(R)=\operatorname{Res}_{\mathsf{G} _{i}}^{\widetilde{G}_{i}}(\widetilde{R})\,.\]
Using [1, Proposition 4.3.1 and Remark 4.2.1] we know that \(B_{0}(k\mathsf{G}_{i})\) contains three unipotent characters, namely
\[1_{\mathsf{G}_{i}},\chi_{q_{i}^{2}+q_{i}},\chi_{q_{i}^{3}}\,,\]
where we use the convention that the indices denote the degrees, whereas those lying in \(B_{0}(k\widetilde{G}_{i})\) can be written as
\[1_{\widetilde{G}_{i}},\widetilde{\chi}_{q_{i}^{2}+q_{i}},\widetilde{\chi}_{q_ {i}^{3}}\]
and satisfy \(1_{\widetilde{G}_{i}}\!\!\downarrow_{\mathsf{G}_{i}}=1_{\mathsf{G}_{i}}\), \(\widetilde{\chi}_{q_{i}^{2}+q_{i}}\!\!\downarrow_{\mathsf{G}_{i}}=\chi_{q_{i}^{ 2}+q_{i}}\) and \(\widetilde{\chi}_{q_{i}^{3}}\!\!\downarrow_{\mathsf{G}_{i}}=\chi_{q_{i}^{3}}\). (We also refer to [13], [10, 7.19. Theorem(i)], that first described these characters and their degrees.)
We obtain from [10, SS4] and the above that \(3=\ell(B_{0}(k\widetilde{G}_{i}))=\ell(B_{0}(k\mathsf{G}_{i}))\), so we may write
\[\operatorname{Irr}_{k}(B_{0}(k\mathsf{G}_{i}))=:\{k_{\mathsf{G}_{i}},S_{i},T_ {i}\}\quad\text{ and }\quad\operatorname{Irr}_{k}(B_{0}(k\widetilde{G}_{i}))=:\{k_{ \widetilde{G}_{i}},\widetilde{S}_{i},\widetilde{T}_{i}\}\,,\]
where \(S_{i}=\widetilde{S}_{i}{\downarrow}_{{\sf G}_{i}}\) and \(T_{i}=\widetilde{T}_{i}{\downarrow}_{{\sf G}_{i}}\). Moreover, by [1, p. 253], the part of the \(2\)-decomposition matrix of \(B_{0}(k\widetilde{G}_{i})\) whose rows are labelled by the unipotent characters is as follows:
\[\begin{array}{c|cccc}&k_{\widetilde{G}_{i}}&\widetilde{S}_{i}&\widetilde{T} _{i}\\ \hline 1_{\widetilde{G}_{i}}&1&.&.\\ \widetilde{\chi}_{q_{i}^{2}+q_{i}}&.&1&.\\ \widetilde{\chi}_{q_{i}^{3}}&1&.&1\end{array}\]
(This is the case \(\Delta_{3}\) with \(n=3\), \(e=2\) and \(p\geq 2\).)
We start by describing some trivial source modules belonging to the principal \(2\)-block of \(\mathrm{SL}_{3}(q_{i})\) which we will use in the sequel.
**Lemma 6.2**.: _The principal block \(B_{0}(k{\sf G}_{i})\) contains, amongst others, the following trivial source modules:_
1. _the trivial module_ \(k_{{\sf G}_{i}}\)_, with vertex_ \(P\) _and affording the trivial character_ \(1_{{\sf G}_{i}}\) _;_
2. _the simple module_ \(S_{i}\)_, having_ \(Q:=C_{2^{n}}\times C_{2^{n}}\leq P\) _as a vertex, and affording the character_ \(\chi_{q_{i}^{2}+q_{i}}\) _;_
3. _the Scott module_ \(\text{Sc}({\sf G}_{i},Q)\) _with vertex_ \(Q\)_, satisfying_ \(Sc({\sf G}_{i},Q)\not\cong S_{i}\)_;_
4. _the Scott module_ \(\operatorname{Sc}(\mathsf{G}_{i},\mathbb{B}_{i})\) _on a Borel subgroup_ \(\mathbb{B}_{i}\) _of_ \(\mathsf{G}_{i}\)_, which is uniserial with composition series_ \[\operatorname{Sc}(\mathsf{G}_{i},\mathbb{B}_{i})=\begin{array}{c}k_{\mathsf{G}_{i}}\\ T_{i}\\ k_{\mathsf{G}_{i}}\end{array}\] _and affords the character_ \(1_{\mathsf{G}_{i}}+\chi_{q_{i}^{3}}\)_._
Proof.: First we note that it is clear that all the given modules belong to the principal block as at least one of their constituents obviously does.
(a) It is clear that the trivial module is a trivial source module with vertex \(P\) affording the trivial character.
(b) As the restriction of a trivial source module is always a trivial source module, to prove that \(S_{i}\) is a trivial source module affording \(\chi_{q_{i}^{2}+q_{i}}\), it is enough to prove that the \(k\widetilde{G}_{i}\)-module \(\widetilde{S}_{i}\) is a trivial source module affording \(\widetilde{\chi}_{q_{i}^{2}+q_{i}}\). (See e.g. [1, §4] for these properties.) Now, [1, pp. 228-229] shows that \(1_{\widetilde{G}_{i}}+\widetilde{\chi}_{q_{i}^{2}+q_{i}}\) is a permutation character. More precisely, there exists a subgroup \(\widetilde{H}_{i}\leq\widetilde{G}_{i}\) such that \(\widetilde{H}_{i}\cong(C_{q_{i}}\times C_{q_{i}})\rtimes\text{GL}_{2}(q_{i})\), \(|\widetilde{G}_{i}:\widetilde{H}_{i}|=1+q_{i}+q_{i}^{2}\) and \(1_{\widetilde{H}_{i}}{\uparrow}^{\widetilde{G}_{i}}=1_{\widetilde{G}_{i}}+\widetilde{\chi}_{q_{i}^{2}+q_{i}}\). Thus, setting \(X_{i}:=k_{\widetilde{H}_{i}}{\uparrow}^{\widetilde{G}_{i}}\), the decomposition matrix given in Notation 6.1 implies that
\[X_{i}=k_{\widetilde{G}_{i}}+\widetilde{S}_{i}\text{ (as composition factors)}\,.\]
Then \(X_{i}=k_{\widetilde{G}_{i}}\oplus\widetilde{S}_{i}\) as \(k_{\widetilde{G}_{i}}\) must occur as a composition factor of the socle and of the head, proving that \(\widetilde{S}_{i}\) is a trivial source module affording the character \(\widetilde{\chi}_{q_{i}^{2}+q_{i}}\). Finally, using [1, II Lemma 12.6(iii)] and the character table of \(\mathrm{SL}_{3}(q_{i})\) in [11] we can read from the values of the character \(\chi_{q_{i}^{2}+q_{i}}\) at non-trivial \(2\)-elements that \(Q=C_{2^{n}}\times C_{2^{n}}\leq P\) is a vertex of \(S_{i}\).
(c) The Scott module \(\text{Sc}({\sf G}_{i},Q)\) is a trivial source module with vertex \(Q\) and clearly \(S_{i}\not\cong\text{Sc}({\sf G}_{i},Q)\), as a Scott module always has a trivial constituent in its head by definition.
(d) [11, pp. 228-229] also shows that \(1_{\widetilde{G}_{i}}+2\widetilde{\chi}_{q_{i}^{2}+q_{i}}+\widetilde{\chi}_{q_{i}^{3}}\) is a permutation character. More precisely, there is a Borel subgroup \(\widetilde{\mathbb{B}}_{i}\leq\widetilde{G}_{i}\) such that \(1_{\widetilde{\mathbb{B}}_{i}}{\uparrow}^{\widetilde{G}_{i}}=1_{\widetilde{G}_{i}}+2\widetilde{\chi}_{q_{i}^{2}+q_{i}}+\widetilde{\chi}_{q_{i}^{3}}\). Setting \(Y_{i}:=k_{\widetilde{\mathbb{B}}_{i}}{\uparrow}^{\widetilde{G}_{i}}\) we obtain from the decomposition matrix in Notation 6.1 that
\[Y_{i}=2\times k_{\widetilde{G}_{i}}+2\times\widetilde{S}_{i}+\widetilde{T}_{i} \quad\text{(as composition factors)}.\]
As both \(Y_{i}\) and \(\widetilde{S}_{i}\) are trivial source modules, we have
\[\dim_{k}\operatorname{Hom}_{k\widetilde{G}_{i}}(Y_{i},\widetilde{S}_{i})= \dim_{k}\operatorname{Hom}_{k\widetilde{G}_{i}}(\widetilde{S}_{i},Y_{i})= \langle 1_{\widetilde{G}_{i}}+2\widetilde{\chi}_{q_{i}^{2}+q_{i}}+\widetilde{ \chi}_{q_{i}^{3}},\widetilde{\chi}_{q_{i}^{2}+q_{i}}\rangle_{\widetilde{G}_{i} }=2\]
(see [10, II Theorem 12.4(iii)]), implying that \(\widetilde{S}_{i}\oplus\widetilde{S}_{i}\mid\operatorname{soc}(Y_{i})\) and \(\widetilde{S}_{i}\oplus\widetilde{S}_{i}\mid\operatorname{hd}(Y_{i})\). Thus, there exists a submodule \(U_{i}\) of \(Y_{i}\) such that \(Y_{i}\cong\widetilde{S}_{i}\oplus\widetilde{S}_{i}\oplus U_{i}\) and hence \(U_{i}\) is a trivial source module with composition factors \(2\times k_{\widetilde{G}_{i}}+\widetilde{T}_{i}\) and \(U_{i}\) affords the ordinary character \(1_{\widetilde{G}_{i}}+\widetilde{\chi}_{q_{i}^{3}}\). Applying [10, II Theorem 12.4(iii)] again, we get
\[\dim_{k}\operatorname{Hom}_{k\widetilde{G}_{i}}(U_{i},U_{i})=\langle 1_{\widetilde{G}_{i}}+\widetilde{\chi}_{q_{i}^{3}},1_{\widetilde{G}_{i}}+\widetilde{\chi}_{q_{i}^{3}}\rangle_{\widetilde{G}_{i}}=2\]
and
\[\dim_{k}\operatorname{Hom}_{k\widetilde{G}_{i}}(k_{\widetilde{G}_{i}},U_{i})=\langle 1_{\widetilde{G}_{i}},1_{\widetilde{G}_{i}}+\widetilde{\chi}_{q_{i}^{3}}\rangle_{\widetilde{G}_{i}}=1=\dim_{k}\operatorname{Hom}_{k\widetilde{G}_{i}}(U_{i},k_{\widetilde{G}_{i}})\,.\]
It follows that
\[U_{i}=\begin{array}{c}k_{\widetilde{G}_{i}}\\ \widetilde{T}_{i}\\ k_{\widetilde{G}_{i}}\end{array}=\operatorname{Sc}(\widetilde{G}_{i},\widetilde{\mathbb{B}}_{i})\]
and setting \(\mathbb{B}_{i}:=\widetilde{\mathbb{B}}_{i}\cap\mathsf{G}_{i}\) yields assertion (d).
We can now prove Theorem 1.2(b) for the groups of types (W5(n)).
**Proposition 6.3**.: _The Scott module \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\)._
Proof.: Below \(i\in\{1,2\}\). First, we observe that by Lemma 4.4, \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between the principal blocks \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\) if and only if \(\operatorname{Sc}(\mathsf{G}_{1}\times\mathsf{G}_{2},\Delta P)=:M\) induces a splendid Morita equivalence between \(B_{1}:=B_{0}(k\mathsf{G}_{1})\) and \(B_{2}:=B_{0}(k\mathsf{G}_{2})\). Thus, we may work with \(\mathsf{G}_{i}\) instead of \(G_{i}\) (for \(i\in\{1,2\}\)). Now, observe that \(\mathcal{F}_{P}(\mathsf{G}_{1})=\mathcal{F}_{P}(\mathsf{G}_{2})\) and all involutions in \(\mathsf{G}_{i}\) are \(\mathsf{G}_{i}\)-conjugate (see e.g. [13, Theorem 5.3] and [1, Proposition 2 on p.11]). Thus, it follows that it now suffices to prove that Conditions (I) and (II) of Theorem 4.5 hold.
**Condition (I)**. By the above we only need to consider one involution in \(P\), so we choose an involution \(z\in Z(P)\), and set \(C_{i}:=C_{\mathsf{G}_{i}}(z)\). Clearly, \(C_{i}\cong\operatorname{GL}_{2}(q_{i})\) and again, up to identification, we see \(P\in\operatorname{Syl}_{2}(\mathcal{G}_{1})\cap\operatorname{Syl}_{2}( \mathcal{G}_{2})\) (see Remark 5.2). We have to prove that \(M(\Delta(z))\) induces a Morita equivalence between \(B_{0}(kC_{1})\) and \(B_{0}(kC_{2})\). Now, recall that \(M_{z}:=\operatorname{Sc}(C_{1}\times C_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kC_{1})\) and \(B_{0}(kC_{2})\) by Proposition 5.3(a). Moreover, obviously, it is always true that \(M_{z}\mid M(\Delta\langle z\rangle)\), and we obtain that equality \(M_{z}=M(\Delta\langle z\rangle)\) holds by the Brauer indecomposability of \(M\) proved in [13, Theorem 1.1]. Thus Condition (I) is verified.
**Condition (II)**. We have to prove that the functor \(-\otimes_{B_{1}}M\) maps the simple \(B_{1}\)-modules to the simple \(B_{2}\)-modules. First, we have \(k_{\mathsf{G}_{1}}\otimes_{B_{1}}M\cong k_{\mathsf{G}_{2}}\) by [13, Lemma 3.4(a)]. Next, as \(N_{\mathsf{G}_{i}}(Q)/Q\cong\mathfrak{S}_{3}\), there are precisely \(|\mathfrak{S}_{3}|_{2}=2\) non-isomorphic indecomposable trivial source \(k\mathsf{G}_{i}\)-modules with vertex \(Q\) (see e.g. [14, Theorem 4.6(c)]), namely the modules \(\operatorname{Sc}(\mathsf{G}_{i},Q)\) and \(S_{i}\), both belonging to the principal block by Lemma 6.2. Now, on the one hand, we know
from [13, Theorem 2.1(a)] that \(S_{1}\otimes_{B_{1}}M=:V\) is indecomposable and non-projective, and on the other hand we know from [13, Lemma 3.4(b)] that \(V\) is a trivial source module with vertex \(Q\). Thus \(V\) is either \(\operatorname{Sc}(\mathsf{G}_{2},Q)\) or \(S_{2}\). However, \(\operatorname{Sc}(\mathsf{G}_{1},Q)\otimes_{B_{1}}M\cong\operatorname{Sc}( \mathsf{G}_{2},Q)\oplus(\mathsf{proj})\) by [13, Lemma 3.4(c)]. Hence, it follows immediately that
\[S_{1}\otimes_{B_{1}}M\cong S_{2}\,.\]
It remains to treat \(T_{1}\). By our assumption, \((q_{1}-1)_{2}=(q_{2}-1)_{2}=2^{n}\), so the Sylow \(2\)-subgroups of \(\mathbb{B}_{1}\) and \(\mathbb{B}_{2}\) are isomorphic, meaning that the Scott modules \(\operatorname{Sc}(\mathsf{G}_{1},\mathbb{B}_{1})\) and \(\operatorname{Sc}(\mathsf{G}_{2},\mathbb{B}_{2})\) have isomorphic vertices (see e.g. [14, Corollary 4.8.5]). Therefore, [13, Lemma 3.4(c)] together with Lemma 6.2(d) yield
\[\boxed{\begin{array}{c}k_{\mathsf{G}_{1}}\\ T_{1}\\ k_{\mathsf{G}_{1}}\end{array}}\otimes_{B_{1}}M=\operatorname{Sc}(\mathsf{G}_{1},\mathbb{B}_{1})\otimes_{B_{1}}M\cong\operatorname{Sc}(\mathsf{G}_{2},\mathbb{B}_{2})\oplus(\mathsf{proj})=\boxed{\begin{array}{c}k_{\mathsf{G}_{2}}\\ T_{2}\\ k_{\mathsf{G}_{2}}\end{array}}\oplus(\mathsf{proj})\]
and Lemma 4.7 implies that \(T_{1}\otimes_{B_{1}}M\cong T_{2}\oplus(\mathsf{proj})\,\). However, again [13, Theorem 2.1(a)] tells us that \(T_{1}\otimes_{B_{1}}M\) is indecomposable non-projective, proving that
\[T_{1}\otimes_{B_{1}}M\cong T_{2}\,.\]
Thus, Condition (II) is verified and the proposition is proved.
## 7 Groups of type (W6(n))
Finally, we examine the groups of type (W6(n)), and we continue using Hypothesis 5.1. Our aim is to prove Theorem 1.2(b) for such groups. In order to reach this aim, we first collect some information about the principal \(2\)-block of \(\operatorname{PGU}_{3}(q)\) and about some of its modules.
**Notation 7.1**.: Throughout this section, given a positive power \(q\) of a prime number satisfying \((q+1)_{2}=2^{n}\), we set the following notation. The \(3\)-dimensional general unitary group is
\[\operatorname{GU}_{3}(q)=\{(a_{rs})\in\operatorname{GL}_{3}(q^{2})\mid(a_{sr} )w_{0}(a_{rs}^{q})=w_{0}\}\qquad\text{ with }\quad w_{0}:=\left(\begin{smallmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{smallmatrix}\right),\]
\(\mathsf{G}:=\mathsf{G}(q):=\operatorname{PGU}_{3}(q)=\operatorname{GU}_{3}(q)/ Z(\operatorname{GU}_{3}(q))\) where \(Z(\operatorname{GU}_{3}(q))\) consists of the scalar matrices in \(\operatorname{GU}_{3}(q)\), and \(\operatorname{PSU}_{3}(q)=:G(q)\) is the commutator subgroup of \(\operatorname{PGU}_{3}(q)\), which is a normal subgroup of index \((3,q+1)\). Furthermore, we let \(\mathbb{B}:=\mathbb{B}(q)\) denote the Borel subgroup of \(\operatorname{GU}_{3}(q)\) defined by \(\mathbb{B}(q)=\mathbb{T}(q)\mathbb{U}(q)\), with \(\mathbb{T}(q):=\{\operatorname{diag}(\zeta^{-1},1,\zeta^{q})\mid\zeta\in \mathbb{F}_{q^{2}}^{\times}\}\) and
\[\mathbb{U}(q):=\left\{\left(\begin{smallmatrix}1&0&0\\ \alpha&1&0\\ \beta&-\alpha^{q}&1\end{smallmatrix}\right)\in\operatorname{GU}_{3}(q)\mid \alpha,\beta\in\mathbb{F}_{q^{2}}\text{ and }\alpha^{q+1}+\beta^{q}+\beta=0\right\}.\]
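Indeed, writing \(A:=\left(\begin{smallmatrix}1&0&0\\ \alpha&1&0\\ \beta&-\alpha^{q}&1\end{smallmatrix}\right)\) and letting \(A^{(q)}\) denote the matrix obtained from \(A\) by raising all entries to the \(q\)-th power, a direct computation gives
\[{}^{t}\!A\,w_{0}\,A^{(q)}=\left(\begin{smallmatrix}\beta+\beta^{q}+\alpha^{q+1}&0&1\\ 0&1&0\\ 1&0&0\end{smallmatrix}\right),\]
so that \(A\in\operatorname{GU}_{3}(q)\) exactly when \(\alpha^{q+1}+\beta^{q}+\beta=0\), which is the displayed condition.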
It is clear that \(\mathbb{B}\cap Z(\operatorname{GU}_{3}(q))=1\), thus we may, and we do, identify \(\mathbb{B}\) with a subgroup of \(\operatorname{PGU}_{3}(q)\).
Next, we observe that [12, Theorem 1A] gives us the number of ordinary characters in the principal \(2\)-block of \(G\) and their degrees. Moreover, using [10, Table 1.1 and Table 3.1], or CHEVIE [11] it is easy to compute central characters and we have that \(B_{0}(k\mathsf{G})\) contains the following ordinary irreducible characters, in the notation
of [10]:
\[\begin{array}{l|l|l}
&\text{condition}&\text{number of characters}\\
\hline
1_{\mathsf{G}}&&1\\
\chi_{q(q-1)}&&1\\
\chi_{q^{3}}&&1\\
\hline
\chi_{q^{2}-q+1}^{(u)}&u\equiv 0\ (\text{mod }(q+1)_{2^{\prime}})&2^{n}-1\\
\hline
\chi_{q(q^{2}-q+1)}^{(u)}&u\equiv 0\ (\text{mod }(q+1)_{2^{\prime}})&2^{n}-1\\
\hline
\chi_{(q-1)(q^{2}-q+1)}^{(u,v)}&u,v\equiv 0\ (\text{mod }(q+1)_{2^{\prime}})&(2^{n}-1)(2^{n-1}-1)/3\\
\hline
\chi_{q^{3}+1}^{(u)}&u\equiv 0\ (\text{mod }(q+1)_{2^{\prime}})&2^{n-1}\\
\end{array}\]
where the subscripts denote the degrees. Finally, the principal block of \(k\mathsf{G}\) contains precisely three pairwise non-isomorphic simple modules and we write
\[\operatorname{Irr}_{k}(B_{0}(k\mathsf{G}))=\{k_{\mathsf{G}},\varphi,\theta\}\]
as in [12, Theorem 4.1] where the simples and their Brauer characters are identified for simplicity.
**Lemma 7.2**.: _With the notation of Notation 7.1, the decomposition matrix of the principal \(2\)-block of \(\mathsf{G}=\operatorname{PGU}_{3}(q)\) is as follows:_
\[\begin{array}{l|cccc}&k_{\mathsf{G}}&\varphi&\theta&\text{number of characters}\\ \hline 1_{\mathsf{G}}&1&.&.&1\\ \chi_{q(q-1)}&.&1&.&1\\ \chi_{q^{3}}&1&2&1&1\\ \hline\chi_{q^{2}-q+1}^{(u)}&1&1&.&2^{n}-1\\ \hline\chi_{q(q^{2}-q+1)}^{(u)}&1&1&1&2^{n}-1\\ \hline\chi_{(q-1)(q^{2}-q+1)}^{(u,v)}&.&.&1&(2^{n}-1)(2^{n-1}-1)/3\\ \hline\chi_{q^{3}+1}^{(u)}&2&2&1&2^{n-1}\\ \end{array}\]
Proof.: To start with, [12, Appendix] gives us the unipotent part of the decomposition matrix. (See also [1, Table 4.5].) Then, direct computations using [10, Table 1.1 and Table 3.1] (see also [12]) or CHEVIE [13] yield the remaining entries. In particular, it follows easily from the character table that any two irreducible characters of the same degree have the same reduction modulo \(2\).
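As a consistency check, the degree of each ordinary character must equal the sum of the degrees \(\dim_{k}k_{\mathsf{G}}=1\), \(\dim_{k}\varphi=q(q-1)\) and \(\dim_{k}\theta=(q-1)(q^{2}-q+1)\) of its modular constituents, weighted by the decomposition numbers; for instance,
\[q^{3}=1+2\,q(q-1)+(q-1)(q^{2}-q+1)\qquad\text{and}\qquad q^{3}+1=2+2\,q(q-1)+(q-1)(q^{2}-q+1).\]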
**Corollary 7.3**.: _The \(B_{0}(k\mathsf{G})\)-simple modules \(\varphi\) and \(\theta\) are not trivial source modules._
Proof.: It follows from the decomposition matrix of \(B_{0}(k\mathsf{G})\) in Lemma 7.2 that \(\varphi\) and \(\theta\) are liftable modules. Moreover, any lift of \(\varphi\) to an \(\mathcal{O}\mathsf{G}\)-lattice affords the unipotent character \(\chi_{q(q-1)}\), and any lift of \(\theta\) to an \(\mathcal{O}\mathsf{G}\)-lattice affords one of the characters \(\chi_{(q-1)(q^{2}-q+1)}^{(u,v)}\) of degree \((q-1)(q^{2}-q+1)\). However, it follows from [10, II Theorem 12.4(iii)] that neither \(\chi_{q(q-1)}\) nor the characters \(\chi_{(q-1)(q^{2}-q+1)}^{(u,v)}\) can be the characters of trivial source modules, because it is easily checked from the character table that these characters take strictly negative values at some \(2\)-elements. (See e.g. [10, Table 3.1].)
Next we collect useful information about the permutation module \(k_{\mathbb{B}}\mathord{\uparrow}^{\mathsf{G}}\) and the \(2\)nd Heller translate \(\Omega^{2}(k_{\mathsf{G}})\); this is based on ideas of [11, pp. 259-260 and p. 263] and complements the information provided in [12, pp. 227-228].
**Lemma 7.4**.: _Assume \(\mathsf{G}=\mathrm{PGU}_{3}(q)\) and set \(X:=\Omega^{2}(k_{\mathsf{G}})\). Then, the following assertions hold:_
1. _the permutation module_ \(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\) _is a trivial source module affording the ordinary character_ \(1_{\mathbb{B}}\!\uparrow^{\mathsf{G}}=1_{\mathsf{G}}+\chi_{q^{3}}\) _and satisfying_ \[k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\ =\ \boxed{\begin{array}{c}k_{\mathsf{G}}\\ \varphi\\ \theta\\ \varphi\\ k_{\mathsf{G}}\end{array}}\ =\ \mathrm{Sc}(\mathsf{G},\mathbb{B})\ =\ \mathrm{Sc}(\mathsf{G},Q)\] _where_ \(Q\in\mathrm{Syl}_{2}(\mathbb{B})\) _is such that_ \(Q\cong C_{2^{n+1}}\) _and we may assume that_ \(Q\leq P\)_;_
2. _no indecomposable direct summand_ \(U\) _of_ \(\varphi\!\downarrow_{\mathbb{B}}\) _or_ \(\theta\!\downarrow_{\mathbb{B}}\) _belongs to_ \(B_{0}(k\mathbb{B})\)_;_
3. \(\mathrm{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},k_{\mathsf{G}})=0\)_;_
4. \(\dim_{k}\mathrm{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\varphi)=\dim_{k} \mathrm{Ext}^{1}_{k\mathsf{G}}(\varphi,k_{\mathsf{G}})=1\)_;_
5. \(\mathrm{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\theta)=\mathrm{Ext}^{1}_{k \mathsf{G}}(\theta,k_{\mathsf{G}})=0\)_;_
6. \(\mathrm{hd}(\Omega(k_{\mathsf{G}}))=\varphi\) _and so there exists a surjective_ \(k\mathsf{G}\)_-homomorphism_ \(P(\varphi)\twoheadrightarrow\Omega(k_{\mathsf{G}})\)_;_
7. \(X\) _lifts to an_ \(\mathcal{O}\mathsf{G}\)_-lattice which affords the character_ \(\chi_{q(q-1)}+\chi_{q^{3}}\)_, and_ \(X=k_{\mathsf{G}}+3\times\varphi+\theta\) _as composition factors;_
8. \(\mathrm{soc}(X)\cong\varphi\) _and_ \(\varphi\mid\mathrm{hd}(X)\) _;_
9. \(\dim_{k}\mathrm{Hom}_{k\mathsf{G}}(X,k_{\mathbb{B}}\!\uparrow^{\mathsf{G}})= \dim_{k}\mathrm{Hom}_{k\mathsf{G}}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}},X)=1\)_;_
10. \(k_{\mathsf{G}}\not\mid\mathrm{soc}^{2}(X)\)_;_
11. \(X\) _has a uniserial_ \(k\mathsf{G}\)_-submodule_ \(Z\cong k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}/\mathrm{soc}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}})\) _with Loewy series_ \[\boxed{\begin{array}{c}k_{\mathsf{G}}\\ \varphi\\ \theta\\ \varphi\end{array}}\] _and hence if_ \(Y:=\mathrm{rad}(Z)=\boxed{\begin{array}{c}\varphi\\ \theta\\ \varphi\end{array}}\) _then_ \(X/Y\) _is of the form_ \(\boxed{\begin{array}{c}\varphi\\ k_{\mathsf{G}}\end{array}}\) _or of the form_ \(k_{\mathsf{G}}\oplus\varphi\)_._
Proof.: (a) The claim about the structure of \(Q\) is clear from the structure of \(\mathbb{B}\). Hence, it is clear that \(\mathrm{Sc}(\mathsf{G},\mathbb{B})=\mathrm{Sc}(\mathsf{G},Q)\) (see e.g. [17, Corollary 4.8.5]). The claim about \(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\) being uniserial with the given Loewy series and the given ordinary character is given by [11, Theorem 4.1(c) and Appendix (pp. 238-241)]. Then, as \(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\) is indecomposable, and \(\mathrm{Sc}(\mathsf{G},\mathbb{B})\) is an indecomposable direct summand of \(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\) by definition, certainly \(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}=\mathrm{Sc}(\mathsf{G},\mathbb{B})\).
(b) Suppose that \(U\mid\varphi\!\downarrow_{\mathbb{B}}\) and \(U\) lies in \(B_{0}(k\mathbb{B})\). Since \(\mathbb{B}\) is \(2\)-nilpotent, its principal block is nilpotent and so \(\mathrm{Irr}_{k}(B_{0}(k\mathbb{B}))=\{k_{\mathbb{B}}\}\). All the composition factors of \(U\) are isomorphic to \(k_{\mathbb{B}}\) as they must lie in \(B_{0}(k\mathbb{B})\). Thus, \(0\neq\mathrm{Hom}_{k\mathbb{B}}(U,k_{\mathbb{B}})\) and Frobenius reciprocity yields
\[0\neq\mathrm{Hom}_{k\mathbb{B}}(\varphi\!\downarrow_{\mathbb{B}},k_{\mathbb{B}} )\cong\mathrm{Hom}_{kG}(\varphi,k_{\mathbb{B}}\!\uparrow^{\mathsf{G}})\,,\]
proving that \(\varphi\) is a constituent of the soc of \(k_{\mathbb{B}}\!\uparrow^{G}\). This contradicts (a) and so the first claim follows. The claim about \(\theta\) is proved analogously.
(c) By [13, I Corollary 10.13], \(\mathrm{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},k_{\mathsf{G}})=0\) as \(O^{2}(\mathsf{G})=\mathsf{G}\).
(d) First, it is immediate from (a) that \(\dim_{k}\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\varphi)\geq 1\). Now, suppose that \(\dim_{k}\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\varphi)\geq 2\). Then, there exists a non-split short exact sequence
\[0\to\varphi\to V\to k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\to 0\]
of \(k\mathsf{G}\)-modules, i.e. \(\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}, \varphi)\neq 0\). However, by the Eckmann-Shapiro Lemma,
\[\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}, \varphi)\cong\operatorname{Ext}^{1}_{k\mathbb{B}}(k_{\mathbb{B}},\varphi\!\downarrow_{\mathbb{B}})\,,\]
which is zero by (b). This is a contradiction and so it follows that \(\dim_{k}\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\varphi)=1\). Moreover, \(\dim_{k}\operatorname{Ext}^{1}_{k\mathsf{G}}(\varphi,k_{\mathsf{G}})=1\) as well by the self-duality of \(k_{\mathsf{G}}\) and \(\varphi\).
(e) Suppose that \(\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\theta)\!\neq\!0\). Then, with arguments similar to those used in the proof of (d), we obtain that \(\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}, \theta)\!\neq\!0\), which contradicts (b). Again, as \(k_{\mathsf{G}}\) and \(\theta\) are self-dual, it follows that \(\operatorname{Ext}^{1}_{k\mathsf{G}}(k_{\mathsf{G}},\theta)=0\) as well.
(f) Since \(\operatorname{Irr}_{k}(B_{0}(k\mathsf{G}))=\{k_{\mathsf{G}},\varphi,\theta\}\), it follows from (c), (d) and (e) that the second Loewy layer of \(P(k_{\mathsf{G}})\) consists just of the simple module \(\varphi\), with multiplicity \(1\). Thus, the claim follows from the fact that \(\Omega(k_{\mathsf{G}})=P(k_{\mathsf{G}})\!\cdot\!\operatorname{rad}(kG)\).
(g) First, it is well-known that \(X\) lifts to an \(\mathcal{O}\mathsf{G}\)-lattice (see e.g. [10, SS7.3]). Moreover, by (f) we have that \(\Omega^{2}(k_{\mathsf{G}})\) is the kernel of a short exact sequence of \(k\mathsf{G}\)-modules of the form
\[0\to\Omega^{2}(k_{\mathsf{G}})\to P(\varphi)\to\Omega(k_{\mathsf{G}})\to 0\,.\]
Thus, in the Grothendieck ring of \(kG\), we have
\[\Omega^{2}(k_{\mathsf{G}})=P(\varphi)-\Omega(k_{\mathsf{G}})=P(\varphi)-P(k _{\mathsf{G}})\!\cdot\!\operatorname{rad}(k\mathsf{G})=P(\varphi)-(P(k_{ \mathsf{G}})-k_{\mathsf{G}})\,.\]
Using the decomposition matrix of \(B_{0}(k\mathsf{G})\) given in Lemma 7.2, we obtain that the character afforded by \(\Omega^{2}(k_{\mathsf{G}})\) is \(\chi_{q(q-1)}+\chi_{q^{3}}\), and the composition factors of \(X\) as claimed.
(h) It is clear that \(\operatorname{soc}(X)\cong\varphi\) as \(\Omega^{2}(k_{\mathsf{G}})\) is a submodule of \(P(\varphi)\) by the proof of assertion (g). Now, by Lemma 7.2, any lift of \(\varphi\) affords the character \(\chi_{q(q-1)}\). Thus, by [10, I Theorem 17.3], \(X\) has a pure submodule \(Y\) affording the Steinberg character \(\chi_{q^{3}}\). Then, \(X/Y\cong\varphi\), proving the claim.
(i) It follows from Frobenius reciprocity that
\[\operatorname{Hom}_{k\mathsf{G}}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}},X) \cong\operatorname{Hom}_{k\mathsf{B}}(k_{\mathbb{B}},X\!\downarrow_{\mathbb{B }}).\]
Now, as \(\operatorname{Irr}_{k}(B_{0}(k\mathsf{G}))=\{k_{\mathsf{G}},\varphi,\theta\}\) and by assertion (g) we have that \(k_{\mathsf{G}}\) has multiplicity one as a composition factor of \(X\), it follows from (b) that
\[\operatorname{Hom}_{k\mathbb{B}}(k_{\mathbb{B}},X\!\downarrow_{\mathbb{B}}) \cong\operatorname{Hom}_{k\mathbb{B}}(k_{\mathbb{B}},k_{\mathsf{G}}\!\downarrow_ {\mathbb{B}})\cong k\]
as \(k\)-vector spaces. The second equality is obtained analogously.
(j) Consider the Auslander-Reiten sequence \((\mathcal{E}):0\to X\stackrel{{ g}}{{\to}}E\stackrel{{ \pi}}{{\to}}k_{\mathsf{G}}\to 0\) starting at \(X=\Omega^{2}(k_{\mathsf{G}})\) (and hence ending at \(k_{\mathsf{G}}\)). (See e.g. [11, SS34] for this notion.) By (d) there exists a uniserial module of length \(2\) of the form
\[\boxed{\begin{array}{c}k_{\mathsf{G}}\\ \varphi\end{array}}=:Y\,.\]
Consider the quotient homomorphism \(\rho:Y\to Y/\varphi\cong k_{\mathsf{G}}\), which is obviously not a split-epi. Hence there exists a \(k\mathsf{G}\)-homomorphism \(\alpha:Y\to E\) with \(\pi\circ\alpha=\rho\). Next we claim that \(\ker(\alpha)\neq\operatorname{soc}(Y)\). So assume \(\ker(\alpha)=\operatorname{soc}(Y)\). Then, \(E\geq\operatorname{im}(\alpha)\cong k_{\mathsf{G}}\), proving that \(\operatorname{im}(\alpha)\leq\operatorname{soc}(E)\) (as it is simple). On the other hand, by (h), \(\operatorname{soc}(X)=\varphi\), implying that \(\operatorname{im}(\alpha)\cap\operatorname{soc}(X)=0\). Thus, identifying \(X\) with its image in \(E\), we get that \(\operatorname{im}(\alpha)\cap X=0\) as \(\operatorname{im}(\alpha)\) is simple. (Use here the same argument as in the last five
lines of the proof of Lemma 4.6.) Hence, \(E\) has a submodule of the form \(\operatorname{im}(\alpha)\oplus X\), which implies that \(E=\operatorname{im}(\alpha)\oplus X\) as we can read from the s.e.s. \((\mathcal{E})\) that they have the same \(k\)-dimension. Thus, it follows from [1, Lemma 6.12] that the sequence \((\mathcal{E})\) splits, which is a contradiction and the claim follows. Next, since \(\alpha\neq 0\), it follows that \(\ker(\alpha)=0\), that is, \(\alpha\) is injective. Hence, \(\operatorname{im}(\alpha)\cong Y\). Now, suppose that \(k_{\mathsf{G}}\,|\operatorname{soc}^{2}(X)\). Set \(W:=\operatorname{soc}^{2}(X)+\operatorname{im}(\alpha)\leq E\). Note that \(\operatorname{im}(\alpha)\not\leq X\) since \(X=\ker(\pi)\), so that \(\operatorname{im}(\alpha)\not\leq\operatorname{soc}^{2}(X)\). Hence \(\operatorname{soc}^{2}(X)+\operatorname{im}(\alpha)\) has the following socle series
\[\begin{bmatrix}k_{\mathsf{G}}&k_{\mathsf{G}}\\ \varphi\end{bmatrix}\,,\]
since by Lemma 4.6 we have \(\operatorname{soc}(E)=\operatorname{soc}(X)\cong\varphi\), where the last isomorphism holds by (h). This is a contradiction to (d), and so the claim follows.
(k) It follows from assertions (i) and (a) that
\[1=\dim_{k}\operatorname{Hom}_{k\mathsf{G}}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}},X)=\dim_{k}\operatorname{Hom}_{k\mathsf{G}}\!\left(\;\boxed{\begin{array}{c}k_{\mathsf{G}}\\ \varphi\\ \theta\\ \varphi\\ k_{\mathsf{G}}\end{array}}\,,\,X\right).\]
Since \(\operatorname{soc}(X)\cong\varphi\) by (h) and \(k_{\mathsf{G}}\not\mid\operatorname{soc}^{2}(X)\) by (j), a non-zero \(k\mathsf{G}\)-homomorphism \(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}\to X\) must have kernel exactly \(\operatorname{soc}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}})\), so that its image is a submodule of \(X\) isomorphic to \(Z:=k_{\mathbb{B}}\!\uparrow^{\mathsf{G}}/\operatorname{soc}(k_{\mathbb{B}}\!\uparrow^{\mathsf{G}})\), which is uniserial with the stated Loewy series by (a). Finally, comparing composition factors with (g), \(X/Y\) has composition factors \(k_{\mathsf{G}}\) and \(\varphi\), whence the two stated possibilities for \(X/Y\).
**Proposition 7.5**.: _Assume that \(G_{1}\) and \(G_{2}\) both belong to the family \((\mathsf{W6}(n))\). Then \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between the principal blocks \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\)._

Proof.: As in the proof of Proposition 6.3, it suffices to prove the corresponding statement for \(\mathsf{G}_{i}:=\mathsf{G}(q_{i})=\operatorname{PGU}_{3}(q_{i})\) (\(i\in\{1,2\}\)).
Set \(M:=\operatorname{Sc}(\mathsf{G}_{1}\times\mathsf{G}_{2},\Delta P)\). For each \(i\ \in\ \{1,2\}\) write \(B_{i}:=B_{0}(k\mathsf{G}_{i})\). Write \(\operatorname{Irr}_{k}(B_{i})=\{k_{\mathsf{G}_{i}},\varphi_{i},\theta_{i}\}\) with \(\dim_{k}\varphi_{i}=q_{i}(q_{i}-1)\) and \(\dim_{k}\theta_{i}=(q_{i}-1)({q_{i}}^{2}-q_{i}+1)\), and set \(\mathbb{B}_{i}:=\mathbb{B}_{i}(q_{i})\) as in Notation 7.1. Moreover, let \(Q_{i}\in\operatorname{Syl}_{2}(\mathbb{B}_{i})\) such that \(Q_{i}\leq P\) and let \(X_{i}:=\Omega^{2}(k_{\mathsf{G}_{i}})\) as in Lemma 7.4. Furthermore, observe that \(\mathcal{F}_{P}(\mathsf{G}_{1})=\mathcal{F}_{P}(\mathsf{G}_{2})\) and all involutions in \(\mathsf{G}_{i}\) are \(\mathsf{G}_{i}\)-conjugate (see e.g. [12, Theorem 5.3] and/or [1, Proposition 2 on p.11]). It follows that it suffices to prove that Conditions (I) and (II) of Theorem 4.5 hold.
**Condition (I)**. A similar argument to the one used in the proof of Proposition 6.3 (Condition I) can be used. In the present case, if \(z\) is an involution in the centre of \(P\), then, by [1, Proposition 4(iii)], \(C_{\mathsf{G}_{i}}(z)=:C_{i}\) is a quotient of \(\operatorname{GU}_{2}(q_{i})\) by a normal subgroup of odd order. Hence, we obtain from Lemma 4.3 and Proposition 5.4 that \(M_{z}:=\operatorname{Sc}(C_{1}\times C_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kC_{1})\) and \(B_{0}(kC_{2})\). Moreover, \(M_{z}=M(\Delta\langle z\rangle)\) by the Brauer indecomposability of \(M\) proved in [13, Theorem 1.1], proving that Condition (I) is verified.
**Condition (II)**. Again, we have to prove that the functor \(-\otimes_{B_{1}}M\) maps the simple \(B_{1}\)-modules to the simple \(B_{2}\)-modules, and again, we have \(k_{\mathsf{G}_{1}}\otimes_{B_{1}}M\cong k_{\mathsf{G}_{2}}\) by [13, Lemma 3.4(a)]. Thus, it remains to prove that \(\varphi_{1}\otimes_{B_{1}}M\cong\varphi_{2}\) and \(\theta_{1}\otimes_{B_{1}}M\cong\theta_{2}\).
First recall from Lemma 7.4(a) that for each \(i\in\{1,2\}\) we have
\[\operatorname{Sc}(\mathsf{G}_{i},Q_{i})=\boxed{\begin{array}{c}k_{\mathsf{G }_{i}}\\ \varphi_{i}\\ \theta_{i}\\ \varphi_{i}\\ k_{\mathsf{G}_{i}}\end{array}}. \tag{3}\]
and moreover by [13, Lemma 3.4(c)] we have
\[\operatorname{Sc}(\mathsf{G}_{1},Q_{1})\otimes_{B_{1}}M\cong\operatorname{Sc} (\mathsf{G}_{2},Q_{2})\oplus(\operatorname{proj})\,.\]
Thus, because we already know that
\[\operatorname{soc}(\operatorname{Sc}(\mathsf{G}_{1},Q_{1}))\otimes_{B_{1}}M=k _{\mathsf{G}_{1}}\otimes_{B_{1}}M\cong k_{\mathsf{G}_{2}}=\operatorname{soc }(\operatorname{Sc}(\mathsf{G}_{2},Q_{2}))\,,\]
the stripping-off method (see Lemma 4.7(a)) yields
\[\boxed{\begin{array}{c}k_{\mathsf{G}_{1}}\\ \varphi_{1}\\ \theta_{1}\\ \varphi_{1}\end{array}}\otimes_{B_{1}}M\ \cong\boxed{\begin{array}{c}k_{\mathsf{G}_{2}}\\ \varphi_{2}\\ \theta_{2}\\ \varphi_{2}\end{array}}\oplus(\operatorname{proj}) \tag{4}\]
where for each \(i\in\{1,2\}\), the latter uniserial module of length \(4\) is defined to be
\[\boxed{\begin{array}{c}k_{\mathsf{G}_{i}}\\ \varphi_{i}\\ \theta_{i}\\ \varphi_{i}\end{array}}:=\operatorname{Sc}(\mathsf{G}_{i},Q_{i})/\operatorname{soc}(\operatorname{Sc}(\mathsf{G}_{i},Q_{i}))=:Z_{i}\]
as in Lemma 7.4(k). Then, applying again the stripping-off method (Lemma 4.7(b) this time) to equation (4) and \(\operatorname{hd}Z_{i}\cong k_{\mathsf{G}_{i}}\ (i\in\{1,2\})\), we obtain that
\[\boxed{\begin{array}{c}\varphi_{1}\\ \theta_{1}\\ \varphi_{1}\end{array}}\otimes_{B_{1}}M\ =\ \boxed{\begin{array}{c}\varphi_{2}\\ \theta_{2}\\ \varphi_{2}\end{array}}\oplus(\operatorname{proj}) \tag{5}\]
where for each \(i\in\{1,2\}\), the latter uniserial module of length \(3\) is defined to be
\[\begin{array}{|c|}\hline\varphi_{i}\\ \theta_{i}\\ \varphi_{i}\end{array}:=\operatorname{rad}(Z_{i})=:Y_{i}\,,\]
again as in Lemma 7.4(k). Now, by the proof of Lemma 7.4(k), we also know that \(Y_{i}\) is (up to identification) a submodule of \(X_{i}\) for each \(i\in\{1,2\}\), and
\[X_{1}\otimes_{B_{1}}M\cong X_{2}\oplus(\operatorname{proj})\]
by [13, Lemma 3.4(d)]. Because of the way we have defined \(X_{i}\) and \(Y_{i}\) (\(i\in\{1,2\}\)) via the stripping-off method, it follows from the exactness of the functor \(-\otimes_{B_{1}}M\) that
\[X_{1}/Y_{1}\otimes_{B_{1}}M\cong(X_{1}\otimes_{B_{1}}M)/(Y_{1}\otimes_{B_{1}} M)\cong X_{2}/Y_{2}\oplus(\operatorname{proj})\,.\]
Lemma 7.4(k) gives, up to isomorphism, two possibilities for \(X_{1}/Y_{1}\) and two possibilities for \(X_{2}/Y_{2}\), namely,
\[\begin{array}{|c|}\hline\varphi_{1}\\ k_{\mathsf{G}_{1}}\end{array}\text{or }k_{\mathsf{G}_{1}}\oplus\varphi_{1}, \text{ and }\begin{array}{|c|}\hline\varphi_{2}\\ k_{\mathsf{G}_{2}}\end{array}\text{or }k_{\mathsf{G}_{2}}\oplus\varphi_{2}, \text{respectively},\]
but in any configuration we can apply the stripping-off method again (Lemma 4.7(a)) to strip off the trivial socle summand of \(X_{1}/Y_{1}\) and \(X_{2}/Y_{2}\) and we obtain that
\[\varphi_{1}\otimes_{B_{1}}M\cong\varphi_{2}\oplus(\operatorname{proj})\,.\]
However, as \(\varphi_{1}\) is simple, \(\varphi_{1}\otimes_{B_{1}}M\) must be indecomposable by [13, Theorem 2.1(a)], proving that \(\varphi_{1}\otimes_{B_{1}}M\cong\varphi_{2}\). Then, we can apply yet again the stripping-off method twice (once Lemma 4.7(a) and once Lemma 4.7(b)) to equation (5) and \(\operatorname{soc}(Y_{i})\), respectively \(\operatorname{hd}(Y_{i})\), (\(i\in\{1,2\}\)) to obtain that
\[\theta_{1}\otimes_{B_{1}}M\cong\theta_{2}\oplus(\operatorname{proj})\,.\]
However, again, as \(\theta_{1}\) is simple, \(\theta_{1}\otimes_{B_{1}}M\) must be indecomposable by [13, Theorem 2.1(a)], eventually proving that \(\theta_{1}\otimes_{B_{1}}M\cong\theta_{2}\).
## 8 Proofs of Theorem 1.2 and Theorem 1.3.
We can now prove our main results, that is, Theorem 1.2 and Theorem 1.3. We recall that \(G\) is a finite group with a fixed Sylow \(2\)-subgroup \(P\cong C_{2^{n}}\wr C_{2}\), where \(n\geq 2\) is a fixed integer.
Proof of Theorem 1.2.: (a) To start with, by Lemma 4.3, we may assume that \(O_{2^{\prime}}(G)=1\) and therefore that \(G\) is one of the groups listed in Theorem 1.1. Furthermore, by Lemma 3.1 and Lemma 4.2, we may also assume that \(O^{2^{\prime}}(G)=G\). Hence, Theorem 1.1, applied a second time, implies that \(G\) belongs to family \((\mathsf{Wj}(n))\) for some \(\mathsf{j}\in\{1,\cdots,\mathsf{6}\}\).
It remains to prove that \(\mathsf{j}\) is uniquely determined. So, suppose that \(G=:G_{1}\) is a finite group belonging to family \((\mathsf{Wj}_{\mathsf{1}}(n))\) for some \(\mathsf{j}_{\mathsf{1}}\in\{1,\cdots,\mathsf{6}\}\) and assume that the following hypothesis is satisfied:
* \(B_{0}(kG_{1})\) is splendidly Morita equivalent to the principal block \(B_{0}(kG_{2})\) of a finite group \(G_{2}\) belonging to family \((\mathsf{Wj}_{\mathsf{2}}(n))\) for some \(\mathsf{j}_{\mathsf{2}}\in\{1,\cdots,\mathsf{6}\}\).
For \(i\in\{1,2\}\) set \(B_{i}:=B_{0}(kG_{i})\), and notice that \((*)\) implies that \(\ell(B_{1})=\ell(B_{2})\) and \(k(B_{1})=k(B_{2})\) because these numbers are invariant under Morita equivalences.
Now, first assume that \(\mathsf{j}_{1}=1\). Then, it follows from Theorem 3.3 that \(\ell(B_{1})=1\) and \(\ell(B_{2})>1\) if \(\mathsf{j}_{2}>1\), contradicting \((*)\). Hence, we have \(\mathsf{j}_{2}=1\) and \(G_{2}\cong G_{1}\).
Assume then that \(\mathsf{j}_{1}=2\). Then, by Theorem 3.3, we have \(\ell(B_{1})=2\) and by \((*)\) we may also assume that \(\mathsf{j}_{2}\in\{2,3,4\}\). If \(\mathsf{j}_{2}\neq 2\), then, as \(n\geq 2\), Theorem 3.3 yields
\[k(B_{1})=(2^{2n-1}+9{\cdot}2^{n-1}+4)/3\neq 2^{2n-1}+2^{n+1}=k(B_{2})\,,\]
also contradicting \((*)\), so that \(\mathsf{j}_{2}=2\) and \(G_{2}\cong G_{1}\).
Suppose next that \(\mathsf{j}_{1}=3\). Then, again by Theorem 3.3, we have \(\ell(B_{1})=2\) and by \((*)\) we may assume that \(\mathsf{j}_{2}\in\{2,3,4\}\). Moreover, by the previous case, we have \(\mathsf{j}_{2}\neq 2\). So, let us assume that \(\mathsf{j}_{2}=4\). We can consider that \(B_{1}=B_{0}(k\operatorname{SL}_{2}^{n}(q_{1}))\) and \(B_{2}=B_{0}(k\operatorname{SU}_{2}^{n}(q_{2}))\) for prime powers \(q_{1}\), \(q_{2}\) such that \((q_{1}-1)_{2}=2^{n}=(q_{2}+1)_{2}\). Then, again, as \(\operatorname{SL}_{2}^{n}(q_{1})\lhd\operatorname{GL}_{2}(q_{1})\) and \(\operatorname{SU}_{2}^{n}(q_{2})\lhd\operatorname{GU}_{2}(q_{2})\) are normal subgroups of odd index, it follows from Lemma 3.1 and Lemma 4.2 that \(B_{0}(k\operatorname{GL}_{2}(q_{1}))\) and \(B_{0}(k\operatorname{GU}_{2}(q_{2}))\) are splendidly Morita equivalent, and so Lemma 4.4 implies that \(B_{0}(k\operatorname{PGL}_{2}(q_{1}))\) and \(B_{0}(k\operatorname{PGU}_{2}(q_{2}))\) are splendidly Morita equivalent. Now, as \(\operatorname{PGL}_{2}(q_{2})\cong\operatorname{PGU}_{2}(q_{2})\), we have that \(\mathcal{B}_{1}:=B_{0}(k\operatorname{PGL}_{2}(q_{1}))\) and \(\mathcal{B}_{2}:=B_{0}(k\operatorname{PGL}_{2}(q_{2}))\) are splendidly Morita equivalent, where \(D_{2^{n+1}}\in\operatorname{Syl}_{2}(\operatorname{PGL}_{2}(q_{1}))\cap\operatorname{Syl}_{2}(\operatorname{PGL}_{2}(q_{2}))\) by the proofs of Proposition 5.3 and Proposition 5.4. However, the conditions on \(q_{1}\) and \(q_{2}\) imply that \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) are in \((5)\) and \((6)\), respectively, in the list of [13, Theorem 1.1], hence cannot be splendidly Morita equivalent. Thus, we have a contradiction, proving that \(\mathsf{j}_{2}=3\) if \(\mathsf{j}_{1}=3\). Moreover, swapping the roles of \(\mathsf{j}_{1}\) and \(\mathsf{j}_{2}\) in the previous argument, we obtain that \(\mathsf{j}_{2}=4\) if \(\mathsf{j}_{1}=4\).
Suppose next that \(\mathsf{j}_{1}=5\). Then, as above, Theorem 3.3 and \((*)\) imply that \(\ell(B_{1})=3\) and \(\mathsf{j}_{2}\in\{5,6\}\). So, assume that \(\mathsf{j}_{2}=6\). Hence, we can consider that \(B_{1}=B_{0}(k\operatorname{PSL}_{3}(q_{1}))\) and \(B_{2}=B_{0}(k\operatorname{PSU}_{3}(q_{2}))\) for prime powers \(q_{1}\) and \(q_{2}\) such that \((q_{1}-1)_{2}=2^{n}=(q_{2}+1)_{2}\). However, \(B_{0}(k\operatorname{PSL}_{3}(q_{1}))\) and \(B_{0}(k\operatorname{PSU}_{3}(q_{2}))\) cannot be splendidly Morita equivalent by Lemma 6.2 and Corollary 7.3, because such an equivalence maps simple modules to simple modules and also trivial source modules to trivial source modules. It follows that \(\mathsf{j}_{2}=5\) if \(\mathsf{j}_{1}=5\). Again, swapping the roles of \(\mathsf{j}_{1}\) and \(\mathsf{j}_{2}\) in the previous argument, we obtain that \(\mathsf{j}_{2}=6\) if \(\mathsf{j}_{1}=6\). Finally, we observe that the claim about the Scott module is immediate by construction.
(b) Assume \(G_{1}\) and \(G_{2}\) both belong to family \((\mathsf{Wj}(\mathsf{n}))\) for a \(\mathsf{j}\in\{3,4,5,6\}\). Then, \(\operatorname{Sc}(G_{1}\times G_{2},\Delta P)\) induces a splendid Morita equivalence between \(B_{0}(kG_{1})\) and \(B_{0}(kG_{2})\) by Propositions 5.3, 5.4, 6.3 and 7.5 for \(\mathsf{j}=3,4,5\) and 6 respectively.
Proof of Theorem 1.3.: It is clear from the definitions that any splendid Morita equivalence is in particular a Morita equivalence. Thus, it only remains to prove that two distinct splendid Morita equivalence classes of principal blocks in Theorem 1.2 do not merge into one Morita equivalence class. In fact, from the numbers \(\ell(B)\) and \(k(B)\) in Theorem 3.3, it suffices to argue that the splendid Morita equivalence classes of principal blocks of groups of type \((\mathsf{W3}(n))\) and \((\mathsf{W4}(n))\), respectively of type \((\mathsf{W5}(n))\) and \((\mathsf{W6}(n))\), do not merge into one Morita equivalence class. In the former case, this is clear from the proof of Theorem 1.2, because otherwise \(B_{0}(k\operatorname{PGL}_{2}(q_{1}))\) and \(B_{0}(k\operatorname{PGL}_{2}(q_{2}))\) with \((q_{1}-1)_{2}=2^{n}=(q_{2}+1)_{2}\) would be Morita equivalent, which would contradict Erdmann's classification of tame blocks in [1]. In the latter case, it follows from the decomposition matrices of \(B_{0}(k\operatorname{PSL}_{3}(q_{1}))\) and \(B_{0}(k\operatorname{PSU}_{3}(q_{2}))\) with \((q_{1}-1)_{2}=2^{n}=(q_{2}+1)_{2}\) given in [13, Proposition 6.12] and Lemma 7.2, respectively, that these blocks are not Morita equivalent. The claim follows.
## Appendix A Appendix. On [14, Proposition 3.3(b)]
The purpose of this appendix is to fix a problem in the proof of [14, Proposition 3.3(b)], which was incomplete as written in [14]. See Remark A.2.
**Theorem A.1** (See Proposition 3.3(b) in [14]).: _Suppose that \(G_{1}\) and \(G_{2}\) are finite groups with a common Sylow \(p\)-subgroup \(P\), and assume that \(Z\) is a subgroup of \(P\) such that \(Z\leq Z(G_{1})\cap Z(G_{2})\). Write \(\overline{G_{1}}:=G_{1}/Z\), \(\overline{G_{2}}:=G_{2}/Z\) and \(\overline{P}:=P/Z\)._
_The following assertions are equivalent:_
1. \(\mathrm{Sc}(G_{1}\times G_{2},\Delta P)\) _induces a Morita equivalence between_ \(B_{0}(kG_{1})\) _and_ \(B_{0}(kG_{2})\)_;_
2. \(\mathrm{Sc}(\overline{G_{1}}\times\overline{G_{2}},\Delta\overline{P})\) _induces a Morita equivalence between_ \(B_{0}(k\overline{G_{1}})\) _and_ \(B_{0}(k\overline{G_{2}})\)_._
Proof.: Let \(i\in\{1,2\}\). Write \(B_{i}:=B_{0}(kG_{i})\) and let \(\overline{B_{i}}\) be the image of \(B_{i}\) via the \(k\)-algebra epimorphism \(kG_{i}\twoheadrightarrow k\overline{G_{i}}\) induced by the canonical group epimorphism \(G_{i}\twoheadrightarrow\overline{G_{i}}\). Then [13, Chap. 5 Theorem 8.11] says that \(\overline{B_{i}}=B_{0}(k\overline{G_{i}})\). Furthermore \(\overline{B_{i}}\cong k\overline{G_{i}}\otimes_{kG_{i}}B_{i}\otimes_{kG_{i}}k \overline{G_{i}}\) as \((k\overline{G_{i}},k\overline{G_{i}})\)-bimodules. Write \(M:=\mathrm{Sc}(G_{1}\times G_{2},\Delta P)\) and \(N:=M^{*}=\mathrm{Sc}(G_{2}\times G_{1},\Delta P)\). Set \(\overline{M}:=k\overline{G_{1}}\otimes_{kG_{1}}M\otimes_{kG_{2}}k\overline{G_ {2}}\). Then,
\[\overline{M} \Big{|}\ k\overline{G_{1}}\otimes_{kG_{1}}\left(\mathrm{Ind}_{ \Delta P}^{G_{1}\times G_{2}}(k_{\Delta P})\right)\otimes_{kG_{2}}k\overline{ G_{2}}\] \[=k\overline{G_{1}}\otimes_{kG_{1}}(kG_{1}\otimes_{kP}\ kG_{2}) \otimes_{kG_{2}}k\overline{G_{2}}\] \[\cong k\overline{G_{1}}\otimes_{kP}\ k\overline{G_{2}}\] \[\cong k\overline{G_{1}}\otimes_{k\overline{P}}k\overline{G_{2}}\] \[\cong\mathrm{Ind}_{\Delta\overline{P}}^{\overline{G_{1}}\times \overline{G_{2}}}(k_{\Delta\overline{P}}).\]
Note furthermore that \(\overline{M}\) obviously has the trivial \(k(\overline{G_{1}}\times\overline{G_{2}})\)-module \(k_{\overline{G_{1}}\times\overline{G_{2}}}\) as an epimorphic image. Set \(\mathfrak{M}:=\mathrm{Sc}(\overline{G_{1}}\times\overline{G_{2}},\,\Delta \overline{P})\). Then
\[\mathfrak{M}\left|\overline{M}\right.\quad\text{ (equality does not necessarily hold).} \tag{6}\]
(i) \(\Rightarrow\) (ii): Set \(\overline{N}:=k\overline{G_{2}}\otimes_{kG_{2}}N\otimes_{kG_{1}}k\overline{G _{1}}\). Then,
\[\overline{M}\otimes_{\overline{B_{2}}}\overline{N}\cong\overline{M}\otimes_{k\overline{G_{2}}}\overline{N}\] \[\cong(k\overline{G_{1}}\otimes_{kG_{1}}M\otimes_{kG_{2}}k\overline{G_{2}})\otimes_{k\overline{G_{2}}}(k\overline{G_{2}}\otimes_{kG_{2}}N\otimes_{kG_{1}}k\overline{G_{1}})\] \[\cong k\overline{G_{1}}\otimes_{kG_{1}}(M\otimes_{kG_{2}}k\overline{G_{2}})\otimes_{kG_{2}}N\otimes_{kG_{1}}k\overline{G_{1}}\] \[\cong k\overline{G_{1}}\otimes_{kG_{1}}(k\overline{G_{1}}\otimes_{kG_{1}}M)\otimes_{kG_{2}}N\otimes_{kG_{1}}k\overline{G_{1}}\] \[\text{since }M\otimes_{kG_{2}}k\overline{G_{2}}\cong k\overline{G_{1}}\otimes_{kG_{1}}M\quad\text{as }(kG_{1},kG_{2})\text{-bimodules}\] \[\cong(k\overline{G_{1}}\otimes_{kG_{1}}k\overline{G_{1}})\otimes_{kG_{1}}M\otimes_{kG_{2}}N\otimes_{kG_{1}}k\overline{G_{1}}\] \[\cong(k\overline{G_{1}}\otimes_{k\overline{G_{1}}}k\overline{G_{1}})\otimes_{kG_{1}}M\otimes_{kG_{2}}N\otimes_{kG_{1}}k\overline{G_{1}}\] \[\cong k\overline{G_{1}}\otimes_{kG_{1}}(M\otimes_{kG_{2}}N)\otimes_{kG_{1}}k\overline{G_{1}}\] \[\cong k\overline{G_{1}}\otimes_{kG_{1}}B_{1}\otimes_{kG_{1}}k\overline{G_{1}}\quad\text{by (i)}\] \[\cong\overline{B_{1}}.\]
Since \(\overline{B_{i}}\) is a symmetric \(k\)-algebra for \(i=1,2\), the above already shows that the pair \((\overline{M},\overline{N})\) induces a Morita equivalence between \(\overline{B_{1}}\) and \(\overline{B_{2}}\), and hence \(\overline{M}\) is indecomposable as a right \(k(\overline{G_{1}}\times\overline{G_{2}})\)-module, which implies that \(\mathfrak{M}\cong\overline{M}\) from (6).
(ii) \(\Rightarrow\) (i): As in [11, p.822] there exist \(P\)-source idempotents \(j_{i}\) of \(B_{i}\) for \(i=1,2\) with \(M\,\big{|}\,(kG_{1}\,j_{1}\otimes_{kP}j_{2}\,kG_{2})\). Then the images \(\overline{j_{i}}\) of \(j_{i}\) via the canonical \(k\)-algebra epimorphisms \(kG_{i}\twoheadrightarrow k\overline{G_{i}}\) are \(\overline{P}\)-source idempotent of \(\overline{B_{i}}\) for \(i=1,2\) (see [10, SS3] and [12, Lemma 4.1]). Hence, \(\mathfrak{M}\,\big{|}\,(k\overline{G_{1}\,j_{1}}\otimes_{k\overline{P}}\overline {j_{2}}\,k\overline{G_{2}})\) from (6). By the assumption of (ii) and [11, Theorem 9.7.4], \(\mathfrak{M}\) induces an interior \(\overline{P}\)-algebra isomorphism \(\Phi:\overline{j_{1}}\,k\overline{G_{1}\,j_{1}}\xrightarrow{\approx}\overline {j_{2}}\,k\overline{G_{2}\,j_{2}}\). Since our blocks are all principal, it is routine work to know that for every self-centralising local pointed group \(Q_{\delta_{1}}\leq P_{\gamma_{1}}\) there exists a local point \(\delta_{2}\) of \(B_{2}{}^{Q}\) such that the pointed group \(Q_{\delta_{2}}\) is self-centralising, \(Q_{\delta_{2}}\leq P_{\gamma_{2}}\) and \(E_{G_{1}}(Q_{\delta_{1}})\cong E_{G_{2}}(Q_{\delta_{2}})\), where \(\gamma_{i}\) is a point of \(B_{i}{}^{P}\) with \(j_{i}\in\gamma_{i}\) and \(E_{G_{i}}(Q_{\delta_{i}}):=N_{G_{i}}(Q_{\delta_{i}})/Q\,C_{G_{i}}(Q)\) and \(N_{G_{i}}(Q_{\delta_{i}})\) is the normaliser of \(Q_{\delta_{i}}\) in \(N_{G_{i}}(Q)\) for each \(i\) (see [13, p.103]). Thus [21] implies that \(\Phi\) lifts to an interior \(P\)-algebra isomorphism \(\phi:j_{1}kG_{1}\,j_{1}\xrightarrow{\approx}j_{2}kG_{2}\,j_{2}\) (see [13]). It follows from [10, Lemma 3.4(i)] that \(k_{\overline{G_{1}}}\otimes_{\overline{B_{1}}}\mathfrak{M}\cong k_{\overline {G_{2}}}\). Then the canonical Morita equivalences between \(\overline{B_{i}}\) and \(\overline{j_{i}}\,k\overline{G_{i}\,j_{i}}\) for \(i=1,2\) induce that \(k_{\overline{G_{1}}\overline{j_{1}}}\cong{}_{\Phi}(k_{\overline{G_{2}}} \overline{j_{2}})\) as right \(\overline{j_{1}}k\overline{G_{1}}\,\overline{j_{1}}\)-modules where \({}_{\Phi}(k_{\overline{G_{2}}}\overline{j_{2}})\) is \(k_{\overline{G_{2}}}\overline{j_{2}}\) as a \(k\)-space and considered as a right \(\overline{j_{1}}k\overline{G_{1}}\,\overline{j_{1}}\)-module via \(\Phi\) (see [11, Theorem 9.7.4]). Since \(\Phi\) lifts to \(\phi\), we do have that \(k_{G_{1}}\,j_{1}\cong\ _{\phi}(k_{G_{2}}\,j_{2})\) as right \(j_{1}kG_{1}\,j_{1}\)-modules. Further the existence of \(\phi\) yields that \(B_{1}\) and \(B_{2}\) are splendidly Morita equivalent, namely there exists an indecomposable \((B_{1},B_{2})\)-bimodule \(\mathcal{M}\) such that \(\mathcal{M}\) is a trivial source right \(k(G_{1}\times G_{2})\)-module with vertex \(\Delta P\) inducing a Morita equivalence between \(B_{1}\) and \(B_{2}\) and inducing \(\phi\) (see Ibid.). Set \(T:=k_{G_{1}}\otimes_{B_{1}}\mathcal{M}\), so that \(T\) is a simple right \(kG_{2}\)-module in \(B_{2}\). Then the canonical Morita equivalences between \(B_{i}\) and \(j_{i}kG_{i}\,j_{i}\) for \(i=1,2\) imply that \(k_{G_{1}}\,j_{1}\cong\ _{\phi}(Tj_{2})\) as right \(j_{1}kG_{1}\,j_{1}\)-modules. Thus, \(k_{G_{2}}\,j_{2}\cong Tj_{2}\) as right \(j_{2}kG_{2}\,j_{2}\)-modules. Hence by the bijection between the simple modules in \(B_{2}\) and \(j_{2}kG_{2}\,j_{2}\) given by the canonical Morita equivalence, we finally get that \(T\cong k_{G_{2}}\) as right \(kG_{2}\)-modules, namely, \(k_{G_{1}}\otimes_{B_{1}}\mathcal{M}\cong k_{G_{2}}\). 
Hence the adjunction in [14, line 9 on p.105] implies that \(\operatorname{Hom}_{k(G_{1}\times G_{2})}(\mathcal{M},k_{G_{1}\times G_{2}}) \neq\{0\}\). Therefore \(\mathcal{M}\cong M\) by [13, (27.5) Exercise].
**Remark A.2**.: On the right-hand side of line 2 of [10, Lemma 3.1(b)], \(\operatorname{Sc}(\overline{G}\times\overline{H},\Delta\overline{P})\) must be replaced by \(\operatorname{Sc}(\overline{G}\times\overline{H},\Delta\overline{P})\oplus \mathcal{N}\) for a _possibly non-zero_\(k(\overline{G}\times\overline{H})\)-module \(\mathcal{N}\) as in (6). As a result, the proof of [10, Proposition 3.3(b)] as given in [10] holds only in the case in which \(\mathcal{N}=\{0\}\). However, Theorem A.1 now proves that [10, Proposition 3.3(b)] is correct, also in the case in which \(\mathcal{N}\neq\{0\}\). As a consequence, [10] and [10, proof of Proposition 5.2], where [10, Proposition 3.3(b)] are used, are not affected and remain true with the given proofs. Moreover, we would like to mention that the results of [10], together with further explicit calculations, give an alternative way to establish the validity of [10, Proposition 3.3(b)] in special cases, e.g. when the defect groups are generalised quaternion or semi-dihedral \(2\)-groups.
### Acknowledgements
The authors would like to thank Gerhard Hiss and Gunter Malle for useful information on the modular representation theory of the finite groups of Lie type involved in this article. They also would like to thank Naoko Kunugi for pointing out a gap in the proof of [10, Proposition 3.3(b)] (see Appendix A), as well as Markus Linckelmann and Yuanyang Zhou for answering several of their questions on Puig's theory. |
2308.09433 | Can ultrasound confidence maps predict sonographers' labeling
variability? | Measuring cross-sectional areas in ultrasound images is a standard tool to
evaluate disease progress or treatment response. Often addressed today with
supervised deep-learning segmentation approaches, existing solutions highly
depend upon the quality of experts' annotations. However, the annotation
quality in ultrasound is anisotropic and position-variant due to the inherent
physical imaging principles, including attenuation, shadows, and missing
boundaries, commonly exacerbated with depth. This work proposes a novel
approach that guides ultrasound segmentation networks to account for
sonographers' uncertainties and generate predictions with variability similar
to the experts. We claim that realistic variability can reduce overconfident
predictions and improve physicians' acceptance of deep-learning cross-sectional
segmentation solutions. Our method provides CM's certainty for each pixel for
minimal computational overhead as it can be precalculated directly from the
image. We show that there is a correlation between low values in the confidence
maps and expert's label uncertainty. Therefore, we propose to give the
confidence maps as additional information to the networks. We study the effect
of the proposed use of ultrasound CMs in combination with four state-of-the-art
neural networks and in two configurations: as a second input channel and as
part of the loss. We evaluate our method on 3D ultrasound datasets of the
thyroid and lower limb muscles. Our results show ultrasound CMs increase the
Dice score, improve the Hausdorff and Average Surface Distances, and decrease
the number of isolated pixel predictions. Furthermore, our findings suggest
that ultrasound CMs improve the penalization of uncertain areas in the ground
truth data, thereby improving problematic interpolations. Our code and example
data will be made public at
https://github.com/IFL-CAMP/Confidence-segmentation. | Vanessa Gonzalez Duque, Leonhard Zirus, Yordanka Velikova, Nassir Navab, Diana Mateus | 2023-08-18T10:07:17Z | http://arxiv.org/abs/2308.09433v1 | # Can ultrasound confidence maps predict sonographers' labeling variability?
###### Abstract
Measuring cross-sectional areas in ultrasound images is a standard tool to evaluate disease progress or treatment response. Often addressed today with supervised deep-learning segmentation approaches, existing solutions highly depend upon the quality of experts' annotations. However, the annotation quality in ultrasound is anisotropic and position-variant due to the inherent physical imaging principles, including attenuation, shadows, and missing boundaries, commonly exacerbated with depth. This work proposes a novel approach that guides ultrasound segmentation networks to account for sonographers' uncertainties and generate predictions with variability similar to the experts. We claim that realistic variability can reduce overconfident predictions and improve physicians' acceptance of deep-learning cross-sectional segmentation solutions. Toward that end, we rely on a simple and efficient method to estimate Confidence Maps (CM)s from ultrasound images. The method provides certainty for each pixel for minimal computational overhead as it can be precalculated directly from the image. We show that there is a correlation between low values in the confidence maps and expert's label uncertainty. Therefore, we propose to give the confidence maps as additional information to the networks. We study the effect of the proposed use of ultrasound CMs in combination with four state-of-the-art neural networks and in two configurations: as a second input channel and as part of the loss. We evaluate our method on 3D ultrasound datasets of the thyroid and lower limb muscles. Our results show ultrasound CMs increase the Dice score, improve the Hausdorff and Average Surface Distances, and decrease the number of isolated pixel predictions. Furthermore, our findings suggest that ultrasound CMs improve the penalization of uncertain areas in the ground truth data, thereby improving problematic interpolations. Our code and example data will be made public at [https://github.com/IFL-CAMP/Confidence-segmentation](https://github.com/IFL-CAMP/Confidence-segmentation).
applied on MRI or CT. We align ourselves with these ideas, applying our method to ultrasound, a modality characterized by blurred edges, low signal-to-noise ratio, speckle noise, and other challenges. Our second proposition to cope with these challenges is to define a cross-entropy loss based on a probabilistic ground truth called "confidence mask", computed from the CMs and the ground truth labels. This new label is probabilistic not only at the borders, as in other methods, but over the whole structure. Thereby, we propose to predict the "Confidence Masks" so that the network produces both a good segmentation and output probabilities calibrated to the confidence content. In our experiments, we observe that confidence masks teach the network to penalize errors in high-confidence areas while interpolating in low-confidence areas.
## 2 Methodology
#### 2.0.1 Guiding segmentation networks with Confidence Maps.
The core of our method is to guide the training with pre-calculated CMs. Let \(\mathbf{X}\mathbf{\in}\mathbb{R}^{W\times H\times D}\) be the input volume and \(\mathbf{Y},\mathbf{\hat{Y}}\mathbf{\in}\mathbb{R}^{W\times H\times D\times C}\) the one-hot encoded labels and the network prediction for \(C\) classes, respectively. We first compute the CM from the image: \(\mathbf{CM}:\mathbf{X}\mapsto(0,1)^{W\times H\times D}\) (cf. the next subsection). Our first proposition is to use the CMs as an additional channel so that the input to the network becomes \([\mathbf{X}|\mathbf{CM}]\), where \([\cdot|\cdot]\) denotes channel-wise concatenation. In our second proposition, we combine the CMs with the labels to create a "Confidence Mask" (\(Y\cdot CM\)), where "\(\cdot\)" represents the element-wise multiplication, and define the cross-entropy confidence loss over the \(m\) voxels of the image as:
\[\text{CE}_{conf}(\mathbf{Y},\mathbf{\hat{Y}})=-\frac{1}{m}\sum_{i=1}^{m}(Y_{i} \cdot CM_{i})\cdot\log\left(\hat{Y}_{i}\right) \tag{1}\]
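As an illustration (the tensor layout and names below are assumptions made for this sketch and do not reflect the exact code in our repository), Eq. (1) and the two-channel input can be written in PyTorch as:

```python
import torch

def confidence_cross_entropy(y_hat, y_onehot, cm, eps=1e-7):
    """Sketch of Eq. (1): cross entropy weighted by the confidence map.

    Assumed shapes (hypothetical): y_hat and y_onehot are (B, C, W, H, D),
    cm is (B, 1, W, H, D) with values in (0, 1); y_hat holds class
    probabilities (e.g. softmax outputs).
    """
    confidence_mask = y_onehot * cm                                   # Y . CM from Eq. (1)
    voxel_loss = -(confidence_mask * torch.log(y_hat.clamp_min(eps))).sum(dim=1)
    return voxel_loss.mean()                                          # average over the m voxels

# CMs as a second input channel ("*-2ch-*" models): concatenate image and map.
# x: (B, 1, W, H, D), cm: (B, 1, W, H, D)  ->  x_2ch: (B, 2, W, H, D)
# x_2ch = torch.cat([x, cm], dim=1)
```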
**Pre-calculated Confidence Maps**: In ultrasound imaging, pressure waves are transmitted and reflected primarily towards the transducer, but traversed tissues absorb, diverge, refract, and disperse the sound as well. Therefore, as the wave progresses, the recorded intensities become less reliable. The goal of the confidence map algorithm is to assign uncertainty values to each pixel in any ultrasound image, without prior knowledge about its content. Karamalis et al. [12] proposed a simplified but efficient wave propagation model to address this problem based on a random walk on a graph. The nodes of the graph represent the
Figure 1: From confidence maps to confidence masks: a) US image and overlaid segmentations, b) image graph representation, c) Confidence map [12], d) Confidence mask
image pixels while an 8 neighbourhood rule is used to define the edges. Edge weights model ultrasound physical properties: an exponential Beer-Lambert attenuation governed by parameter \(\alpha\) in the vertical direction; a penalization for horizontal and diagonal propagations associated with the beam shape; and a penalization between neighbour pixels with different intensities, controlled by parameter \(\beta\), modeling the negative correlations between reflection and transmission across tissue boundaries. At each step of the random walk, the probability of moving from one pixel to another is based on the defined edge weights. By definition, source and sink nodes are placed at the top and bottom of the image, respectively. The problem is then formulated as computing the probability of the random walk starting from a transducer/source pixel to reach a sink node (c.f. Fig. 1-c). The sought probabilities are obtained by solving a linear system of equations, we refer the reader to [12] for more details. In practice, and following the above model, the random walk goes from the top to the bottom approximately perpendicular to the beam/scanline direction. Deviations in the horizontal/diagonal directions are possible to a small degree, according to the image content, and controlled by the \(\alpha\) and \(\beta\) hyper-parameters.
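The following sketch is only illustrative and is not the implementation used in our experiments (which relies on the ImFusion implementation of [12], cf. the Datasets paragraph below): it approximates the construction with scikit-image's generic random walker, folding the attenuation into the intensities and omitting the beam-shape penalty.

```python
import numpy as np
from skimage.segmentation import random_walker

def approx_confidence_map(img, alpha=0.5, beta=100.0):
    """Illustrative 2D confidence-map approximation (not the exact model of [12]).

    Each pixel gets the probability that a random walk started there reaches the
    transducer row (top of the image) before the bottom row.
    """
    img = img.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    depth = np.linspace(0.0, 1.0, img.shape[0])[:, None]
    data = img * np.exp(-alpha * depth)      # Beer-Lambert-style attenuation with depth

    seeds = np.zeros(img.shape, dtype=int)
    seeds[0, :] = 1                          # virtual transducer elements (sources)
    seeds[-1, :] = 2                         # sink nodes at the bottom of the image
    probs = random_walker(data, seeds, beta=beta, mode='bf', return_full_prob=True)
    return probs[0]                          # probability of reaching the source label
```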
**Datasets:** Two different 3D ultrasound datasets were used for the experiments and are presented in Fig. 2. They consist of 2D B-mode ultrasound sweeps that can generate compounded 3D volumes. The first dataset is open-source and available from [13]. It contains scans of the _thyroid_ of 16 volunteers, with 3 labels: thyroid, carotid artery, and jugular vein. Each volume contains around 200 images of size \(W\times H=400\times 270\), for a total of more than 1600 images. The second in-house dataset [5] contains 4 to 6 scans per volume of the left _lower limb_ of 16 participants with 3 labels: Soleus, Gastrocnemius lateralis, and Gastrocnemius Medialis muscles. Each volume of size \(W\times H=500\times 420\) contains around 1500 images, for a total of more than 24000 images.
For both datasets, confidence maps were calculated over 2D ultrasound images using the implementation of the random walker algorithm [12] available in the ImFusion4[24] software, version 2.36.3, with \(\alpha\) and \(\beta\) parameters set to \(0.5,100\) respectively. The data was split patient-wise into 11 train volumes, 2 for validation and 3 for testing, in each case.
Footnote 4: ImFusion GmbH, Munich, Germany
**Evaluation Metrics:** For multi-label segmentation, we compute 8 different metrics: Dice Similarity Coefficient (DSC), mean Intersection over Union (mIoU), precision, recall, Average Surface Distance (ASD), Hausdorff distance(HD), miss rate, and fall out. Following [20], we evaluate the metrics for each class and average them over the organs and participants.
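Several of these metrics are available off the shelf; as an illustration (a MONAI-based sketch with assumed one-hot tensors, not necessarily the exact calls used in our evaluation scripts):

```python
from monai.metrics import DiceMetric, HausdorffDistanceMetric

dice_metric = DiceMetric(include_background=False, reduction="mean_batch")
hd_metric = HausdorffDistanceMetric(include_background=False, reduction="mean_batch")

def evaluate(y_pred, y):
    """y_pred, y: binarized one-hot tensors of shape (B, C, W, H, D)."""
    dice_metric(y_pred=y_pred, y=y)
    hd_metric(y_pred=y_pred, y=y)
    per_class_dice = dice_metric.aggregate()   # per class, averaged over the batch
    per_class_hd = hd_metric.aggregate()
    dice_metric.reset()
    hd_metric.reset()
    return per_class_dice, per_class_hd
```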
## 3 Experiments and Results
**Contribution of the Confidence Maps:** We denote the models relying on the CMs as a second channel with names including the term (*-2ch-*) and those
with CMs in the loss with the pattern (*-*-*conf). Based on a U-Net architecture we evaluate a total of 10 configurations:
* **Baselines:** unet-1ch-dice, unet-1ch-crossentropy(CE), unet-1ch- Dice cross entropy (diceCE)
* **CMs as 2nd channel:** unet-2ch-dice, unet-2ch-CE, unet-2ch-diceCE,
* **CMs within the loss :** unet-1ch-CEconfidence, unet-1ch-diceCEconfidence
* **CMS both as 2nd channel and within the loss:** unet-2ch-diceCE confidence and unet-2ch-CEconfidence
The results are reported in Fig. 3, where we keep the best configuration for each group. We performed a 3-fold cross-validation to verify that the results are independent of the participant split. We found that CMs decrease the standard deviation of the DSC in general. While the HD of CM configurations is similar or increased for the thyroid dataset, the positive effect of CMs in the muscles dataset is clear. We attribute this behavior to the more complex muscle shapes. More boxplot metrics can be found in our GitHub repository. Based on the balance between DSC and HD scores, the best two configurations are: unet-2ch-dice and unet-1ch-CEconfidence. Figure 4 showcases segmentation improvements using CMs. For the thyroid dataset, CMs reduced isolated regions, enhancing accuracy. For the leg dataset, CMs improved interpolation and smoothness of segmented structures.
**Expert uncertainty and prediction variability:** To evaluate the areas where the network is less certain, we asked the same expert to label the same image 100 times on different occasions. We compare this variability with the entropy of 100 Monte Carlo Dropout predictions for the unet-1ch-dice and
Figure 2: Examples of the thyroid (top row) and the low-limb muscles (bottom row), respectively. (a) corresponds to the 3D view of the labels at the top, the red, blue, and yellow correspond to the Thyroid, the carotid artery, and the jugular vein, while at the bottom, they correspond to the Soleus, the Gastrocnemius lateralis, and the Gastrocnemius Medialis. (b) the CM cross-sectional view, (c) the confidence Mask used for the loss, (d) the CM overlapped over the image with red signalizing the areas with low confidence.
unet-2ch-dice. We observe in Fig. 5 how CMs bring the prediction variability closer to the expert's uncertainty, with an anisotropic behaviour that reflects difficult areas (the intersection of the three muscles) and increases with depth.
**CMs with state-of-the-art architectures:** For both datasets, we tested four different 3D networks available in MONAI [4]. We trained the models for 120
Figure 4: Prediction for one participant of the thyroid dataset in the top and the leg in the bottom. At left the baseline method:**unet-1ch-ce**, at the right our proposal:**unet-2ch-dice**
Figure 3: First and second columns report the results for the thyroid and the muscles metrics, respectively. The best four performing methods are ranked \(1^{\circ}\), \(2^{\circ}\), \(3^{\circ}\), \(4^{\circ}\).
epochs, with a learning rate of 0.001 and the ADAM optimizer, on a Nvidia 390 GPU. Qualitative images are presented in Fig. 6. The evaluation metrics for the Muscles dataset, computed in a 3-fold cross-validation manner, are presented in Table 1.
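As an illustration of such a configuration (our reconstruction of a plausible setup, not the exact training code), a MONAI 3D U-Net taking the B-mode volume and its confidence map as two input channels could be instantiated as follows:

```python
import torch
from monai.networks.nets import UNet

# Two input channels: B-mode image + confidence map ("*-2ch-*" configurations);
# four output channels: background plus the three labelled structures.
model = UNet(
    spatial_dims=3,
    in_channels=2,
    out_channels=4,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # 120 epochs in our experiments
```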
Hausdorff distance, are marked with \(*\). The results in the table show that CMs improve segmentation metrics by a small factor. However, looking at the qualitative results, we see that CM models favor better interpolation of the bottom areas where uncertainty is higher, improve the segmentation of small structures, and decrease the number of islands, as can be seen in Fig. 6.
## 4 Discussion and Conclusions
In conclusion, this study presents an original approach to make ultrasound deep-learning segmentation methods aware of label variability. Our method, based on ultrasound Confidence Maps, takes into account the basic ultrasound wave-propagation principles, which affect sonographers' uncertainty when annotating. Introduced as an additional input channel or within the loss, CMs guide the network to predict segmentations that effectively reproduce expert-like border variability and whose drop-out uncertainty grows with depth, as expected for ultrasound images. Thereby, our method can be used to generate multiple solutions for the physicians to judge, with fixed borders for certain regions and variable mask predictions for uncertain regions. Two advantages of the approach with the CM loss are that it does not increase the number of parameters and that it is architecture-agnostic. In this sense, the approach could be applied as a fine-tuning strategy after transfer learning.
Our experimental results show that training with CMs does not affect convergence for either of the proposed approaches. Moreover, the CM pre-computation is very fast, as it only requires solving a linear system with a sparse matrix. In sum, this novel, simple and effective approach to introduce ultrasound and expert knowledge
can be easily implemented in combination with various ultrasound segmentation architectures without incurring additional computational costs. We evaluated our method on two datasets, one private and another public, to ensure
Figure 6: **(top row)** 2D cross-sectional view of the predictions overlaid over the expert's labels. **(bottom row)** 3D predictions in color and labels in gray for the 2 best configurations and their corresponding baselines: 1. Deep-atlas 1ch dice, 2. Deep-atlas 2ch dice, 3. Attention Unet 1ch CE, 4. Attention Unet-2ch-Dice.
repeatability. Future work aims at distilling the confidence map automatically. Although we used the dice loss and cross-entropy loss, other losses or combinations could also be considered.
|
2301.05702 | confidence-planner: Easy-to-Use Prediction Confidence Estimation and
Sample Size Planning | Machine learning applications, especially in the fields of medicine and
social sciences, are slowly being subjected to increasing scrutiny. Similarly
to sample size planning performed in clinical and social studies, lawmakers and
funding agencies may expect statistical uncertainty estimations in machine
learning applications that impact society. In this paper, we present an
easy-to-use python package and web application for estimating prediction
confidence intervals. The package offers eight different procedures to
determine and justify the sample size and confidence of predictions from
holdout, bootstrap, cross-validation, and progressive validation experiments.
Since the package builds directly on established data analysis libraries, it
seamlessly integrates into preprocessing and exploratory data analysis steps.
Code related to this paper is available at:
https://github.com/dabrze/confidence-planner. | Antoni Klorek, Karol Roszak, Izabela Szczech, Dariusz Brzezinski | 2023-01-12T14:49:59Z | http://arxiv.org/abs/2301.05702v1 | # confidence-planner: Easy-to-Use Prediction Confidence Estimation and Sample Size Planning
###### Abstract
Machine learning applications, especially in the fields of medicine and social sciences, are slowly being subjected to increasing scrutiny. Similarly to sample size planning performed in clinical and social studies, law-makers and funding agencies may expect statistical uncertainty estimations in machine learning applications that impact society. In this paper, we present an easy-to-use python package and web application for estimating prediction confidence intervals. The package offers eight different procedures to determine and justify the sample size and confidence of predictions from holdout, bootstrap, cross-validation, and progressive validation experiments. Since the package builds directly on established data analysis libraries it seamlessly integrates into preprocessing and exploratory data analysis steps. Code related to this paper is available at: [https://github.com/dabrze/confidence-planner](https://github.com/dabrze/confidence-planner)
## 1 Introduction
Medical, social, and behavioral sciences are known to be plagued by undersampling [5]. In the traditional statistical framework, even when the effect exists, undersampled studies yield either non-significant results or significant results because of overestimating the size of the effect. Similar problems can occur in machine learning studies on life-science data, where classification accuracy is often measured on relatively small samples without providing any uncertainty estimation. Importantly, statistical testing and uncertainty estimation of machine learning systems will become more and more common, as the global community realizes that AI systems need to be controlled to maintain fairness and comply with government regulations. Examples of such regulations have recently appeared in the proposed EU AI act [4], UNESCO recommendation on ethics in AI [8], or the FDA's Software as Medical Device guidance [1].
To mitigate issues with undersampled studies, in social sciences authors are increasingly expected to plan and justify the sample size of their study. Such sample-size justification procedures can be found in tutorial papers and online tools [5]. Equivalent confidence interval and sample size estimation procedures
for classification accuracy are scattered throughout scientific literature [6, 2, 3, 7] and, to the best of our knowledge, are not available as a python package. The confidence-planner package presented herein aims to fill this gap.
## 2 The confidence-planner Package
The confidence-planner package provides implementations of estimation procedures for confidence intervals around classification accuracy. A _confidence interval_ (CI) is a range of estimates for an unknown parameter. In our case it is the range of values that we expect the accuracy _acc_ of our model to fall between if we re-run our experiment. A confidence interval is computed for a test sample size \(n\) and at a designated confidence level \(\gamma\), which is the percentage of times one expects to reproduce an estimate between the upper and lower bounds of the confidence interval. An interval with a confidence level of 90% is called a 90% confidence interval (90% CI).
For a given number of test samples \(n\), test accuracy _acc_, and expected confidence level \(\gamma\), confidence-planner offers a set of CI estimation procedures. The package currently features approximations for holdout (Langford [6], Clopper-Pearson [3], Wilson [9], Z-test, t-test), bootstrap [7], cross-validation [2], and progressive validation [2] experiments. Moreover, for selected methods, for a given confidence level \(\gamma\), confidence-planner can help estimate the number of samples \(n\) needed to obtain a CI of a user-specified radius. For example, using the Z-test approximation, confidence-planner can help estimate that in order to achieve a 90% CI of \(\pm 0.05\) one needs a holdout test of at least 271 examples.
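For orientation, the familiar normal-approximation formulas behind the numbers quoted above can be written out explicitly; the exact expressions implemented inside confidence-planner may differ in detail, so the following should be read as a sketch of the Z-test case rather than as the package's definitive internals:
\[acc\pm z_{\frac{1+\gamma}{2}}\sqrt{\frac{acc\,(1-acc)}{n}},\qquad n\;\geq\;\frac{z_{\frac{1+\gamma}{2}}^{2}\,acc\,(1-acc)}{r^{2}},\]
where \(r\) is the desired interval radius. Taking the worst case \(acc=0.5\), \(\gamma=0.90\) (so \(z\approx 1.645\)) and \(r=0.05\) gives \(n\geq 1.645^{2}\cdot 0.25/0.05^{2}\approx 270.6\), i.e. the 271 holdout examples quoted above.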
The confidence-planner package is open source and available under a permissive MIT license. The source code, documentation and an introductory video are available at [https://github.com/dabrze/confidence-planner](https://github.com/dabrze/confidence-planner). The package can also be installed via PyPI using pip install confidence-planner. The package's basic functionality, along with guidance on selecting the appropriate estimation method, is also available in the form of a confidence-planner web application that can be deployed using the code in the repository.
## 3 Application Example
Next, we present a basic application example featuring the well-known _Breast Cancer Wisconsin_ dataset to demonstrate how confidence interval estimation can be included in a data classification script. In this particular example, we will estimate the 90% confidence interval (CI) of a classifier tested on a holdout test set using a Z-test approximation [6]. Then, we will estimate the holdout size that would be required to limit the 90% CI to a radius of 0.05 around the classification accuracy (estimated accuracy \(\pm 0.05\)). The complete code required to execute these tasks is the following:
    from sklearn import datasets, svm, metrics
    from sklearn.model_selection import train_test_split
    import confidence_planner as cp

    # example dataset
    X, y = datasets.load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=23
    )

    # training the classifier and calculating accuracy
    clf = svm.SVC(gamma=0.001)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = metrics.accuracy_score(y_test, y_pred)

    # confidence interval and sample size estimation
    ci = cp.estimate_confidence_interval(
        len(y_test), acc, 0.90, method="holdout_z_test"
    )
    sample = cp.estimate_sample_size(0.05, 0.90, method="holdout_z_test")
    print(f"Holdout accuracy: {acc}")
    print(f"90% CI: {ci}")
    print(f"Test samples needed for a 0.05 radius 90% CI: {sample}")
The first three lines import the sklearn library used for classification and the confidence-planner package. The following fragments of code load the dataset, split the data into a training and a holdout test set, train an SVM classifier, and record its classification accuracy on the holdout. The final fragment of code performs the 90% CI estimation and calculates the number of samples that would be needed to make the CI radius equal to 0.05. Similar estimations can be performed for cross-validation, bootstrapping, and progressive validation, by specifying a different estimation function.
The same analyses can be performed online, without coding, by using the confidence-planner web app. Figure 1 shows the CI estimation page for the Wilson method for holdout data and graded error bars that can be created using the python package.
## 4 Conclusions
This demo paper introduced the confidence-planner package that enables calculating confidence bounds around classification accuracy. It provides an easy-to-use, extensible, and freely available implementation of estimation procedures for holdout, bootstrap, cross-validation and progressive validation schemes. In the future we plan to extend the list of estimation methods and provide more visualizations of uncertainty, for example for validation and learning curves. |
2306.00805 | Gravitational collapse of matter in the presence of Quintessence and
Phantom-like scalar fields | In this work, we propose a model of the gravitational collapse of dark matter
in the presence of quintessence or phantom-like scalar fields. Our treatment is
based on the principles of general relativity up to virialization. We have
chosen a spherical patch that starts to collapse gravitationally as it happens
in top-hat collapse. It is seen that although the dark matter sector collapses
the dark energy sector does keep a profile that is almost similar to the dark
energy profile for the background expanding Friedmann-Lemaitre-Robertson-Walker
(FLRW) universe for suitable model parameters. It is observed that in order to
formulate the problem in the general relativistic setting one has to abandon
the idea of a closed FLRW isolated collapsing patch. General relativity
requires an external generalized Vaidya spacetime to be matched with the
internal spherical patch whose dynamics is guided by the FLRW metric. It is
shown that almost all collapses are accompanied by some flux of matter and
radiation in the generalized Vaidya spacetime. Some of the spherical regions of
the universe are seen not to collapse but expand eternally, producing void-like
structures. Whether a spherical region will collapse or expand depends upon the
initial values of the system and other model parameters. As this work shows
that collapsing structures must emit some form of radiation, this may be taken
as an observational signature of our proposal. | Priyanka Saha, Dipanjan Dey, Kaushik Bhattacharya | 2023-06-01T15:33:57Z | http://arxiv.org/abs/2306.00805v1 | # Gravitational collapse of matter in the presence of Quintessence and Phantom-like scalar fields
###### Abstract
In this work, we propose a model of the gravitational collapse of dark matter in the presence of quintessence or phantom-like scalar fields. Our treatment is based on the principles of general relativity up to virialization. We have chosen a spherical patch that starts to collapse gravitationally as it happens in top-hat collapse. It is seen that although the dark matter sector collapses the dark energy sector does keep a profile that is almost similar to the dark energy profile for the background expanding Friedmann-Lemaitre-Robertson-Walker (FLRW) universe for suitable model parameters. It is observed that in order to formulate the problem in the general relativistic setting one has to abandon the idea of a closed FLRW isolated collapsing patch. General relativity requires an external generalized Vaidya spacetime to be matched with the internal spherical patch whose dynamics is guided by the FLRW metric. It is shown that almost all collapses are accompanied by some flux of matter and radiation in the generalized Vaidya spacetime. Some of the spherical regions of the universe are seen not to collapse but expand eternally, producing void-like structures. Whether a spherical region will collapse or expand depends upon the initial values of the system and other model parameters. As this work shows that collapsing structures must emit some form of radiation, this may be taken as an observational signature of our proposal.
## I Introduction
The formation of structure in a homogeneous and isotropic universe is always an interesting and evergreen topic in astrophysics and cosmology. In the standard picture, the seed for structure formation in cosmology comes from linear perturbation theory. During or after recombination the cosmological perturbations, for some modes, start to grow and does not remain strictly linear. These modes act as seeds for future structure formation in the universe. Some of these perturbation modes move out of the linear paradigm and enter the nonlinear mode where different physical principles are operational. Just before entering the nonlinear regime Jeans instability [1; 2] and other effects guide the formation of structures. Gravitationally bound structures, from the cluster of galaxies scales to much lower scales, are supposed to have been born due to nonlinear instabilities. In the standard picture of structure formation, it is assumed that primarily the dark matter sector plays the most important role. The dark matter sector is supposed to be composed of a fluid with zero pressure which follows the gravitational potential produced by a marginally denser region and tries to collapse about those regions. The baryonic matter follows the dark matter flow [3; 4; 5; 6]. One of the most important semi-relativistic methods used to study structure formation is called the top-hat collapse [7]. In this collapse process, it is assumed that if in some closed region of the cosmos, the density of dark matter has exceeded the background matter density then a collapse follows. In top-hat collapse, the closed overdense region at first expands following the background expansion, but this expansion halts at a certain moment due to gravity and there is a turnaround. Following the turnaround, the closed region starts to collapse. A pure general relativistic top-hat collapse generally produces a singularity as the end state, since the collapsing matter is homogeneous and dust-like [8]. However, in astrophysics, it is assumed that much before the formation of a singularity the collapsing fluid virializes. The virialized end state of the collapse signifies structure formation. In this sense, the top-hat collapse is a semi-relativistic process where people use a semi-Newtonian paradigm to interpret the end phase of the collapse.
Traditionally one does not take into account the role of the cosmological constant, \(\Lambda\), in the structure formation process. Some authors have tried to incorporate the effects of such a constant in the gravitational collapse process [9; 10; 11; 12]. Traditional \(\Lambda\)CDM models have their own difficulties [13; 14], and consequently, dynamical dark energy models based on scalar fields have been introduced. One of the most widely used scalar fields in this paradigm is the quintessence field. Phantom-like scalar fields, with a negative kinetic term, are also used to model dark energy [15; 16; 17; 18; 19]. In this paper, we will mainly be working with these two types of scalar fields. Our
main goal is to study the gravitational collapse process in a two-component universe, with dark matter and a scalar field acting as source of dynamic dark energy. Many authors have attempted such a problem in various forms [20; 21; 22; 23; 24]. In almost all of the attempts the authors never used a formal general relativistic approach although they used one or two equations that can only be found in a general relativistic setting. The main reason for such a purely phenomenological approach by the previous authors is primarily based on the following reason. If one wants to apply general relativistic treatment for the gravitational collapse of a closed spherical region then one has to start with the closed Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime with some matter inside the closed region. This closed region does not exchange energy with the outside as this spacetime is assumed to be 'closed' and acts as an isolated system. If this closed region undergoes a gravitational collapse, as in top-hat collapse, then the energy density of the matter inside grows as the region shrinks as that is the only way the energy of matter can be conserved. On the other hand in a two-component system, where one of the components can be a dark energy candidate, this logic may not apply as the dark energy sector may remain homogeneous and unclustered. By unclustered dark energy, we mean that the energy density of dark energy practically remains the same as that of the expanding background FLRW spacetime. In simple terms, the dark energy sector may not collapse at all following the dark matter partner inside the closed region. In such a case energy conservation becomes problematic and the problem becomes paradoxical. To evade this problem, previous authors have used a pure phenomenological method. In this method, one does not perceive the problem relativistically, where one uses an FLRW metric with a positive spatial curvature constant and then writes down the Friedmann equations. The first Friedmann equation (containing the square of the first derivative of the local scale factor) particularly becomes problematic as it requires an estimate of all the known energy sources inside the spherical patch. As energy may not be conserved, this equation becomes redundant. Mostly all of the previous works in this field only use the other Friedmann equation containing the second derivative of the scale factor and consider it as a second-order ordinary differential equation in time and solve it with appropriate initial conditions.
We have addressed the above-mentioned problem in a more relativistic way. As it is known that for unclustered dark energy, the scalar field sector does not collapse, we expect that this sector primarily leaks out of the boundary of the closed, positively curved spacial region. To incorporate such an idea we match the internal FLRW patch with a generalized Vaidya spacetime before the internal spacetime closes (the internal radial distance marker is less than one). As a result of this we predict the emission of radiation from the boundary of the collapsing region, this radiation is naturally obtained in generalized Vaidya spacetimes. The two spacetimes are matched at a time-like hypersurface using the standard junction conditions of general relativity. The matching of the spacetimes solves the issue of non-conservation of energy in the closed patch as in the modified scenario the spherical patch is radiating energy outside and ideally does not remain an isolated patch anymore. In our model the collapsing dark matter cloud affects the dark energy density locally as the spherical region under collapse forces the dark energy sector to radiate. This model is natural in the sense that the effect of a gravitational collapse does not go unnoticed in the dark energy sector, it reacts to the collapse by transforming locally into radiation although its energy density follows the energy density of the background spacetime. We think this is the first serious attempt to produce a formal general relativistic paradigm of the spherical collapse of dark matter in the presence of dark energy.
Although we have tried to formally establish a general relativistic attempt to tackle the problem of spherical collapse of matter in presence of dark energy we do not fully extend the relativistic formalism up to the formation of the singularity which is inevitable in such situations. The primary reason for using a more phenomenological process, to end the collapse, is related to the fact that large scale structures exist and perhaps they are produced from some virialization process. Our radiating collapsing structure inevitably must virialize at some point of time and after that time the system does not remain relativistic. Virialization by itself is not built in the collapsing process, one has to bring in this pseudo-Newtonian concept to explain the existence of large-scale gravitationally bound objects. In the cases of collapse in the presence of unclustered dark energy, the dark matter sector primarily collapses and virializes whereas the dark energy sector does not virialize. Although the dark energy sector does not virialize it does affect the virialization process of the dark matter sector.
In all the cases of spherical collapse that we have studied in this paper, the dark energy component remains primarily unclustered and homogeneous for suitable small values of the parameters in our model. Our attempt to study collapse in such two-component systems does not only produce unclustered dark energy; in some situations, the closed spherical region does not proceed to a collapse at all. In these cases, we have an eternal expansion of a small local spherical patch in the background of the spatially flat FLRW spacetime. These regions act like voids as the matter density inside them decreases. The dark energy density in these patches exceeds the dark energy density of the background and consequently, we can say that clustered dark energy can also be produced in our model. Whether a spherical patch will end up in a virialized state or an expanding phase depends upon the parameters of the theory and the initial conditions. In these expanding regions the dark matter density remains a factor of ten smaller than that of the background spacetime for some time. Ultimately, as these patches expand, the matter density drops. The dark energy density remains less than the background dark energy density for quintessence fields. For phantom fields, the dark energy density in the spherical patch tends to be higher in the voids.
The work in this paper is organized in the following way. In section II we elaborately discuss the semi-Newtonian theory of virialization in a two-component universe where the two components are related to the dark matter sector and the dark energy sector. We call this treatment semi-Newtonian as we use the language of Newtonian potentials although the energy conservation equations are obtained from an expanding universe paradigm. In section III we present the general relativistic formalism for our work. This section contains the junction conditions used to join a collapsing/expanding closed FLRW spacetime with the generalized Vaidya spacetime. Section IV presents the basic equations which guide the collapse of a spherical FLRW patch in the presence of a quintessence/phantom-like scalar field and dark matter. In section V we present the results obtained from the calculations in the previous section. This section shows the details of the various collapsing processes. Section VI gives a summary of the work presented in this paper.
## II Virialization state of dark matter in the presence of dark energy
The total gravitational potential of the over-dense region of a two-fluid system consisting of dark matter (DM) and dark energy (DE) can be written as [25]
\[V_{T}=\frac{1}{2}\int_{v}\rho_{DM}\,\phi_{DM}\,dv+\frac{1}{2}\int_{v}\rho_{DM}\,\phi_{DE}\,dv+\frac{1}{2}\int_{v}\rho_{DE}\,\phi_{DM}\,dv+\frac{1}{2}\int_{v}\rho_{DE}\,\phi_{DE}\,dv\,, \tag{1}\]
where \(\phi_{DM}\) and \(\phi_{DE}\) are the gravitational potentials of dark matter and dark energy, respectively, and \(\rho_{DM}\) and \(\rho_{DE}\) are the energy density of dark matter and dark energy, respectively. The integration is done over the whole volume (\(v\)) of the spherical over-dense region. The non-zero values of the four integrations written above can be used to classify the two-fluid system into the following four distinguishable scenarios,
* In the first scenario, the dark energy effect is totally neglected, assuming that only the first integral in Eq. (1) is non-zero. In this case, the spherical overdensities of dark matter behave like an isolated sub-universe and virialize at a certain radius [7].
* If only the first two integrals in Eq. (1) contribute to the total gravitational potential of the over-dense region, then it can be shown that there exists a non-negligible effect of dark energy which affects the virialization process of the spherically symmetric over-dense regions of dark matter. In this scenario, dark energy cannot cluster and virialize with dark matter, and therefore, the dark energy density inside the over-dense region is similar to the external dark energy density. Hence, this type of model is known as the homogeneous dark energy model [26; 23; 27; 28].
* In the third scenario, dark energy does not virialize with dark matter though it can cluster inside the over-dense regions. In this scenario, it is considered that from the starting point of the matter-dominated era, dark energy moves synchronously with the dark matter on both the Hubble scale and the galaxy cluster scale. This scenario is known as the clustered dark energy scenario [29; 30; 31; 21; 25; 32; 20; 32].
* At last, in the fourth scenario, dark energy can cluster and also virializes with dark matter inside the spherical over-dense regions [25].
If we consider no influence of dark energy on the evolution of the dark matter over-densities then, as mentioned previously, only the first integral of Eq. (1) contributes to the total gravitational potential of the over-dense regions. This scenario is described by the top-hat collapse model, where one self-gravitating fluid, inside a spherical over-dense region, virializes [7]. In the top-hat collapse model, the over-dense region expands first with the background but at a slower rate than that of the background, and then, after a certain turnaround radius (\(R_{max}\)), the over-dense region starts collapsing. At the turnaround radius, momentarily the kinetic energy of the over-dense
region becomes zero, and the total gravitational potential energy (\(V_{T}\)) of the region becomes the total energy (\(E_{T}\)) of the same at that moment. The total energy inside the spherical over-dense region, when it reaches the turnaround radius (\(R_{max}\)), is:
\[E_{T}|_{t=t_{max}}=V_{T}=\frac{1}{2}\int_{v_{max}}\ \ \rho_{DM}\ \ \phi_{DM}\ dv=-\frac{3M^{2}}{5R_{max}}\,, \tag{2}\]
where \(t_{max}\) is the turnaround time. At the virialization time \(t=t_{vir}\), the total kinetic energy of the over-dense region is \(E_{KE}|_{t_{vir}}=-\frac{V_{T}|_{t_{vir}}}{2}\). Therefore, at the virialization time, the total energy of the over-dense region is \(E_{T}|_{t_{vir}}=\frac{V_{T}|_{t_{vir}}}{2}\). Hence, using energy conservation, one can show that the spherically symmetric overdensities virialize when \(\eta=\frac{R_{vir}}{R_{max}}=0.5\). In order to model the dynamics of the over-dense region, if one uses the closed FLRW spacetime, then it can be shown that \(t_{vir}=1.81t_{max}\).
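As a short consistency check in the same notation (with \(G=1\)): the kinetic energy vanishes at turnaround, so the total energy is given by Eq. (2); imposing \(E_{KE}|_{t_{vir}}=-V_{T}|_{t_{vir}}/2\) and conserving energy between the two epochs gives
\[-\frac{3M^{2}}{5R_{max}}=\frac{1}{2}\left(-\frac{3M^{2}}{5R_{vir}}\right)\quad\Longrightarrow\quad R_{vir}=\frac{R_{max}}{2},\]
that is \(\eta=0.5\). The ratio \(t_{vir}\simeq 1.81\,t_{max}\) follows from the cycloid solution of the closed dust model, \(R=A(1-\cos\theta)\), \(t=B(\theta-\sin\theta)\): the radius \(R_{max}/2\) is reached on the collapsing branch at \(\theta=3\pi/2\), so \(t_{vir}/t_{max}=(3\pi/2+1)/\pi\approx 1.8\).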
In [26] and [23], the authors investigated the cosmological scenario where the dark energy is homogeneous, i.e., the internal and external dark energy densities are the same. In [26], the authors studied the effect of the cosmological constant on the virialization of the spherical over-densities, whereas in [23], the authors consider the homogeneous quintessence dark energy model. As previously mentioned, in the homogeneous dark energy scenarios, the dark energy does not cluster and virialize inside the spherical over-densities of dark matter, however, the virialization process of the over-densities is modified since there exist a non-zero energy density and negative pressure of dark energy, and this effect of dark energy can be realized by the different values of \(\eta\). For the homogeneous dark energy scenario, the total potential energy of the over-dense region can be written as [25]
\[V_{T}=\int_{v}\ \rho_{DM}\ \ \phi_{DM}\ dv+\int_{v}\rho_{DM}\ \phi_{DE}\ dv\, \tag{3}\]
which gives
\[V_{T}=-\frac{3M^{2}}{5R}\left[1-\ \frac{q}{2}(1+3\omega)\left(\frac{\bar{a}}{ \bar{a}_{max}}\right)^{-3(1+\omega)}\left(\frac{R}{R_{max}}\right)^{3}\right] \tag{4}\]
where \(\omega\) is the equation of state of dark energy and \(q=\left(\frac{\rho_{DE}}{\rho_{DM}}\right)_{t=t_{max}}\), which is the ratio of energy densities of dark energy and dark matter inside the spherical over-dense regions at the turnaround time \(t=t_{max}\). Here and throughout the paper, \(a\) and \(\bar{a}\) represent the scale factor of the spherical over-dense region and the background, respectively. Since the physical radius \(R=ra(t)\), at the turnaround time, when the over-dense region reaches its maximum physical radius, the scale factor of the over-dense region also reaches its upper-limit \(a_{max}\), and at that moment, the scale factor of background is \(\bar{a}_{max}\). At the virialization time \(t=t_{vir}\), the scale factors of the over-dense region and background become \(a_{vir}\) and \(\bar{a}_{vir}\), respectively. Using Eq. (4) and the virialization condition \(\left(V_{T}+\frac{1}{2}R\frac{\partial V_{T}}{\partial R}\right)_{t=t_{vir}}= (V_{T})_{t=t_{max}}\), we can get the following cubic equation of \(\eta\):
\[4Q\eta^{3}\left(\frac{\bar{a}_{vir}}{\bar{a}_{max}}\right)^{-3(1+\omega)}-2 \eta(1+Q)+1=0\, \tag{5}\]
where \(Q=-(1+3\omega)\frac{q}{2}\). It can be verified that for the vanishing value of \(q\) (i.e., neglecting the dark energy effect), the solution of the above equation is \(\eta=0.5\), which we obtained earlier for the top-hat collapse model. For a homogeneous cosmological constant model, where \(\omega=-1\), the above cubic equation for \(\eta\) becomes [26; 27]:
\[4q\eta^{3}-2\eta(1+q)+1=0. \tag{6}\]
If we consider small value of \(q\), the solution of the above equation for \(\eta\) can be written as [27]
\[\eta=0.5-0.25q-0.125q^{2}+\mathcal{O}(q^{3})\, \tag{7}\]
which implies the value of \(\eta\) is always less than \(0.5\) for models involving the cosmological constant. The presence of \(\Lambda\) makes the over-dense regions collapse more to attain the virialization state.
In the homogeneous dark energy model, since the background universe continues expanding after the virialization of the over-dense regions, the density of the dark energy (with \(\omega\neq-1\)) in the virialized over-dense region also changes with time, and this is a big problem with the homogeneous dark energy model. This problem is discussed elaborately in [25]. This problem does not appear for models involving the cosmological constant since the density of dark energy always remains constant in such cases. The aforementioned problem is resolved in clustered dark energy models, where at the galaxy cluster scale, dark energy can cluster and virialize inside the over-dense regions. In this scenario, the total gravitational potential energy of the spherical over-dense regions can be written as [25]:
\[V_{T}=-\frac{3M^{2}}{5R}-(2+3\omega)\frac{3M^{2}}{5R}q\left(\frac{R}{R_{max}} \right)^{-3\omega}-(1+3\omega)\frac{3M^{2}}{5R}q^{2}\left(\frac{R}{R_{max}} \right)^{-6\omega}\, \tag{8}\]
where each of the integrations in Eq. (1) has non-zero value. Using the virialization condition and the above expression
of the total gravitational potential energy one can get the following equation for \(\eta\)[25]
\[\left[1+(2+3\omega)q+(1+3\omega)q^{2}\right]\eta-\frac{1}{2}(2+3\omega)(1-3\omega )q\eta^{-3\omega}-\frac{1}{2}(1-6\omega)(1+3\omega)q^{2}\eta^{-6\omega}=\frac{1 }{2}. \tag{9}\]
There exists another scenario where the dark energy only can cluster inside the spherical over-densities; however, it cannot virialize at that scale. For this scenario, the total potential energy of the spherical regions can be written as
\[V_{T}=-\frac{3M^{2}}{5R}\left[1+q\left(\frac{R}{R_{max}}\right)^{-3\omega} \right]\, \tag{10}\]
from which we get the following equation for \(\eta\)[25]
\[\eta(1+q)-\frac{q}{2}(1-3\omega)\eta^{-3\omega}=\frac{1}{2}. \tag{11}\]
Fig. (1) depicts how much the value of \(\eta\) deviates from \(0.5\) for different values of \(q\) if we do not neglect the dark energy effect in the evolution of the spherical over-dense regions. In that figure, the brown line shows how \(\eta\) changes with \(q\) in those scenarios where dark energy can cluster and virialize inside the over-dense regions of dark matter. For this scenario, one can verify that \(\eta\) is always greater than \(0.5\). However, in the case where clustered dark energy cannot virialize, the virialized radius of the spherical over-dense region becomes smaller than half of the turnaround radius (i.e., \(\eta<0.5\)) which is shown by the green line in Fig. (1). For both these cases, the equation of the state of dark energy is \(\omega=-0.75\). On the other hand, the blue curve in Fig. (1) shows \(\eta<0.5\) for the case involving the cosmological constant, however, the value of \(\eta\) in this scenario is greater than that in the scenario where dark energy can cluster but cannot virialize. Therefore, we can see that the presence of negative pressure in the dark energy fluid can create distinguishable large-scale structures of dark matter.
In the next section, we use a two-fluid model to describe one of the cosmological scenarios discussed above, where the dark energy is homogeneous and it influences the collapsing dynamics of the over-dense dark matter region.
## III Gravitational collapse in the presence of dust-like matter and a scalar field
As we discussed in the previous section, in this paper, we study the dynamics of a perfect fluid made of dust-like matter and a scalar field (\(\phi(t)\)) in order to understand the structure formation of dark matter in the presence of dark energy. Since we consider a minimally coupled scalar field with the dust-like matter, the energy-momentum tensor of the resultant fluid (\(T^{\mu\nu}\)) can be written as the sum of the energy-momentum tensors of the scalar field and the matter:
\[T^{\mu\nu}=(T^{\mu\nu})_{m}+(T^{\mu\nu})_{\phi}\, \tag{12}\]
where \((T^{\mu\nu})_{m}\) and \((T^{\mu\nu})_{\phi}\) correspond to the energy-momentum tensor of dust-like matter and the scalar field respectively. Therefore, \((T^{\mu}_{\mu})_{\phi}=\text{diag}(-\rho_{\phi},\rho_{\phi},\rho_{\phi},\rho_{ \phi})\) and \((T^{\mu}_{\mu})_{m}=\text{diag}(-\rho_{\text{m}},0,0,0).\) Here, we consider the collapsing fluid is homogeneous and spherically symmetric. In order to model the dynamics of the over-dense region of dark matter in the presence of dark energy, we use closed FLRW spacetime:
\[ds^{2}=-dt^{2}+\frac{a^{2}(t)}{1-kr^{2}}dr^{2}+r^{2}a^{2}(t)(d \theta^{2}+\sin^{2}\theta d\Phi^{2})\, \tag{13}\]
where \(a(t)\) is the scale factor of the over-dense region and the constant \(k\) can be \(0,\pm 1\). If \(k=0\) we have a flat spatial part whereas negative and positive \(k\) imply an open or closed spatial section. We could have taken the metric of the over-dense region as a spatially flat FLRW metric, in that case, there will be no turnaround radius. We want to generalize the top-hat collapse in the presence of dark energy and for this, we require a turnaround. For a continual gravitational collapse, singularity forms when the scale factor \(a(t)\) becomes zero at a comoving time \(t_{s}\). At the initial stage of the gravitational collapse (\(t=0\)), \(a(t)\) can attain any positive definite value that can always be rescaled to one. Therefore, we consider \(a(t=0)=1\). Since dark matter and dark energy should also be present in the background of the over-dense regions, we model the background by the above-mentioned two-fluid model, and we describe the dynamics of the background using flat FLRW spacetime:
\[ds^{2}=-dt^{2}+\bar{a}^{2}(t)dr^{2}+r^{2}\bar{a}^{2}(t)(d\theta ^{2}+\sin^{2}\theta d\Phi^{2})\, \tag{14}\]
where as mentioned before, the scale factor of the background is denoted by \(\bar{a}(t)\).
In the present paper, as stated above, we describe the dynamics of the over-dense region by closed FLRW space-time, and the background is modeled by flat FLRW space-time. However, in order to describe a matter flux through the boundary of the over-dense region, in the immediate neighborhood of the over-dense region, we consider an external generalized Vaidya space-time. It should be noted that we do not consider Vaidya space-time as a background space-time. The background at the Hubble scale is modeled by flat FLRW space-time.
Vaidya space-time is used only to describe the local dynamics of matter around the boundary of the over-dense patches.
The boundary of the over-dense region is a timelike hyper-surface \(\Sigma=r-r_{b}=0\), and the dynamical spacetime structure that we consider here is internally (\(\mathcal{V}^{-}\)) closed FLRW metric and externally (\(\mathcal{V}^{+}\)) exploding generalized Vaidya spacetime:
\[dS_{-}^{2} = -dt^{2}+a^{2}(t)\left(\frac{dr^{2}}{1-r^{2}}+r^{2}d\Omega^{2}\right) \tag{15}\] \[= -dt^{2}+a^{2}(t)d\Psi^{2}+a^{2}(t)\sin^{2}\Psi d\Omega^{2}\,\] \[dS_{+}^{2} = -\left(1-\frac{2M(r_{v},v)}{r_{v}}\right)dv^{2}-2dvdr_{v}+r_{v}^ {2}d\Omega^{2}\,\]
where we consider co-moving radius \(r=\sin\Psi\) and \(r_{v}\) and \(v\) are the coordinates corresponding to the generalised Vaidya spacetime. At the timelike hyper-surface (\(\Sigma\)) where the internal and external spacetimes match with each other, \(\Psi\) becomes \(\Psi_{b}\) and the \(v\) and \(r_{v}\) become the function of co-moving time \(t\). Therefore, at \(\Sigma\), we can write down the induced metric from both the sides as,
\[dS_{-}^{2}|_{\Sigma} = -dt^{2}+a^{2}(t)\sin^{2}\Psi_{b}d\Omega^{2}\, \tag{17}\] \[dS_{+}^{2}|_{\Sigma} = -\left(\dot{v}^{2}-\frac{2M(r_{v},v)}{r_{v}}\dot{v}^{2}+2\dot{v}r _{v}\right)dt^{2}+r_{v}^{2}d\Omega^{2}\,\]
where \(\dot{v}\) and \(\dot{r}_{v}\) are the partial derivatives of \(v\) and \(r_{v}\) with respect to co-moving time \(t\). As we know, for the smooth matching of two spacetimes at a hyper-surface, the necessary and sufficient condition is that the induced metric (\(h_{ab}\)) and the extrinsic curvature (\(K_{ab}\)) from both the sides should match at the junction. From the induced metric matching of the above spacetime structures on \(\Sigma\) yields:
\[\left(\dot{v}^{2}-\frac{2M(r_{v},v)}{r_{v}}\dot{v}^{2}+2\dot{v} \dot{r}_{v}\right) = 1\, \tag{19}\] \[r_{v} = a(t)\sin\Psi_{b}. \tag{20}\]
In order to calculate the extrinsic curvature (\(K_{ab}\)), one needs the information of the spacelike normal (\(n^{\alpha}\)) to \(\Sigma\) from both the sides. From the side \(\mathcal{V}^{-}\), the four velocity (\(u^{\alpha}\)) of the comoving shell \(\Sigma\) can be written as: \(u^{\alpha}_{-}\equiv\{1,0,0,0\}\). Using \((n_{\alpha})_{-}n^{\alpha}_{-}=1\) and \((n_{\alpha})_{-}u^{\alpha}_{-}=0\), we get
\[(n_{\alpha})_{-}\equiv\{0,a(t),0,0\}.\]
For \(\mathcal{V}^{+}\), we can write down the following expression of \(u^{\alpha}_{+},n^{\alpha}_{+}\) as,
\[u^{\alpha}_{+} \equiv \{\dot{v},\dot{r}_{v},0,0\}\, \tag{21}\] \[n^{\alpha}_{+} \equiv \{-\frac{1}{\sqrt{1-\frac{2M}{r_{v}}+2\frac{dr_{v}}{dv}}},\frac{ 1-\frac{2M}{r_{v}}+\frac{dr_{v}}{dv}}{\sqrt{1-\frac{2M}{r_{v}}+2\frac{dr_{v}}{ dv}}},0,0\}\.\]
Using the expressions of \(u^{\alpha}\) and \(n^{\alpha}\) from both the sides, we get the following expressions of azimuthal components
Figure 3: Figure depicts the allowed parameters’ space (i.e., shown by blue shaded region) of \(V_{0}\) and \(\rho_{m_{0}}\) for which the over-dense region collapses in the presence of phantom-like scalar field after reaching its maximum physical radius.
Figure 2: Figure depicts the allowed parameters’ space (i.e., shown by blue shaded region) of \(V_{0}\) and \(\rho_{m_{0}}\) for which the over-dense region collapses in the presence of quintessence-like scalar field after reaching its maximum physical radius.
of extrinsic curvature tensors:
\[K^{-}_{\theta\theta} = a(t)\sin\Psi_{b}\cos\Psi_{b}\, \tag{23}\] \[K^{+}_{\theta\theta} = r_{v}\frac{1-\frac{2M}{r_{v}}+\frac{dr_{v}}{dv}}{\sqrt{1-\frac{2M }{r_{v}}+2\frac{dr_{v}}{dv}}}. \tag{24}\]
Equating \(K^{+}_{\theta\theta}\) and \(K^{-}_{\theta\theta}\), we get,
\[\cos\Psi_{b}=\frac{1-\frac{2M}{r_{v}}+\frac{dr_{v}}{dv}}{\sqrt{1-\frac{2M}{r_{ v}}+2\frac{dr_{v}}{dv}}}\,. \tag{25}\]
Equating the temporal components of \(K_{tt}\) from both sides we get,
\[M(r_{v},v)_{,r_{v}}=\frac{F}{2\sin\psi_{b}a(t)}+\sin^{2}\psi_{b}a\tilde{a}\, \tag{26}\]
where \(F\) is the Misner-Sharp mass of the internal collapsing spacetime which should follow the following condition at the boundary,
\[F(t,\sin\psi_{b})=2M(r_{v},v). \tag{27}\]
From Eq. (26), it can be seen how the flux of the matter at the boundary depends upon the scale factor and the Misner-Sharp mass (\(F\)) of the collapsing spacetime. In the present case, \(F\) is a function of time only, since it represents the internal homogeneous two-fluid system. Due to the time dependence of \(F\), pressure is non-zero internally and it can be written as:
\[p=-\frac{\dot{F}}{\dot{R}R^{2}}. \tag{28}\]
A non-zero pressure at the boundary of a system implies the existence of non-zero matter flux through the boundary and that is the very reason why we consider generalized Vaidya spacetime in the immediate neighborhood of the internal two-fluid system. From the above expression of pressure, it can be understood that the presence of negative pressure at the boundary of an internal spacetime implies an inward matter flux through the boundary for an expanding scenario and an outward matter flux for a collapsing scenario. In our model, the non-zero internal pressure is generated due to the presence of a scalar field, and therefore, the scalar field is responsible for the non-zero flux through the boundary. On the other hand, the
Figure 4: Figure shows variation of different variables with variation of \(V_{0}\) for scalar potential \(V(\phi)=V_{0}e^{-\lambda\phi}\) for Quintessence field.
matter-field part of the two-fluid system does not leak out of the boundary, since it has zero pressure. Only the scalar field continuously is leaking out/in throughout the whole dynamics of the over-dense region. If there exists a non-minimal coupling between the matter and scalar field then a non-zero pressure at the boundary can make the matter-field flux out/in along with the scalar field. In this paper, we consider only the minimal coupling between the matter and scalar field, and therefore, the above-mentioned scenario where the matter has non-zero flux at the boundary is not possible. The flux of the scalar field from inside gives rise to non-zero components of the energy-momentum tensor of the external generalized Vaidya spacetime which is seeded by a fluid composed of null dust and perfect fluid. Therefore, the energy-momentum tensor of the internal spacetime and the external spacetime can be respectively written as:
\[T^{-}_{\mu\nu} = (\rho_{m}+\rho_{\phi}+p_{\phi})u^{-}_{\mu}u^{-}_{\nu}+p_{\phi}g^{ -}_{\mu\nu}\,\] \[T^{+}_{\mu\nu} = \bar{\epsilon}l_{\mu}k_{\nu}+(\epsilon+\mathcal{P})\left(l_{\mu} k_{\nu}+l_{\nu}k_{\mu}\right)+\mathcal{P}g^{+}_{\mu\nu}\, \tag{29}\]
where \(\bar{\epsilon}\), \(\epsilon\), and \(\mathcal{P}\) can be written as:
\[\bar{\epsilon}=-\frac{2M,v}{r_{v}^{2}},\ \ \epsilon=\frac{2M,r_{v}}{r_{v}^{2}},\ \ \text{and}\ \ \mathcal{P}=-\frac{M,r_{v}r_{v}}{r_{v}}, \tag{30}\]
and \(l^{\mu},k^{\mu}\) are two null vectors which follow the condition: \(l^{\mu}k_{\mu}=-1\). Due to the existence of non-zero pressure at the boundary, the flux from the internal spacetime at the boundary seeds the components of the energy-momentum tensor of the external generalized Vaidya spacetime. In the next section, we show the dynamics of the two-fluid system by solving Einstein's equations for the internal spacetime. Using the freedom to choose one free function, we consider the scalar field is either a quintessence field or a phantom field, and since the matter is minimally coupled with the scalar field, \(\rho_{m}\) varies as \(\frac{1}{a^{3}}\). This prior consideration makes the matter part evolve like a closed dust ball, while the internal density of the scalar field stays almost constant throughout the evolution which implies a non-zero flux of the scalar field through the boundary. We consider the initial matter density \(\rho_{m_{0}}\) to be \(10^{3}-10^{4}\) times greater than the initial density of the scalar field which allows us to use the virialization technique discussed in Sec. (II) in order to understand the virialization process of the two-fluid system, though the two-fluid system in our model is not a closed system.
Figure 5: Figure shows variation of different variables with variation of \(\lambda\) for scalar potential \(V(\phi)=V_{0}e^{-\lambda\phi}\) for Quintessence field.
Gravitational collapse solutions of matter in the presence of quintessence and phantom-like scalar fields
Using Einstein's equation for the FLRW space-time (Eq. (13,14)), one can write down the effective density and pressure of the resultant fluid as,
\[\rho = \rho_{\phi}+\rho_{m}=\frac{1}{2}\epsilon\dot{\phi}^{2}+V(\phi)+ \rho_{m}=\frac{3\dot{a}^{2}}{a^{2}}+\frac{3k}{a^{2}} \tag{31}\] \[p = p_{\phi}=\frac{1}{2}\epsilon\dot{\phi}^{2}-V(\phi)=-\frac{2\ddot {a}}{a}-\frac{\dot{a^{2}}}{a^{2}}-\frac{k}{a^{2}}\, \tag{32}\]
where the \(V(\phi)\) is the potential of the scalar field, \(\epsilon\) is a real-valued constant, \(k\) represents the curvature of 3-space, and over-dot denotes the time derivatives of the function. In this section, the above expressions of \(\rho\) and \(p\) and all other differential equations are written in a general way, where \(k=0\) implies the corresponding equations are related to the background, on the other hand, \(k=1\) implies they are related to the over-dense region. From Eq. (31) and Eq. (32), it can be seen that there are four unknown functions: \(V(\phi),\phi(a),\dot{a}(a)\) and \(\rho_{m}(a)\) and two differential equations, and therefore, we have the freedom to choose two free functions along with the initial conditions to solve the differential equations. As stated before, here we consider the scenario where the scalar field is minimally coupled with dust-like matter. Therefore, the energy-momentum tensors of matter and scalar field follow the conservation equation separately:
\[\nabla_{a}T_{\phi}^{ab} = 0\implies\ddot{\phi}+3\frac{\dot{a}}{a}\dot{\phi}+V_{,\phi}=0\, \tag{33}\] \[\nabla_{a}T_{m}^{ab} = 0\implies\dot{\rho}_{m}+3\frac{\dot{a}}{a}\rho_{m}=0. \tag{34}\]
Consequently we have \(\rho_{m}\propto\frac{1}{a^{3}}\). This shows that ultimately we have to choose only one function out of \(V(\phi),\phi(a),\dot{a}(a)\) to solve the differential Eqs. (31),(32).
Using the expression of energy density and pressure of the scalar field, we can write
\[\rho_{\phi}+p_{\phi}=\epsilon\dot{\phi}^{2}=\epsilon\phi_{,a}^{2}\dot{a}^{2}\, \tag{35}\]
where we use the chain rule \(\dot{\phi}^{2}=\phi_{,a}^{2}\dot{a}^{2}\), with \(\phi_{,a}\) denoting the derivative with respect to \(a\). From Eq. (31) we get
\[\dot{a}=\pm\sqrt{\frac{\rho_{\phi}+\rho_{m}}{3}a^{2}-k}\, \tag{36}\]
where the \(+\) and \(-\) signs are for expanding and collapsing scenarios, respectively. Now, differentiating (36) with respect to the comoving time (\(t\)) we get,
\[\ddot{a}=\frac{a}{3}\left[\rho_{\phi}+\rho_{m}+\frac{a}{2}(\rho_{\phi,a}+\rho _{m,a})\right]\, \tag{37}\]
where \(\rho_{\phi,a}\) and \(\rho_{m,a}\) are derivatives of the scalar field energy density and the fluid energy density respectively, with respect to the scale factor \(a\).
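The intermediate step is simply the chain rule \(\dot{\rho}=\rho_{,a}\,\dot{a}\) applied to Eq. (36):
\[\ddot{a}=\frac{1}{2\dot{a}}\,\frac{d}{dt}\!\left[\frac{(\rho_{\phi}+\rho_{m})a^{2}}{3}-k\right]=\frac{1}{2\dot{a}}\left[\frac{(\rho_{\phi,a}+\rho_{m,a})\,\dot{a}\,a^{2}}{3}+\frac{2a\dot{a}\,(\rho_{\phi}+\rho_{m})}{3}\right],\]
and cancelling \(\dot{a}\) gives Eq. (37).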
Using Eqs. (35) and (36) we get
\[\rho_{\phi}(1-\frac{\epsilon\phi_{,a}^{2}a^{2}}{3})-\rho_{m}\frac{\epsilon \phi_{,a}^{2}a^{2}}{3}+p_{\phi}+k\epsilon\phi_{,a}^{2}=0. \tag{38}\]
From Eq. (31) we get
\[p_{\phi}=\rho_{\phi}-2V(\phi). \tag{39}\]
Since a quintessence-like scalar field has positive kinetic energy, \(\epsilon=1\), and we can write down the following expression for \(\rho_{\phi}\) using Eqs. (38) and (39):
\[\rho_{\phi}=\frac{\frac{\rho_{m}\phi_{,a}^{2}a^{2}}{6}+V(\phi)-\frac{k\phi_{, a}^{2}}{2}}{(1-\frac{\phi_{,a}^{2}a^{2}}{6})}. \tag{40}\]
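Explicitly, inserting Eq. (39) into Eq. (38) with \(\epsilon=1\) and collecting the \(\rho_{\phi}\) terms gives
\[\rho_{\phi}\left(2-\frac{\phi_{,a}^{2}a^{2}}{3}\right)=\rho_{m}\,\frac{\phi_{,a}^{2}a^{2}}{3}+2V(\phi)-k\,\phi_{,a}^{2},\]
and dividing both sides by two yields Eq. (40).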
Now, using Eqs. (35),(36), (37) and (38), we get
\[\rho_{\phi,a}=\frac{-\phi_{,a}^{2}\rho_{\phi}a^{2}-(3+a^{2}\phi_{,a}^{2})\rho _{m}+3k\phi_{,a}^{2}}{a}-\rho_{m,a}. \tag{41}\]
Now, differentiating Eq. (40) with respect to \(a\) and using equation Eq. (41) we get the following second order differential equation
\[\frac{-\phi_{,a}^{2}\rho_{\phi}a^{2}-(3+a^{2}\phi_{,a}^{2})\rho_{m }+3k\phi_{,a}^{2}}{a}-\rho_{m,a} = \frac{1}{3(1-\frac{\phi_{,a}^{2}a^{2}}{6})^{2}}\{3V_{,\phi}\phi_ {,a}+\frac{\rho_{m,a}\phi_{,a}^{2}a^{2}}{2}+\rho_{m}a\phi_{,a}^{2}-\frac{\rho _{m,a}\phi_{,a}^{4}a^{4}}{12}\] \[+ \rho_{m}\phi_{,a}\phi_{,aa}a^{2}-\frac{V_{,\phi}\phi_{,a}^{3}a^{ 2}}{2}+V(\phi)a\phi_{,a}^{2}+V(\phi)a^{2}\phi_{,a}\phi_{,aa}-3k\phi_{,a}\phi_{, aa}-\frac{k\phi_{,a}^{4}a}{2}\}\,\]
where \(\phi_{,aa}\) is the second order derivative of scalar field with respect to \(a\). As we have mentioned before, we have to choose only one function among \(V(\phi),\phi(a),\dot{a}(a)\) to solve the dynamics of collapse. Therefore, here, we choose \(V(\phi)=V_{0}e^{-\lambda\phi}\) which is generally considered as the potential of quintessence-like scalar fields [18]. Now,
\[-4V_{0}e^{-\lambda\phi}\phi_{,a}a^{3}+9\phi_{,a}a + \frac{V_{0}e^{-\lambda\phi}\phi_{,a}^{3}a^{5}}{2}+\frac{\rho_{m_{0} }\phi_{,a}^{3}a^{2}}{4}-\phi_{,a}^{3}a^{3}+3\lambda a^{2}V_{0}e^{-\lambda\phi}- \frac{5\rho_{m_{0}}\phi_{,a}}{2}-\rho_{m_{0}}a\phi_{,aa}+3a^{2}\phi_{,aa}-V_{0} e^{-\lambda\phi}a^{4}\phi_{,aa}\] \[- \frac{\lambda V_{0}e^{-\lambda\phi}\phi_{,a}^{2}a^{4}}{2}=0\,\]
Figure 6: Figure shows variation of different variables with variation of \(\rho_{m_{0}}\) for scalar potential \(V(\phi)=V_{0}e^{-\lambda\phi}\) for Quintessence field.
and for \(k=0\),
\[-4V_{0}e^{-\lambda\phi}\phi_{,a}a^{3}+\frac{V_{0}e^{-\lambda\phi} \phi_{,a}^{3}a^{5}}{2}+\frac{\rho_{m_{0}}\phi_{,a}^{3}a^{2}}{4}+3\lambda a^{2}V_{ 0}e^{-\lambda\phi}-\frac{5\rho_{m_{0}}\phi_{,a}}{2}-\rho_{m_{0}}a\phi_{,aa}- \frac{\lambda V_{0}e^{-\lambda\phi}\phi_{,a}^{2}a^{4}}{2}-V_{0}e^{-\lambda\phi} a^{4}\phi_{,aa}=0\.\]
We can now solve the above differential equations for \(k=0,1\) to get the functional form of \(\phi(a)\) and using the solution of \(\phi(a)\) and the differential Eqs. (31), (32), we can get the expression of scale factor \(a\) as a function of comoving time \(t\). Since the differential Eq. (43) corresponds to \(k=1\), solving that equation and using Eqs. (31), (32), we can get the dynamics of the resultant fluid in the over-dense region. On the other hand, the solution of Eq. (IV.1) shows the dynamics of the resultant fluid in the background, since that equation corresponds to \(k=0\).
It is generally considered that the phantom-like scalar field has negative kinetic energy and therefore, for the phantom field \(\epsilon=-1\). For the phantom field, the above two differential equations become,
\[4V_{0}e^{-\lambda\phi}\phi_{,a}a^{3}-9\phi_{,a}a + \frac{V_{0}e^{-\lambda\phi}\phi_{,a}^{3}a^{5}}{2}+\frac{\rho_{m_{ 0}}\phi_{,a}^{3}a^{2}}{4}-\phi_{,a}^{3}a^{3}+3\lambda a^{2}V_{0}e^{-\lambda \phi}+\frac{5\rho_{m_{0}}\phi_{,a}}{2}+\rho_{m_{0}}a\phi_{,aa}-3a^{2}\phi_{, aa}+V_{0}e^{-\lambda\phi}a^{4}\phi_{,aa} \tag{45}\] \[+ \frac{\lambda V_{0}e^{-\lambda\phi}\phi_{,a}^{2}a^{4}}{2}=0\,\]
and for k=0,
\[4V_{0}e^{-\lambda\phi}\phi_{,a}a^{3}+\frac{V_{0}e^{-\lambda\phi} \phi_{,a}^{3}a^{5}}{2}+\frac{\rho_{m_{0}}\phi_{,a}^{3}a^{2}}{4}+3\lambda a^{2 }V_{0}e^{-\lambda\phi}+\frac{5\rho_{m_{0}}\phi_{,a}}{2}+\rho_{m_{0}}a\phi_{, aa}+\frac{\lambda V_{0}e^{-\lambda\phi}\phi_{,a}^{2}a^{4}}{2}+V_{0}e^{- \lambda\phi}a^{4}\phi_{,aa}=0\, \tag{46}\]
where we consider \(V(\phi)=V_{0}e^{-\lambda\phi}\) for the phantom like scalar field.
The differential Eqs. (43), (44), (45), and (46) are second-order differential equations for \(\phi(a)\). Therefore, we need to consider two initial conditions, \(\phi(a=1)\) and \(\phi^{\prime}(a=1)\), to solve the differential equations. Here we have taken the initial conditions as \(\phi(a=1)=0.001\), \(\phi^{\prime}(a=1)=0.00001\) for solving the differential Eqs. (43) and (44). We have three parameters: \(V_{0}\), \(\rho_{m_{0}}\) and \(\lambda\). In order to compare our model with the standard top-hat collapse model, in this paper we only discuss those scenarios where the initial value of \(\dot{a}\) is positive. The initial positive value of \(\dot{a}\) ensures an initial expansion phase of the over-dense region. Now, depending on the values of the parameters \(V_{0}\), \(\rho_{m_{0}}\) and \(\lambda\), the over-dense region may reach its maximum physical radius (i.e., at the turnaround time \(t=t_{max}\)), from where it starts collapsing. In Fig. (2) and Fig. (3), we show the parameters' space of \(V_{0}\) and \(\rho_{m_{0}}\) which allows the above-mentioned dynamics of the over-dense region in the presence of a quintessence-like scalar field and a phantom-like scalar field, respectively. In both cases, we consider \(\lambda=1\). The values of \(V_{0}\) and \(\rho_{m_{0}}\) in the unshaded region correspond to the ever-expanding dynamics of the over-dense patches.
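For readers who want to reproduce the qualitative behaviour described here, a minimal numerical sketch is given below. Instead of integrating the \(\phi(a)\) equations above, it evolves the equivalent system (31)-(34) in comoving time for the over-dense region (\(k=1\), quintessence case \(\epsilon=+1\)) with the parameter and initial-condition values quoted in this section; the variable names, integrator settings, and stopping criterion are illustrative choices and not the authors' code.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Parameters and initial data quoted in the text (over-dense region, k = 1)
    V0, lam, rho_m0, k = 0.001, 1.0, 5.0, 1.0
    phi0, dphi_da0 = 0.001, 1.0e-5           # phi(a=1) and phi'(a=1)

    V = lambda phi: V0 * np.exp(-lam * phi)
    dV = lambda phi: -lam * V0 * np.exp(-lam * phi)

    def rhs(t, y):
        a, adot, phi, phidot = y
        rho_m = rho_m0 / a**3                                 # Eq. (34): dust conservation
        rho_phi = 0.5 * phidot**2 + V(phi)                    # quintessence, epsilon = +1
        p_phi = 0.5 * phidot**2 - V(phi)
        addot = -a * (rho_m + rho_phi + 3.0 * p_phi) / 6.0    # acceleration eq. from (31)-(32)
        phiddot = -3.0 * (adot / a) * phidot - dV(phi)        # Klein-Gordon eq. (33)
        return [adot, addot, phidot, phiddot]

    # Initial expansion: adot(0) > 0 fixed by the constraint (36); the tiny kinetic
    # term of the scalar field is neglected when evaluating rho_phi at t = 0.
    adot0 = np.sqrt((rho_m0 + V(phi0)) / 3.0 - k)
    phidot0 = dphi_da0 * adot0                                # chain rule: phidot = phi_{,a} * adot

    def recollapsed(t, y):                                    # stop shortly before a -> 0
        return y[0] - 1.0e-3
    recollapsed.terminal = True

    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, adot0, phi0, phidot0],
                    events=recollapsed, max_step=0.01, rtol=1e-8)

    a = sol.y[0]
    print(f"a_max = {a.max():.3f} reached at t_max = {sol.t[np.argmax(a)]:.3f}")

Scanning \(V_{0}\) and \(\rho_{m_{0}}\) with this kind of integration, and recording whether \(\dot{a}\) ever changes sign, reproduces the qualitative split between collapsing and ever-expanding patches shown in Figs. (2) and (3).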
Considering \(k=1\) and the values of \(V_{0}\) and \(\rho_{m_{0}}\) from the shaded region of the Figs. (2, 3), it can be shown that the solution of the end state of the gravitational collapse of the two-fluid system is a space-time singularity. Therefore, in order to stabilize the system, like the standard top-hat collapse model, we invoke the Newtonian virialization technique in our model. In the top-hat collapse model, the matter in the over-dense sub-universe is pressureless, and therefore, as discussed before, it virializes when it reaches half of its maximum physical radius. However, as discussed before, when there exist two fluids inside a compact region, and if one of them is non-dust then the virialization radius may not be equal to half of the maximum physical radius. In Sec. II, we have briefly reviewed the works where the effect of dark energy on the virialization of dark matter is studied [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. In the next section, we show that the scalar field behaves almost like homogeneous dark energy in our model and therefore, we can use Eq. (7) to calculate the virialization radius of the over-dense region.
## V Modeling of homogeneous dark energy scenario by the two-fluid model
As we stated before, in our model the scalar field plays the role of dark energy, and the dust-like matter is considered dark matter. In this section, we show how the dynamics of the over-dense region vary when we change the values of \(V_{0},\rho_{m_{0}}\), and \(\lambda\). Below we list the various dynamics of the over-dense region for different values of \(V_{0},\rho_{m_{0}}\), and \(\lambda\).
In Fig.(4), we show how the dynamical quantities like \(a,\omega_{\phi}=p_{\phi}/\rho_{\phi},\rho_{\phi}/\bar{\rho}_{\phi}\), and \(\omega_{t}=p_{\phi}/(\rho_{m}+\rho_{\phi})\) evolve with time for different values of \(V_{0}\), where \(\omega_{\phi}\) is the equation of state of the quintessence-like scalar field, \(\rho_{\phi}/\bar{\rho}_{\phi}\) is the density-ratio between the energy densities of the scalar field in the over-dense region and background, and \(\omega_{t}\) is the effective equation of state of the two-fluid system. In order to show, the dynamics of the above-mentioned dynamical quantities for different values of \(V_{0}\), we consider \(\rho_{m_{0}}=5\), and \(\lambda=1\). From Figs. (2,3), it can be understood that for a fixed value of \(\rho_{m0}\) there exists a \(\mathcal{V}_{0}\) such that for all values of \(V_{0}<\mathcal{V}_{0}\), the over-dense region can have a collapsing phase after the initial phase of expansion. Therefore, for a fixed value of \(\rho_{m_{0}}\), one cannot consider any arbitrarily large value of \(V_{0}\) in order to model the desired top-hat collapse-like dynamics. The above statement is also true for \(\rho_{m_{0}}\) since there exists a lower limit \(\rho_{m_{0}}\) for a fixed value of \(V_{0}\). Therefore, one cannot consider arbitrary large values of both the parameters \(\rho_{m_{0}}\) and \(V_{0}\) to model a top-hat collapse-like dynamics. Hence, in our model, we consider suitable small values of these two parameters. The plots of \(a,\omega_{\phi},\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\), and \(\omega_{t}\) with respect to time for \(V_{0}=0.01,0.001\) are shown in Fig. (4), (4), (4), (4), (4), respectively. In Fig. (4), it can be seen that the density ratio \(\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\) slowly increases with comoving time and stays close to one throughout the total evolution of the over-dense region. The reason behind this increment of the value of the density ratio is that the background density of the scalar field decreases while the internal scalar field density approaches a constant value. However, one can consider suitable small values of \(V_{0}\) to make the density ratio close to one throughout the evolution. Therefore the quintessence-like scalar field in our model approximately behaves like homogeneous dark energy. On the
Figure 7: Variation of the different dynamical variables with \(V_{0}\) for the scalar potential \(V(\phi)=V_{0}e^{-\lambda\phi}\) for the phantom field.
other hand, from Fig. (4(b)), we can see that inside the over-dense region the equation of state of the scalar field is \(\omega_{\phi}\sim-1\) throughout the evolution, which is the reason why the internal density of the scalar field approaches a constant value. Therefore, internally, the scalar field behaves like a cosmological constant \(\Lambda\). In Sec. (II), we discussed the homogeneous dark energy scenario where the dark energy is the cosmological constant. For this case, the solution for \(\eta\), which is the ratio of the virialized radius (\(R_{Vir}\)) to the turnaround radius (\(R_{max}\)), becomes:
\[\eta=0.5-0.25q-0.125q^{2}+\mathcal{O}(q^{3})\,\]
where \(q=\left(\frac{\rho_{DE}}{\rho_{DM}}\right)_{a=a_{max}}\). In our model, at the initial stage, we get \(\rho_{\phi_{0}}=9.990\times 10^{-4}\) and \(\rho_{\phi_{0}}=9.990\times 10^{-3}\) for \(V_{0}=0.001\) and \(V_{0}=0.01\), respectively. Therefore, initially, \(\rho_{\phi_{0}}/\rho_{m_{0}}=1.998\times 10^{-4}\) for \(V_{0}=0.001\) and \(\rho_{\phi_{0}}/\rho_{m_{0}}=1.998\times 10^{-3}\) for \(V_{0}=0.01\), which at the turnaround become \(4.09\times 10^{-3}\) and \(4.09\times 10^{-2}\), respectively. Therefore, in our model, the value of \(\eta\) does not differ much from that in the top-hat model, where \(\eta=0.5\). In the top-hat model, the time interval taken by the over-dense region to reach half of its maximum scale factor is \(2.40\). In our model, the time intervals are \(2.52\) and \(2.79\) for \(V_{0}=0.001\) and \(V_{0}=0.01\), respectively. Therefore, due to the effect of the quintessence-like scalar field, the over-dense region takes a longer time to virialize. Fig. (4(d)) shows that the total or effective equation of state (\(\omega_{t}\)) of the two-fluid system stays close to zero throughout the evolution. It should be noted that here and throughout the remainder of the paper, we set the scale factor \(a=1\) at the initial time when solving the differential Eqs. (43)-(46).
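As a quick arithmetic check (ours, not from the paper's code), plugging the turnaround ratios quoted above into the expansion for \(\eta\) confirms that the virialization radius barely deviates from the top-hat value \(0.5\):

```python
# eta = R_vir / R_max = 0.5 - 0.25*q - 0.125*q**2 + O(q^3),
# with q = (rho_DE / rho_DM) evaluated at turnaround, as quoted in the text.
for V0, q in [(0.001, 4.09e-3), (0.01, 4.09e-2)]:
    eta = 0.5 - 0.25 * q - 0.125 * q**2
    print(f"V0 = {V0}:  q = {q:.2e}  ->  eta = {eta:.4f}")
# prints eta ~ 0.4990 and ~ 0.4896, i.e. both close to the top-hat value 0.5
```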
In Fig. (5), we show the evolution of \(a,\omega_{\phi},\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\), and \(\omega_{t}\) for \(\lambda=1\) and \(\lambda=0.1\). In this case, the values of \(\rho_{m_{0}}\) and \(V_{0}\) are fixed at \(5\) and \(0.001\), respectively. Figs. (5(a), 5(b), 5(c), 5(d)) show the same type of behavior of \(a,\omega_{\phi},\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\), and \(\omega_{t}\) as we have seen in the previous case. In this case also, the ratio \(\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\) stays close to one and \(\omega_{\phi}\sim-1\). Consequently, for different values of \(\lambda\), we can still say that our model approximately resembles the homogeneous dark-energy model. Here, for \(\lambda=0.1\), at \(t=0\), we get \(\rho_{\phi_{0}}=9.999\times 10^{-4}\). Therefore, at the initial stage, \(\rho_{\phi_{0}}/\rho_{m_{0}}=1.999\times 10^{-4}\), and at the turnaround time this ratio becomes \(4.2\times 10^{-3}\). Therefore, similar to the previous case, here also the value of \(\eta\sim 0.5\), and the time interval taken by the over-dense region to reach the virialized radius is \(2.69\), which is \(1.12\) times greater than the virialization time in the top-hat collapse model.
A similar behavior can be seen in Fig. (6), where we vary \(\rho_{m_{0}}\). Therefore, observing the behavior of \(a,\omega_{\phi},\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\), and \(\omega_{t}\) for all three cases, it can be concluded that our model of a two-fluid system consisting of pressureless matter and a quintessence-like scalar field approximately resembles the homogeneous dark energy model.
In Figs. (7), (8), and (9), we show the dynamics of \(a,\omega_{\phi},\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\), and \(\omega_{t}\) for various values of \(V_{0},\lambda\), and \(\rho_{m_{0}}\) in the presence of a phantom-like scalar field. From Figs. (7(a))-(7(d)), Figs. (8(a))-(8(d)), and Figs. (9(a))-(9(d)), it can be seen that, similar to the previous case, here also \(\frac{\rho_{\phi}}{\bar{\rho}_{\phi}}\sim 1\) and \(\omega_{\phi}\sim-1\). Therefore, the phantom-like scalar field in our model also behaves like homogeneous dark energy.
So far, our discussion has dealt with top-hat-like collapse in the presence of quintessence-like or phantom-like
Figure 9: Variation of the different dynamical variables with \(\rho_{m_{0}}\) for the scalar potential \(V(\phi)=V_{0}e^{-\lambda\phi}\) for the phantom field.
scalar fields. The ranges of the potential parameter and the initial matter density which give rise to this kind of collapse are shown in Figs. (2,3). What happens if the potential parameter value and the initial matter density do not lie in the shaded region of Figs. (2,3)? In such cases, our model predicts that, instead of collapsing gravitationally, the spherical over-dense patch starts to expand. These patches expand forever, producing void-like structures inside which the matter energy density is one order of magnitude lower than the background matter density for some period; later, the matter density decreases further. For phantom scalar fields, it is seen that the dark energy density grows inside the expanding spherical patch when compared with the background dark energy density. For quintessence fields, the dark energy density in the spherical patch becomes less than the corresponding energy density outside the patch. If one assumes that \(\rho_{m_{0}}\) has a wide distribution in space for various non-linear perturbations, then, for a fixed \(V_{0}\) in the shaded regions of Figs. (2,3), one can have collapse or expansion depending on the value of \(\rho_{m_{0}}\). Our work predicts that some regions of the universe will collapse gravitationally whereas other regions will expand eternally to produce voids. For pressureless matter in the absence of scalar fields, one only obtains collapsing solutions.
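To make the collapse-versus-void dichotomy concrete, here is a small, hedged helper (our own construction, not part of the paper) that classifies the fate of a patch from an integrated history \(a(t)\), such as the one produced by the sketch in Sec. V; the toy input at the bottom is purely illustrative.

```python
import numpy as np

def classify_patch(t, a, adot):
    """Collapse if the expansion rate ever turns negative; otherwise void-like."""
    if np.any(adot < 0):
        i = np.argmax(adot < 0)                 # first index after turnaround
        return f"collapse (turnaround at t ~ {t[i]:.2f}, a_max ~ {a[i]:.2f})"
    return "void-like: the patch expands throughout the integrated interval"

# toy illustration with a hypothetical ever-expanding history
t = np.linspace(0.0, 10.0, 200)
a = 1.0 + 0.8 * t
print(classify_patch(t, a, np.gradient(a, t)))
```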
## VI Conclusion
In this paper we have studied the gravitational dynamics of a two-component system consisting of pressureless matter and a scalar field, where the scalar field does not have any direct coupling with the matter component. The motivation to investigate this type of two-fluid dynamics is to understand how, at a certain cosmological epoch, dark energy affects the gravitational collapse of pressureless dark matter, where the scalar field and the pressureless matter play the role of dark energy and dark matter, respectively. We have chosen the scalar field potential in such a way that it represents the potential of quintessence-like or phantom-like fields. In order to model the dynamics of the primordial over-dense regions of dark matter in the presence of dark energy, we have chosen a closed FLRW metric as the internal space-time of the over-dense region, which is seeded by the two-component system. On the other hand, the background is modeled by a flat FLRW metric which is also seeded by the two components: pressureless matter and a scalar field.
Previous authors have approached this problem phenomenologically, where the guiding equation for the gravitational collapse of the dark matter component in the presence of the scalar field was obtained from the Friedmann equations, but the complete relativistic framework was not used. The primary reason for not using the full general relativistic machinery is related to the fact that the dark energy component does not collapse with the dark matter part. In such a case one cannot use an isolated, closed FLRW spacetime which collapses towards a virial state. In the present work we have tried to implement a full general relativistic scheme to monitor the spherical collapse of the dark matter component in the presence of the scalar field, up to virialization of the dark matter sector. To incorporate the relativistic treatment we have abandoned the idea of an isolated, closed FLRW spacetime collapse. Although we have used the FLRW spacetime with positive spatial curvature as the collapsing spacetime, we have matched this spacetime with an external, radiating Vaidya spacetime at a suitable radial distance. In doing so the system has become an open system which can radiate. In order to describe a matter flux through the boundary of the over-dense region, we consider an external generalized Vaidya space-time in the immediate neighborhood of the over-dense region. We consider the potential of the scalar field \(V(\phi)=V_{0}e^{-\lambda\phi}\), which is the typical potential of quintessence-like and phantom-like scalar fields. We solve the Friedmann equations with this type of potential to investigate the dynamics of the over-dense region. In order to compare our results with those of the top-hat collapse model, we restrict ourselves to investigating those scenarios where the over-dense region collapses after an initial expansion phase. The collapsing spacetime is homogeneous and isotropic up to the matching radius, after which the spacetime remains isotropic but becomes inhomogeneous.
In our scheme, the collapsing dark matter affects the dark energy sector locally and induces radiation in the Vaidya region. Our work predicts that there will be over-dense regions in the universe which will not collapse; they will expand forever, producing voids. Which regions collapse and which do not depends upon the potential parameter \(V_{0}\) and the initial value of the local dark matter density \(\rho_{m_{0}}\). The nature of the outgoing flux in the Vaidya region will depend on whether there is a collapse or an expansion. Gravitational collapse in general always produces an unclustered-dark-energy kind of scenario, where the dark energy density inside the collapsing core remains practically the same as the background dark energy density. On the other hand, expanding patches can exhibit clustering of dark energy since, in these cases, the expanding patches have a dark energy density different from that of the background spacetime. The Vaidya radiation from collapsing regions is a unique prediction of our model, and in the near future we would like to work on the observational side of this problem.
In this paper, we qualitatively discuss our model of spherical gravitational collapse of a two-component system and do not attempt any comparison with observational data. One straightforward comparison can be done by comparing the theoretical value of the effective equation of state \(\omega_{t}\) at the virialized state with the observed equation of state of over-dense regions at the galactic cluster scale. This comparison would give constraints on the values of \(V_{0},\lambda\), and \(\rho_{m_{0}}\), which would be important for understanding the effects of homogeneous dark energy on structure formation at the galactic cluster scale. We will discuss this phenomenological aspect in the future.
## VII Acknowledgement
DD would like to acknowledge the support of the Atlantic Association for Research in the Mathematical Sciences (AARMS) for funding the work.
|
2303.15368 | 2S-UDF: A Novel Two-stage UDF Learning Method for Robust Non-watertight
Model Reconstruction from Multi-view Images | Recently, building on the foundation of neural radiance field, various
techniques have emerged to learn unsigned distance fields (UDF) to reconstruct
3D non-watertight models from multi-view images. Yet, a central challenge in
UDF-based volume rendering is formulating a proper way to convert unsigned
distance values into volume density, ensuring that the resulting weight
function remains unbiased and sensitive to occlusions. Falling short on these
requirements often results in incorrect topology or large reconstruction errors
in resulting models. This paper addresses this challenge by presenting a novel
two-stage algorithm, 2S-UDF, for learning a high-quality UDF from multi-view
images. Initially, the method applies an easily trainable density function
that, while slightly biased and transparent, aids in coarse reconstruction. The
subsequent stage then refines the geometry and appearance of the object to
achieve a high-quality reconstruction by directly adjusting the weight function
used in volume rendering to ensure that it is unbiased and occlusion-aware.
Decoupling density and weight in two stages makes our training stable and
robust, distinguishing our technique from existing UDF learning approaches.
Evaluations on the DeepFashion3D, DTU, and BlendedMVS datasets validate the
robustness and effectiveness of our proposed approach. In both quantitative
metrics and visual quality, the results indicate our superior performance over
other UDF learning techniques in reconstructing 3D non-watertight models from
multi-view images. Our code is available at
https://bitbucket.org/jkdeng/2sudf/. | Junkai Deng, Fei Hou, Xuhui Chen, Wencheng Wang, Ying He | 2023-03-27T16:35:28Z | http://arxiv.org/abs/2303.15368v3 | NeUDF: Learning Unsigned Distance Fields from Multi-view Images for Reconstructing Non-watertight Models
###### Abstract
Volume rendering-based 3D reconstruction from multi-view images has gained popularity in recent years, largely due to the success of neural radiance fields (NeRF). A number of methods have been developed that build upon NeRF and use neural volume rendering to learn signed distance fields (SDFs) for reconstructing 3D models. However, SDF-based methods cannot represent non-watertight models and, therefore, cannot capture open boundaries. This paper proposes a new algorithm for learning an accurate unsigned distance field (UDF) from multi-view images, which is specifically designed for reconstructing non-watertight, textureless models. The proposed method, called NeUDF, addresses the limitations of existing UDF-based methods by introducing a simple and approximately unbiased and occlusion-aware density function. In addition, a smooth and differentiable UDF representation is presented to make the learning process easier and more efficient. Experiments on both texture-rich and textureless models demonstrate the robustness and effectiveness of the proposed approach, making it a promising solution for reconstructing challenging 3D models from multi-view images.
## 1 Introduction
In recent years, volume rendering-based 3D model and scene reconstruction from multi-view images has gained popularity, largely due to the success of neural radiance fields (NeRF) [27]. A number of methods have been developed that build upon NeRF and use neural volume rendering to learn signed distance fields (SDFs) for reconstructing 3D models [37, 33, 35, 7]. While these SDF-based methods are effective in reconstructing watertight models, they cannot represent non-watertight models, as SDFs distinguish between the interior and exterior of a model, and therefore cannot capture open boundaries.
Recent research has attempted to address the limitation of signed distance fields in reconstructing non-watertight models by using unsigned distance fields (UDFs). For instance, NeuralUDF [22] extends NeuS [33] to learn UDFs for reconstructing open models. However, UDF-based methods have their own limitations. For example, as shown in Figure 1, NeuralUDF struggles to reconstruct _textureless_ models or models with few distinguishable features, as it relies on texture information to improve the accuracy of reconstruction. Also, UDF is not differentiable at the zero level-set, making it difficult to learn. Reconstructing open, textureless models from multi-view images using volume rendering remains a challenging problem.
We propose a new algorithm for learning an accurate UDF from multi-view images, which we call NeUDF. Our method is specifically designed for reconstructing non-watertight, textureless models, which are particularly challenging to reconstruct using existing UDF-based methods. The key to learning an accurate UDF is to design
Figure 1: We show a reconstruction result using our method and NeuS [33] on a mostly blue pair of short trousers with little texture. Our method successfully reconstructs the structure of the trousers, while NeuS fails to learn the correct structure of the object and instead represents it as a closed model.
an unbiased and occlusion-aware density function of the UDF. Since the inside and outside of the model are not distinguished, the S-shape density function used in SDF-based methods [33] is not appropriate for UDFs. Long et al. [22] proposed a piecewise density function adapted from NeuS [33], but this function is rather complicated and difficult to learn, limiting its applicability to texture-rich models.
To overcome these limitations, we propose a new UDF density function for learning UDFs, which can tolerate a small amount of bias to ease learning. Our density function is dense enough to be almost opaque, making it occlusion-aware and effective for reconstructing non-watertight models. Additionally, we present a smoothed UDF representation that makes the UDF differentiable at the zero level set, which is crucial for the learning process. We evaluate our proposed approach on both texture-rich and texture-less models from the DeepFashion3D [41] and DTU [17] datasets. Our experiments demonstrate that our method is robust and can effectively learn UDFs for both types of inputs.
Our work makes the following contributions:
1. We propose a method for reconstructing non-watertight models based on NeRF that is capable of reconstructing both texture-rich and challenging textureless models.
2. We introduce a theoretically sound density function for UDFs and in practice adopt a variant that allows a small bias and is approximately occlusion-aware.
3. We present a smooth and differentiable UDF representation that makes the learning process easier and more efficient.
4. We present a simple yet effective method for extracting the target surface from the learned UDFs, which can reduce the bias.
## 2 Related Work
### 3D Reconstruction from Multi-View Images
Surface reconstruction from multi-view images has been a subject of study for several decades, and can generally be classified into two categories: voxel-based and point-based methods. Voxel-based methods [2, 3, 18, 19, 32] divide the 3D space into voxels and determine which ones belong to the object. These methods can be computationally expensive and may not be suitable for reconstructing complex surfaces. Point-based methods [12, 30, 36] use structure-from-motion [15] to calibrate the images and generate a dense point cloud using multi-view stereo [11]. Finally, surface reconstruction methods (e.g., [1, 20, 16]) are used to generate a mesh. Since multi-view stereo requires dense correspondences to generate a dense point cloud, which are often difficult to compute, its results often contain various types of artifacts, such as noise, holes and incomplete structures.
Neural network-based 3D surface reconstruction has received attention in recent years with the emergence of neural rendering [27]. Several methods have been proposed for volume rendering and surface reconstruction using neural networks. VolSDF [37] uses the cumulative distribution function of Laplacian distribution to evaluate the density function from SDF for volume rendering and surface reconstruction. NeuS [33] adopts an unbiased density function for SDFs for more accurate reconstruction. SparseNeuS [23] extends NeuS to use fewer images for reconstruction. HF-NeuS [35] improves NeuS by proposing a simplified and unbiased density function and using hierarchical MLPs for detail reconstruction. Geo-NeuS [9] incorporates structure-from-motion to add more constraints. NeuralWarp [7] improves the accuracy by optimizing consistency between warped views of different images. All of these methods learn SDFs, which can only reconstruct watertight models.
More recently, Long _et al_. proposed NeuralUDF [22] for learning UDF for reconstructing open models. It adapts the density function of NeuS to UDFs by introducing an indicator function. However, this method can only learn texture-rich models due to the complex density function used in training. In contrast, our proposed UDF learning method is capable of reconstructing both texture-rich and textureless models without the need for masks.
### 3D Reconstruction from Point Clouds
There has been recent interest in surface representation using signed distance fields (SDFs) and occupancy fields. Several methods have been proposed for learning SDFs [28, 4, 31, 24, 34], while occupancy fields have been used in methods such as [26, 5]. However, both SDFs and occupancy fields can only represent watertight models.
To represent non-watertight models, some methods have been proposed to learn UDFs from 3D point clouds [6, 39, 40]. Our proposed method also uses UDFs to represent non-watertight models, but we learn the UDF directly from multi-view images, which is a challenging problem.
## 3 Method
Our goal is to extract a set of surface points from a set of input images along with their corresponding known camera poses and intrinsic parameters. To achieve this, we first learn an implicit unsigned distance field, whose zero level set represents the 3D surface. Unlike signed distance fields, UDFs take only non-negative values, making them suitable for representing non-watertight models. However, learning a UDF from multi-view images is a challenging task
that requires a proper density function balancing occlusion-awareness, ease of learning, and unbiasedness. Another technical issue is that UDFs are not differentiable at the zero level-set, making it difficult to train. Our proposed method is designed to address the above challenges of learning UDFs from multi-view images.
In this section, we first review the key concepts of volume rendering and neural radiance fields. Next, we propose an unbiased density function and use its variant to balance the density properties for UDF learning. We then introduce a differentiable form of UDF that can be used for stable learning. Finally, we detail our loss function design.
### Review of Volume Rendering
Volume rendering is a crucial component of neural radiance fields [27]. During volume rendering, a ray \(\mathbf{r}\) is cast from the camera position \(\mathbf{o}\) to each pixel of the virtual canvas. The direction of the ray is denoted by \(\mathbf{d}\), and any point \(\mathbf{r}(t)\) along the ray can be expressed as \(\mathbf{o}+t\mathbf{d}\), where \(t\) is the arc-length parameter. In the following, we refer to the point \(\mathbf{r}(t)\) using the parameter \(t\) if there is no confusion.
The color of the corresponding pixel is determined by the color of the ray, which is the weighted integral of all colors \(c(t)\) along the ray \(\mathbf{r}\), \(c(\mathbf{r})=\int_{0}^{\infty}w(t)c(t)\,\mathrm{d}t\).
The integral is often computed numerically using quadrature, i.e., as a weighted sum of color values at a set of sampled points \(t_{1},t_{2},\cdots,t_{n}\) on the ray, \(c(\mathbf{r})=\sum_{i=1}^{n}w(t_{i})c(t_{i})\).
The weight \(w(t)\) of each color value is the product of the volume density \(\sigma\) and the transparency \(T\) [25], \(w(t)=T(t)\sigma(t)\), which measures the accumulated transmittance and occlusion of the ray up to that point. The transparency \(T\), which is defined as \(T(t)=\exp\left(-\int_{0}^{t}\sigma(u)\,\mathrm{d}u\right),\) is the probability that the ray travels from \(0\) to \(t\) without hitting any other particle. \(T(t)\) is a monotonically decreasing function with a starting value of \(T(0)=1\).
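For concreteness, the following is a minimal sketch (ours) of the standard discretized form of this quadrature, in which the transmittance is accumulated from per-interval opacities; the array shapes and variable names are assumptions, not the paper's implementation.

```python
import numpy as np

def composite(sigma, colors, t):
    """sigma: (N,) densities, colors: (N, 3) per-sample colors, t: (N,) sample depths."""
    delta = np.diff(t, append=t[-1] + 1e10)        # interval lengths (last one open)
    alpha = 1.0 - np.exp(-sigma * delta)           # per-interval opacity
    # transmittance T_i: probability of reaching sample i without being absorbed
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    w = T * alpha                                  # rendering weights w(t_i)
    return (w[:, None] * colors).sum(axis=0), w

# toy ray whose density peaks around t = 0.5, with a constant red color
t = np.linspace(0.0, 1.0, 64)
sigma = 50.0 * np.exp(-((t - 0.5) / 0.05) ** 2)
rgb, w = composite(sigma, np.tile([1.0, 0.0, 0.0], (64, 1)), t)
print("pixel color:", rgb, " weight peak near t =", t[np.argmax(w)])
```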
In NeRF [27], the volume density \(\sigma\) is directly computed by a multi-layer perceptron (MLP), which takes the 3D coordinates of a point as input and outputs the corresponding density value. This allows for a flexible representation of the volume density field and enables the synthesis of novel views from arbitrary viewpoints. However, since the volume density field does not provide explicit information about the surface geometry, extracting a high-quality surface from it is difficult. Moreover, the density field may not correspond to a valid surface at all due to the ill-posed nature of the reconstruction problem [38].
To overcome this limitation, NeuS [33] utilizes signed distance functions to represent 3D geometry. Specifically, it uses a scalar function \(f\) that maps each point in 3D space to the signed distance from that point to the surface, where the zero level set of the function represents the target surface. It then maps the signed distance function \(f\) to an S-density function \(\phi_{s}(f)\) using a logistic density distribution \(\phi_{s}(x)=se^{-sx}/(1+e^{-sx})^{2}\), which assigns a normalized density value to each point in space based on its signed distance from the surface. The normalized S-density function is then used to define the weight function \(w(t)\) for volume rendering.
Wang et al. [33] proved that the weight function \(w(t)\) in NeuS is both unbiased and occlusion-aware. Being unbiased means that the weight function attains a locally maximal value at a surface intersection point, while an occlusion-aware weight function implies that when two points have the same SDF value, the point closer to the viewpoint has a larger contribution to the output color than the other point. This accounts for the occlusion effect, where points behind the surface are not visible and should not contribute to the output color.
### Density Function
A core part of NeUDF is an unbiased and occlusion-aware weight function. Inspired by HF-NeuS [35], we propose a bell-shaped weight function that maps unsigned distance to density. Our weight function is _unbiased_ in that the weights are maximum _on_ the surface.
**Theorem 1**.: _A ray \(\mathbf{r}=\mathbf{o}+t\mathbf{d}\) hits a planar object \(M\). Let \(\theta\) be the angle between the UDF gradient and the ray direction \(\mathbf{d}\). Define the bell-shaped density \(\sigma\) as_
\[\sigma(t)=\frac{se^{-sf(t)}}{1+e^{-sf(t)}}|\cos(\theta)|, \tag{1}\]
_where \(s>0\) is a learnable parameter. Then the weight function \(w(t)=T(t)\sigma(t)\) is unbiased._
Proof.: Let \(\mathbf{r}(t_{0})\) be the intersection point.
We first consider the transparency \(T(t)\) with \(t<t_{0}\), i.e., for any point along the ray up to \(\mathbf{r}(t_{0})\). Since the point \(t\) is
Figure 2: Weight function. (a) A biased weight function can have its maximum at a point away from the surface, reducing the accuracy of the reconstructed surfaces. (b) In contrast, an unbiased weight function attains its local maximum at the point where the ray intersects the surface. The use of an unbiased weight function is essential for accurate surface reconstruction in volume rendering.
in front of \(M\), we have \(90^{\circ}<\theta\leq 180^{\circ}\) and \(\cos(\theta)<0\). The transparency \(T(t)\), \(t<t_{0}\), is given by
\[T(t) =\exp\left(-\int_{0}^{t}\frac{se^{-sf(u)}}{1+e^{-sf(u)}}(-\cos( \theta))\mathrm{d}u\right)\] \[=\exp\left(\int_{0}^{t}\frac{se^{-sf(u)}}{1+e^{-sf(u)}}\mathrm{d }f(u)\right)\] \[=\exp\left(\ln(1+e^{-sf(0)})-\ln(1+e^{-sf(t)})\right)\] \[=\frac{1+e^{-sf(0)}}{1+e^{-sf(t)}}.\]
Next we compute the weight function \(w(t)\), which is proportional to
\[w(t)=T(t)\sigma(t)\propto\frac{e^{-sf(t)}}{(1+e^{-sf(t)})^{2}}.\]
It is straightforward to verify that the sign of \(\frac{\mathrm{d}w}{\mathrm{d}f}\) is the same as \(e^{-sf(t)}-1\). Using the chain rule, we can show that the derivative \(\mathrm{d}w/\mathrm{d}t>0\) is positive for \(t<t_{0}\), implying that the weight \(w\) increases as the ray approaches \(M\) from the front.
After the ray passes through \(M\), \(\cos\theta\) becomes positive and both \(T(t)\) and \(\sigma(t)\) are monotonically decreasing for \(t>t_{0}\). Therefore, the product \(T(t)\sigma(t)\) decreases as the ray leaves \(M\).
Since the weight is continuous, \(w\) attains its maximum on the plane \(M\), implying that it is unbiased.
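As an independent sanity check (not part of the paper), one can verify both the closed-form transparency derived in the proof and the unbiasedness claim numerically for a ray hitting a plane head-on; the surface location, the value of \(s\), and the sampling resolution below are arbitrary choices of ours.

```python
import numpy as np

s, t0 = 20.0, 0.6                                  # sharpness and surface location (ours)
t = np.linspace(0.0, 1.0, 20001)
f = np.abs(t - t0)                                 # UDF of a plane hit head-on
cos_theta = np.where(t < t0, -1.0, 1.0)
sigma = s * np.exp(-s * f) / (1.0 + np.exp(-s * f)) * np.abs(cos_theta)

# transparency: numerical integration vs. the closed form valid for t < t0
T_num = np.exp(-np.cumsum(sigma) * (t[1] - t[0]))
T_closed = (1.0 + np.exp(-s * f[0])) / (1.0 + np.exp(-s * f))
i = np.searchsorted(t, 0.55)
print("T at t = 0.55:", T_num[i], "(numeric) vs", T_closed[i], "(closed form)")

w = T_num * sigma                                  # the weight should peak on the surface
print("weight maximum at t =", t[np.argmax(w)], " (surface at t0 =", t0, ")")
```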
The density function \(\sigma\) in Equation (1) is theoretically sound, satisfying the requirements of unbiased sampling. However, in practice, it may be too "transparent" to use (due to its bell-shaped geometry), leading to poor sampling efficiency. Moreover, the dependence of the bell-shaped function on the UDF gradients, as indicated by the \(\cos(\theta)\) term, can introduce instability and sensitivity to noise and oscillations, thereby posing challenges in learning the function.
To address these issues, we replace the \(\cos(\theta)\) term with a constant \(c>1\), resulting in a modified density function
\[\hat{\sigma}(t)=\frac{cse^{-sf(t)}}{1+e^{-sf(t)}},\;s>0,\;c>1.\]
This modification increases both numerical stability and opacity of the original density function \(\sigma\).
However, unlike the original density \(\sigma\), the modified density \(\hat{\sigma}\) introduces bias in the weight function, since the maximum value of weight occurs at a point \(t^{*}\) in front of \(M\), which has a distance value
\[f(t^{*})=\frac{1}{s}\ln\frac{-c}{\cos(\theta)}. \tag{2}\]
While the modified density function \(\hat{\sigma}\) is not theoretically unbiased, setting \(c\) as a small constant can greatly reduce the bias in practice. Additionally, we expect \(\hat{\sigma}\) to be approximately occlusion-aware. To further understand the numerical properties of \(\hat{\sigma}\), we consider the extreme case where the incident light ray is perpendicular to the planar surface \(M\). In this case, the unsigned distance function is \(f(t)=1-t\) for points in front of \(M\). As \(\hat{\sigma}\) is symmetric for the two sides of \(M\), the surface transparency is the square of the transparency of the front side. Our computation shows
\[\left(e^{-\int_{0}^{1}\hat{\sigma}(u)\mathrm{d}u}\right)^{2}\] \[=\left[\exp\left(-\int_{0}^{1}\frac{cse^{-s(1-u)}}{1+e^{-s(1-u)}} \mathrm{d}u\right)\right]^{2}=\left(\frac{1+e^{-s}}{2}\right)^{2c},\]
and to reduce transparency, we should choose a relatively large \(c\).
In our implementation, we set the constant \(c=5\) based on the typical value of the learned parameter \(s\), which ranges between \(1000\) and \(2000\) given that the models are learned inside a unit sphere and the distances are therefore effectively unitless. This enables us to estimate an upper bound on the bias. For points in front of the surface, the incident angle \(\theta\) between the ray and the surface normal is obtuse, so we restrict \(\theta\) to the range \([91^{\circ},180^{\circ}]\). By setting \(c=5\), an offset between 0.00161 and 0.00566 relative to the true zero level set is obtained, indicating that the maximum relative bias is below 0.5%. This error level is acceptable for most application scenarios. Moreover, the surface transparency in the extreme case mentioned above is less than \(0.001\). When a ray has a larger incident angle, its transparency becomes even smaller, resulting in an almost opaque density \(\hat{\sigma}\). As a result, the weight function \(w\) is approximately occlusion-aware. Thus, setting the constant \(c=5\) offers a good balance between occlusion-awareness and unbiasedness.
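The two numbers quoted above can be reproduced directly from Equation (2) and the transparency formula; the snippet below (ours) evaluates them for \(c=5\) and \(s=1000\).

```python
import numpy as np

c, s = 5.0, 1000.0
theta = np.deg2rad([91.0, 180.0])                  # incident-angle range used in the text
offset = np.log(c / np.abs(np.cos(theta))) / s     # f(t*) = (1/s) * ln(-c / cos(theta))
print("bias range:", offset.min(), "to", offset.max())    # ~0.00161 ... ~0.00566

transparency = ((1.0 + np.exp(-s)) / 2.0) ** (2.0 * c)
print("head-on surface transparency:", transparency)      # ~9.8e-4, i.e. < 0.001
```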
It is worth mentioning that our modified density function \(\hat{\sigma}\) is much simpler than the density function used in NeuralUDF [22]. Although both densities introduce biases, we can control the bias of our density function within a very small range. In contrast, it is unclear what the range of bias is for the density function used in NeuralUDF. Thanks to the simplicity, our density function is easier to learn, thereby our method can work for a wider range of models. This includes textureless models, which pose challenges for NeuralUDF.
### Training
**Differentiable UDFs.** NeuS uses an MLP network to learn the signed distance function \(f\), which is a differentiable function. In contrast, the UDF is not differentiable at the zero level set, making it difficult for the network to learn the values and gradients of the UDF close to the zero level set.
Another crucial requirement is to ensure non-negative values for the computed distances, which seems a trivial
task as one may simply apply absolute value or normalization such as ReLU to the MLP output. However, applying the absolute value to the distance is not viable due to its non-differentiability at zero. Similarly, normalizing the output value using ReLU is not feasible as it is also non-differentiable at zero, and its gradient vanishes for negative inputs. This can be particularly problematic for learning UDFs, since when the MLP returns a negative distance value, the ReLU gradient vanishes, hindering the update of the distance to a positive value in the subsequent iterations.
We add a softplus function after the output layer of the MLP. The softplus function [8] is a smooth and differentiable approximation of the ReLU function that is defined as
\[\text{softplus}(x)=\frac{1}{\beta}\ln(1+e^{\beta x}).\]
Softplus has the same shape as ReLU, but it is continuous and differentiable at every point, and its gradients do not vanish anywhere. Using the softplus function allows us to ensure that the output of the MLP is non-negative and differentiable, making it suitable for learning the UDF. We set \(\beta=100\) in our experiments.
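A quick gradient check illustrates the point; the input values below are arbitrary choices of ours, but the \(\beta=100\) setting matches the text. With softplus the distance stays positive and its gradient never vanishes, whereas ReLU's gradient dies for non-positive pre-activations.

```python
import torch
import torch.nn.functional as F

beta = 100.0
x = torch.tensor([-0.05, -0.001, 0.0, 0.01], requires_grad=True)

udf_softplus = F.softplus(x, beta=beta)            # (1/beta) * log(1 + exp(beta * x))
udf_softplus.sum().backward()
print("softplus outputs:", udf_softplus.detach())
print("softplus grads  :", x.grad)                 # small but nonzero for negative inputs

x.grad = None
F.relu(x).sum().backward()
print("relu grads      :", x.grad)                 # exactly zero for non-positive inputs
```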
**Loss functions.** Following NeuralUDF [22], we adopt an iso-surface regularizer to discourage the UDF values of non-surface points from being zero, therefore encouraging smooth and clean UDFs. The regularization loss is defined as [22]
\[\mathcal{L}_{reg}=\frac{1}{MN}\sum_{i,k}\exp{(-\tau\cdot f(t_{i,k}))},\]
where \(\tau=5.0\) is a constant scalar that scales the learned UDF values, \(M\) is the total number of sampled rays per training iteration, and \(N\) is the number of sampled points on a single ray.
The value of \(s\), which is learnable in our method, significantly affects the quality of the reconstruction. When \(s\) is small, it introduces a larger bias and leads to a more blurred output. We observe that \(s\) typically converges to a relatively large value between 1000 and 2000, leading to visually pleasing results. However, in rare cases when \(s\) stops increasing during training, we apply a penalty to force it to increase. The penalty is defined as follows
\[\mathcal{L}_{s}=\frac{1}{M}\sum_{i,k}\frac{1}{s_{i,k}},\]
where \(M\) is the number of rays during a training epoch. This term \(\mathcal{L}_{s}\) aggregates the reciprocals of all \(s\) values used for the point \(t_{i,k}\) on ray \(r_{i}\). Intuitively speaking, it encourages a larger \(s\) during the early stage of training. In our implementation, we make this term optional since \(s\) generally increases with a decreasing rate during training, and the penalty term is only necessary in rare cases when \(s\) stops at a relatively low value.
As in other SDF- and UDF-based methods [33, 35, 22], we adopt color loss and Eikonal loss in our approach. Specifically, the color loss \(\mathcal{L}_{color}\) is the \(L_{1}\) loss between the predicted color and the ground truth color of a single pixel as used in [33]. The Eikonal loss \(\mathcal{L}_{eik}\) is used to regularize the learned distance field to have a unit gradient [13]. Putting it all together, we define the combined loss function as a weighted sum
\[\mathcal{L}=\mathcal{L}_{color}+\lambda_{1}\mathcal{L}_{eik}+\lambda_{2} \mathcal{L}_{reg}+\lambda_{3}\mathcal{L}_{s},\]
where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are hyperparameters that control the weight of each loss term. In our experiments, we empirically set \(\lambda_{1}=0.1\), \(\lambda_{2}=0.01\), and \(\lambda_{3}=0.001\), although \(\lambda_{2}\) is occasionally set to \(0.02\), and \(\lambda_{3}\) is optional.
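The following sketch shows one way the combined objective could be assembled; the tensor shapes, the dummy batch, and the helper name are our assumptions, not the authors' released code.

```python
import torch

lam1, lam2, lam3, tau = 0.1, 0.01, 0.001, 5.0      # weights and scale quoted above

def total_loss(pred_rgb, gt_rgb, udf_vals, udf_grads, s_vals):
    l_color = (pred_rgb - gt_rgb).abs().mean()                  # L1 color loss
    l_eik   = ((udf_grads.norm(dim=-1) - 1.0) ** 2).mean()      # Eikonal regularization
    l_reg   = torch.exp(-tau * udf_vals).mean()                 # iso-surface regularizer
    l_s     = (1.0 / s_vals).mean()                             # optional penalty on s
    return l_color + lam1 * l_eik + lam2 * l_reg + lam3 * l_s

# dummy batch: M rays with N samples each
M, N = 512, 128
loss = total_loss(torch.rand(M, 3), torch.rand(M, 3),
                  torch.rand(M, N), torch.randn(M, N, 3),
                  1000.0 + torch.rand(M, N))
print(float(loss))
```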
**Adaptive hierarchical sampling.** We propose an adaptive hierarchical sampling (AHS) strategy for efficiently finding sample points along each ray for color computation. Existing approaches, such as NeuS [33] and NeuralUDF [22], adopt a fixed set of pre-defined sampling rates for all models, regardless of their geometry differences. They first uniformly sample 64 points on the ray and then iteratively conduct importance sampling on top of the coarse probability estimation. The sampling rate is doubled after each iteration, resulting in a maximum 512 samples on each ray.
We argue that an effective sampling strategy should be self-adaptive to geometry, which is connected to the learnable parameter \(s\). Essentially, a larger value of \(s\) leads to a narrower bell-shaped density function, resulting in higher weights given to points closer to the surface and more concentrated samples on the surface. Our density function depends on the learnable parameter \(s\), which usually ranges between 1000 and 2000. We use the learned \(s\) to guide more precise sample selection. Specifically, in the \(i\)-th iteration of sampling, we set \(s\) to \(\max{\left(32\times 2^{i},\frac{s}{2^{k-1}}\right)}\), where \(k\) is the maximum number of sampling iterations, which we set as \(k=4\) in our implementation. During the early stage of training, when \(s\) is still relatively small, we use the sampling rates as in NeuS and NeuralUDF. As the training progresses and the value of \(s\) consistently increases, our adaptive sampling strategy is activated. Our ablation study confirms that adaptive hierarchical sampling increases the accuracy of our approach by up to 25%. See Section 4.3.
### Surface Extraction
After learning the UDF \(f\), extracting the object surface is essential. Instead of extracting the zero level set of the function \(f\), we locate the sample points with the maximum weights to construct the surface. The reason for doing so is that extracting the zero level set from UDFs is notoriously unstable [14], often leading to artifacts such as
holes, flipped triangles, and self-intersections in the output meshes.
The main technical challenge in locating the samples with maximum weights is that they depend on the view direction, due to the \(\cos\theta\) term in Equation (2). However, since our method penalizes small values of \(s\) and yields a relatively large \(s\in[1000,2000]\), the \(\ln\frac{-c}{\cos\theta}\) term has little effect on the distance value \(f(t^{*})\).
In our implementation, we obtain a subset of rays for each input image by uniformly sampling rays every \(k=5\) pixels in both horizontal and vertical directions. For each ray \(\mathbf{r}\), we sample points along it to find the location \(t^{*}\) with the maximum weight. We classify the ray \(\mathbf{r}\) as a foreground ray if the sum of weights for the samples along \(\mathbf{r}\) inside the region of interest (a unit sphere centered at origin) is greater than 0.5. Otherwise, we consider \(\mathbf{r}\) as a background ray that misses the target object.
After collecting all foreground rays from all input images, we use the sample with the maximum weight for each ray as a surface point. Computational results show that our strategy is effective in producing satisfactory results (see Section 4.3).
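A compact sketch of this extraction step is given below; the array layout, the unit-sphere test, and the toy data are our assumptions for illustration, not the paper's code.

```python
import numpy as np

def extract_surface_points(points, weights, thresh=0.5):
    """points: (R, N, 3) samples per ray; weights: (R, N) rendering weights."""
    inside = np.linalg.norm(points, axis=-1) < 1.0      # region of interest: unit sphere
    fg = (weights * inside).sum(axis=1) > thresh        # foreground-ray test
    best = np.argmax(np.where(inside, weights, -np.inf), axis=1)
    return points[np.arange(len(points)), best][fg]     # one max-weight point per fg ray

# toy check: 4 rays x 8 samples, weight spikes on rays 0 and 2 only
rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 8, 3)) * 0.2
w = np.full((4, 8), 0.01)
w[0, 3], w[2, 5] = 0.9, 0.8
print(extract_surface_points(pts, w).shape)             # expected: (2, 3)
```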
## 4 Experiments
_Implementation Details._ The MLP for the UDF network consists of 8 hidden layers, each with 256 elements. We also use skip connections after every 4 hidden layers. The output of the UDF network is a single value representing the predicted UDF and a 256-dimensional feature vector used in the color network.
For the color network, we use another MLP with 4 hidden layers, each having 256 elements. We use the coarse-to-fine strategy proposed by Park [29] for position encoding, setting the maximum number of frequency bands to 16 for the UDF network and 6 for the color network. For background rendering, we use NeRF++ [38] for background prediction. During training, we use the Adam optimizer [21] with a global learning rate of 5e-4. We sample 512 rays per batch and train our model for 300,000 iterations.
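As a hedged sketch of this architecture (ours, not the authors' released code), the UDF network can be written as an 8-layer, width-256 MLP with a skip connection after four hidden layers and a softplus (\(\beta=100\)) on the distance output; the hidden activation, the absence of positional encoding, and initialization details below are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UDFNet(nn.Module):
    def __init__(self, d_in=3, width=256, feat_dim=256):
        super().__init__()
        self.block1 = nn.ModuleList([nn.Linear(d_in, width)] +
                                    [nn.Linear(width, width) for _ in range(3)])
        # skip connection: the input is re-injected after 4 hidden layers
        self.block2 = nn.ModuleList([nn.Linear(width + d_in, width)] +
                                    [nn.Linear(width, width) for _ in range(3)])
        self.head = nn.Linear(width, 1 + feat_dim)   # UDF value + feature vector

    def forward(self, x):
        h = x
        for fc in self.block1:
            h = F.relu(fc(h))                        # hidden activation: an assumption
        h = torch.cat([h, x], dim=-1)
        for fc in self.block2:
            h = F.relu(fc(h))
        out = self.head(h)
        udf = F.softplus(out[..., :1], beta=100.0)   # non-negative, differentiable distance
        return udf, out[..., 1:]

udf, feat = UDFNet()(torch.randn(5, 3))
print(udf.shape, feat.shape)                         # torch.Size([5, 1]) torch.Size([5, 256])
```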
_Data Sets._ To evaluate our method, we use two datasets: DeepFashion3D [41] and DTU [17]. The DeepFashion3D dataset consists of clothing models, which are open models with boundaries. As only 3D points are available, we render 72 images of resolution \(1024\times 1024\) with a white background from different viewpoints for each model. The DTU dataset consists of models captured in a studio, and all the models are watertight. We use this dataset to validate that our method also works well for watertight models. These datasets have been widely used in previous works such as [37, 33, 35].
_Baselines._ Several methods have been proposed for learning signed distance functions (SDFs) from multi-view images, which generate watertight models. To validate the effectiveness of our proposed method, we compare with state-of-the-art methods, namely VolSDF [37], NeuS [33], and HF-NeuS[35].
To our knowledge, NeuralUDF [22]1 is the only method designed for open model reconstruction, but it is limited to texture-rich models. Our method, on the other hand, can reconstruct both textureless and texture-rich models thanks to the new and simpler density function we propose in this paper.
Footnote 1: Since the source code of NeuralUDF is unavailable at the moment of submission, we cannot make quantitative comparison with it in the paper.
### Comparisons on Open Models
We evaluate our method and compare it with baselines using the clothes from DeepFashion3D, where the models have multiple open boundaries. VolSDF, NeuS, and HF-NeuS always close the boundaries since they learn SDFs. In contrast, our method learns UDFs, which can generate open models. Table 1 shows the point-to-point Chamfer distances of the results. Some of the Chamfer distances of the compared methods are large because the open holes are closed, resulting in significant errors.
It is worth noting that HF-NeuS requires mask supervision to produce reasonable results on the DeepFashion3D dataset. With mask supervision, it is capable of producing double-covered surfaces that capture the geometric features well. However, these surfaces are still closed from a topological perspective, and their Chamfer distances are higher than those produced by our method. Additionally, the ability of HF-NeuS to generate valid results relies heavily on the mask; if the mask lacks boundary information, HF-NeuS may close the boundaries. For example, as shown in Figure 3, the sleeve structure is not labeled in the mask, resulting in a closed sleeve. In contrast, our method is able to properly reconstruct the open sleeve structure without relying on a mask.
Figure 3: Qualitative comparison between our method and mask-supervised HF-NeuS [35]. Since the mask does not show the hole of the sleeve, HF-NeuS generates a watertight sleeve for the model. Our method, which is mask-free, correctly learns the structure of the sleeve.
As demonstrated in Figure 4, we test various types of garments, some of which have rich textures, while others are nearly a single color. Learning UDFs for textureless models is more challenging since various regions of a model are ambiguous without clear color differences. However, our NeUDF generates satisfactory results even without masks. On the other hand, NeuralUDF [22] is unable to properly reconstruct textureless models, possibly due to their complex density function which is difficult to converge.
### Comparisons on Watertight Models
We compare the performance of NeUDF and HF-NeuS in reconstructing watertight models using the DTU dataset, which is known for its rich geometric details. The comparison focuses on the visual quality of the models' output, specifically surface smoothness and the presence of artifacts. The DTU dataset poses challenges in 3D reconstruction due to the fact that many of the images only show a part of the object-of-interest. Despite this challenge, NeUDF is able to reconstruct watertight models with acceptable visual quality, especially in regions that are only visible in a few images. In comparison, HF-NeuS produces smoother surfaces than ours but introduces noticeable artifacts in regions with limited visibility. To illustrate the comparison, Figure 5 shows two examples of the output from both NeUDF and HF-NeuS on the DTU dataset.
### Ablation Studies
_Regularization loss_. The effectiveness of the iso-surface regularizer \(\mathcal{L}_{reg}\) is verified through an ablation study. Figure 6 demonstrates that when trained with the regularization loss, the network successfully removes small and unwanted
Figure 4: Qualitative comparisons with VolSDF [37], NeuS [33], and HF-NeuS [34] on the DeepFashion3D [41] dataset. Row 1 is texture-rich, while rows 2 and 3 don’t contain highly contrasting colors, and thus they are textureless. The surfaces produced by NeuS and VolSDF are closed watertight models, resulting in large reconstruction errors near the boundaries. HF-NeuS with the aid of masks can produce reasonable results. However, the results are still double-covered surfaces, which are closed from a topology perspective, and post-processing is required to extract the single-layered surface. In contrast, our NeUDF can effectively reconstruct non-watertight models, leading to more faithful reconstruction results without relying on masks. See the supplementary material for additional results.
Figure 5: Qualitative comparison on the DTU dataset. While HF-NeuS is able to produce smooth surfaces, it also introduces noticeable artifacts in regions with limited visibility. Our method uses UDFs which are generally more difficult to learn than SDFs, thereby our resulting surfaces are not as smooth as theirs. However, our results contain very small artifacts.
pieces, such as the parts covering the cuff of the sleeve in the reconstructed model. The iso-surface regularizer encourages clean and smooth UDFs, which reduces artifacts and produces a high-quality reconstructed model.
_Adaptive hierarchical sampling_. We train the model with and without AHS and compare their performance in terms of accuracy and visual quality. We found that using adaptive hierarchical upsampling helps to sample points more closely to the surface, especially for the parts close to boundaries, resulting in more accurate color computation in volume rendering. As shown in Figure 6, training with AHS produces better reconstruction on the sleeves.
_Non-negativity_. Ensuring that the computed distances in the proposed method are non-negative is important, and can be achieved by applying either ReLU or softplus to the MLP output. However, ReLU is not differentiable at 0 and has vanishing gradients for negative inputs, which can make the network difficult to train. An ablation study confirms that training with ReLU only results in early progress, but fails to learn a valid UDF later on. See Figure 7 for details.
_Extracting surface points_. Instead of extracting the zero level-set from the learned UDFs, we construct the object surface by finding the points with the maximum weights. We compare our surface extraction method with MeshUDF [14], the state-of-the-art method for extracting the zero level-set from UDFs. Since our method is tailored to the proposed density function \(\hat{\sigma}\), it produces more accurate results than MeshUDF, which achieves the second best for most cases. See Table 1.
_Limitations_. Our method introduces a small bias during training; we can reduce the bias but cannot eliminate it. Also, if the object has a very similar color to the background, our method has difficulty differentiating between the foreground object and the background, leading to incorrect results.
## 5 Conclusions
Overall, NeUDF offers a promising approach to the problem of reconstructing both open and watertight models from multi-view images. Its advantages over existing
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c|c} \hline Method & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 & \#8 & \#9 & \#10 & \#11 & \#12 & Mean \\ \hline HF-NeuS [35] & 3.28 & 2.93 & 4.11 & 3.22 & 5.41 & 4.18 & 5.51 & 5.51 & 4.61 & 5.46 & 4.30 & 3.37 & 4.32 \\ \hline VolSDF [37] & 6.22 & 5.75 & 9.45 & 8.33 & 8.99 & 16.37 & 17.01 & 5.95 & 12.88 & 9.08 & 17.39 & 11.54 & 10.75 \\ NeuS [33] & 6.75 & 4.60 & 4.35 & 7.95 & 13.52 & 10.74 & 14.54 & 6.23 & 16.69 & 17.07 & 13.21 & 5.13 & 10.07 \\ NeUDF w/ MeshUDF [14]) & 2.68 & 4.33 & 2.90 & 3.81 & 4.34 & 3.32 & 4.53 & 3.57 & 3.65 & 3.78 & 3.26 & 4.73 & 3.74 \\ NeUDF w/ max-weight & **2.09** & **1.77** & **1.75** & **2.00** & **2.34** & **2.16** & **3.54** & **2.60** & **2.81** & **3.63** & **2.64** & **2.61** & **2.50** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative evaluation of Chamfer distance (lower is better) on the DeepFashion3D [41] dataset. HF-NeuS is trained with additional object mask supervision, while the other methods are not. Our NeUDF results extracted by maximum weights achieve the lowest Chamfer distance for all cases, demonstrating its superior reconstruction capability for open models even without mask supervision. For comparison, our results extracted by MeshUDF [14] are also quantified; they achieve the second best for most cases.
Figure 6: Ablation studies. Left two: The regularization loss \(\mathcal{L}_{reg}\) helps to remove unwanted surfaces on the cuff of the sleeve. The Chamfer distance drops by 5.85%. Right two: Adaptive hierarchical sampling leads to more accurate point sampling and helps to learn more accurate models. The Chamfer distance also drops by 25%.
Figure 7: Ablation study on the usage of ReLU (orange) [10] vs. softplus (blue) [8] in training. The former is non-differentiable at 0 and its gradient vanishes for negative input, whereas the latter is differentiable everywhere. Using ReLU after the output layer of the MLP, the network makes progress at the early stage of training, but collapses after 40K iterations, after which the training loss decreases only through the rendering of backgrounds. In contrast, softplus leads to correct learning of both geometry and color, and consistently decreases the training loss over iterations.
methods lie in the use of a simpler and more accurate density function, a smooth and differentiable UDF representation, and a simple yet effective surface reconstruction strategy tailored to the density function, which greatly improves the learning process. Results from our experiments on the DeepFashion3D and DTU datasets demonstrate the effectiveness of our method, particularly in reconstructing textureless models and regions with limited visibility. Moreover, our method does not rely on object masks, making it more practical in real-world applications.
In the future, we plan to investigate strictly unbiased and occlusion-aware density functions to fill the theoretical gap and further improve the accuracy of our method. We also aim to explore the use of sparse views for UDF learning.
|
2305.11280 | Complexity = Anything Can Grow Forever in de Sitter | Recent developments in anti-de Sitter holography point towards the
association of an infinite class of covariant objects, the simplest one being
codimension-one extremal volumes, with quantum computational complexity in the
microscopic description. One of the defining features of these gravitational
complexity proposals is describing the persistent growth of black hole interior
in classical gravity. It is tempting to assume that the gravitational
complexity proposals apply also to gravity outside their native anti-de Sitter
setting in which case they may reveal new truths about these cases with much
less understood microscopics. Recent first steps in this direction in de Sitter
static patch demonstrated a very different behavior from anti-de Sitter
holography deemed hyperfast growth: diverging complexification rate after a
finite time. We show that this feature is not a necessity and among
gravitational complexity proposals there are ones, which predict linear or
exponential late-time growth behaviors for complexity in de Sitter static
patches persisting classically forever. | Sergio E. Aguilar-Gutierrez, Michal P. Heller, Silke Van der Schueren | 2023-05-18T19:55:45Z | http://arxiv.org/abs/2305.11280v2 | # Complexity = Anything Can Grow Forever in de Sitter
###### Abstract
Recent developments in anti-de Sitter holography point towards the association of an infinite class of covariant objects, the simplest one being codimension-one extremal volumes, with quantum computational complexity in the microscopic description. One of the defining features of these gravitational complexity proposals is describing the persistent growth of black hole interior in classical gravity. It is tempting to assume that the gravitational complexity proposals apply also to gravity outside their native anti-de Sitter setting in which case they may reveal new truths about these cases with much less understood microscopics. Recent first steps in this direction in de Sitter static patch demonstrated a very different behavior from anti-de Sitter holography deemed hyperfast growth: diverging complexification rate after a finite time. We show that this feature is not a necessity and among gravitational complexity proposals there are ones, which predict linear or exponential late-time growth behaviors for complexity in de Sitter static patches persisting classically forever.
## I Introduction
Understanding de Sitter (dS) space holography at a level comparable to AdS/CFT [1; 2; 3] is an important open question in quantum gravity dating back to the early days of AdS/CFT [4; 5; 6].
Key drivers of progress in AdS quantum gravity have been ideas native to quantum information theory and quantum computing, see e.g. [7; 8; 9; 10; 11; 12] for reviews. In recent years these tools have started being applied also to positively curved universes, see e.g. [13; 14; 15; 16]. The focal object in the present article is holographic complexity, which arose as a conjectured geometric counterpart of the hardness of dual state or operator preparation using limited resources on the boundary of AdS/CFT [9; 12; 17]. Considerations based on quantum circuit models of the boundary Hamiltonian time evolution led to two defining features of such geometric quantities in AdS black hole spacetimes: late-time linear growth with time and switchback effect accounting for a delay in the late-time growth due to external perturbations (shock waves). Between 2014 and 2016 three such geometric quantities were identified and thoroughly studied over the past decade: codimension-one boundary-anchored maximal volume slices (CV) [18], gravitational action in the Wheeler-de Witt patch (CA) [19] and spacetime volume of the Wheeler-de Witt patch (CV2.0) [20].
The approach to dS holography that is relevant for our article is the stretched horizon one [14; 15; 21; 22; 23; 24; 25; 26]. It can be thought of as mimicking AdS holography in the setting native to holographic complexity, namely eternal AdS Schwarzschild black holes [27]. The exterior of the latter corresponds to two dS static patches and the AdS asymptotic boundary is mimicked by two stretched horizons, see Fig. 1. Since holographic complexity proposals are geometric constructs, there are no fundamental obstacles to studying them also in this setting. Indeed, over the course of the past two years, first CV in two spacetime dimensions [14; 28; 29] and subsequently, in [30], also CV, CA and CV2.0 in considerable generality were studied in dS stretched horizon holography, see also [31; 32; 33; 34] for related recent developments. The common outcome of these studies is holographic complexity diverging (its time derivative diverging in two-dimensional dS) after a finite stretched horizon time. This hyperfast growth [14] is in stark contrast with predictions of holographic complexity proposals for AdS black holes, which exhibit a persistent linear growth at the classical level consistent with a local quantum circuit model, and it might signal a very nonlocal nature of the stretched horizon degrees of freedom.
In parallel to the first works studying holographic complexity in dS, it was realized that the space of holographic complexity proposals contains infinitely many members [35; 36]. Such Complexity = Anything proposals (CAny) are defined by obeying the late-time linear growth and switchback effect for AdS black holes and can be defined by codimension-1 as well as codimension-0 geometric objects. However, a priori it is not guaranteed that their other behaviors, in particular in dS, will also be shared with CV, CA, and CV2.0. This leads to our motivating question:
_Is hyperfast growth as universal for dS holographic complexities as linear growth and switchback for AdS ones?_
In the present paper, we demonstrate that hyperfast growth is not a necessity within CAny proposals, but a feature appearing for some of them, with different kinds of growth present for another subset. We demonstrate this using a family of CAny proposals defined on constant mean curvature (CMC) spatial slices with our arguments covering also Schwarzschild dS black holes (SdS).
## Setup
The asymptotically dS geometries of interest in \(d+1\) spacetime dimensions are described by the metric
\[\mathrm{d}s^{2}=-f(r)\mathrm{d}t_{L/R}^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+r^{2} \mathrm{d}\Omega_{d-1}^{2}\, \tag{1}\]
where
\[f(r)=1-r^{2}-\tfrac{2\mu}{r^{d-2}} \tag{2}\]
and \(\mathrm{d}\Omega_{d-1}^{2}\) is the metric on a unit \((d-1)\)-dimensional sphere. Meanwhile, the (dimensionless) parameter \(\mu\),
\[\mu\in[0,\,\mu_{N}],\quad\mu_{N}\equiv\tfrac{1}{d}\big{(}\tfrac{d-2}{d}\big{)} ^{\tfrac{d-2}{2}}. \tag{3}\]
allows us to study spacetimes from empty dS (\(\mu=0\)) all the way to the Nariai black hole spacetime (\(\mu=\mu_{N}\)), i.e. the largest black hole that can fit in dS space. Note that in this paper we set the curvature scale associated with a cosmological constant (both positive and negative) to unity. The coordinates (1) are Schwarzschild coordinates and cover the region outside the horizon (the static patch for dS), hence the presence of two time variables, one for each exterior.
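A minimal numerical illustration of (2)-(3): the Python sketch below evaluates \(\mu_{N}\) and locates the black hole and cosmological horizons of an SdS geometry by root-finding on \(f(r)\). The choices \(d=3\) and \(\mu=\mu_{N}/2\) are illustrative assumptions and not values used elsewhere in the text.

```python
import numpy as np
from scipy.optimize import brentq

d = 3                                             # illustrative: SdS_4, i.e. d = 3 spatial dimensions
f = lambda r, mu: 1 - r**2 - 2*mu / r**(d - 2)    # blackening factor, Eq. (2)
mu_N = (1/d) * ((d - 2)/d)**((d - 2)/2)           # Nariai bound, Eq. (3)

mu = 0.5 * mu_N                                   # a sub-Nariai black hole (illustrative choice)
r_N = np.sqrt((d - 2)/d)                          # degenerate horizon radius reached at mu = mu_N
r_bh = brentq(f, 1e-6, r_N, args=(mu,))           # black-hole horizon
r_c = brentq(f, r_N, 2.0, args=(mu,))             # cosmological horizon
print(f"mu_N = {mu_N:.4f}, r_bh = {r_bh:.4f}, r_c = {r_c:.4f}")
```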
In analogy with AdS holography [27] and following [28; 29; 30; 14], we will be interested in introducing stretched horizons at \(r=r_{\mathrm{st}}\) with constant \(t_{L}\) and \(t_{R}\) slices thereof defining states in a putative microscopic description involving two Hilbert spaces, one for each stretched horizon. We orient both time directions to increase towards future infinity and consider left-right symmetric time evolution in \(t_{L}=t_{R}\equiv\tfrac{t}{2}\). The venerable CV proposal amounts to finding stretched horizon anchored codimension-1 volumes and studying them as a function of \(t\). Since this and any other holographic complexity proposal require connecting two boundaries through an inflating region complementary to the static patch, in explicit calculations we will be using ingoing Eddington-Finkelstein coordinates given by
\[\mathrm{d}s^{2}=-f(r)\mathrm{d}v^{2}+2\mathrm{d}v\mathrm{d}r+r^{2}\mathrm{d} \Omega_{d-1}^{2}. \tag{4}\]
Because of the left-right symmetry, it will be enough to consider only one patch of such coordinates.
## Key idea
Fig. 1 depicts the outcome of CV calculations in stretched horizon dS holography from [28; 29; 14; 30]. Similar considerations apply to CA and CV2.0. What one sees is that extremal volume slices cease to exist for large or small enough \(t\) on the stretched horizon. This occurs because, as a result of extremization, the outermost CV carriers approach and touch future or past infinity. In \(d=1\) this implies a singular derivative of the complexity with respect to \(t\), and in \(d\geq 2\) it implies, on top of that, a divergence of the complexity itself; this is the precise statement of the hyperfast growth.
In dS or SdS geometry, there are infinitely many other spatial slices that do not exhibit hyperfast growth. For example, constant global time slices of dS depicted with orange in Fig. 1 exhibit persistent exponential growth at late times. Of course, at this level, such slices are not covariantly defined and it is not clear if their volumes arise from a particular CAny proposal.
The key idea of the present paper is to find a family of codimension-one objects that avoid future infinity (to start with, and later also past infinity) in a manner similar to the orange slices in Fig. 1, and that at the same time fall into the class of CAny proposals.
As it turns out, we do not have to search far: CMC slices that appeared earlier in the context of holographic complexity in [36; 37; 38] will have precisely the desired property. Such slices will bend towards the past or future light cone as their curvature, respectively, increases or decreases. Then, there should exist a class of holographic complexity notions that without fine-tuning avoids the hyperfast growth associated with touching \(\mathcal{I}^{+}\) or \(\mathcal{I}^{-}\), or at best both, rendering the observables finite during the time evolution.
Figure 1: Penrose diagram of dS\({}_{d+1}\) space, where the stretched horizon is shown in green at \(r_{\mathrm{st}}\) and the extremal volumes of the CV proposal in pink. The origin of the hyperfast growth is the extremal slices approaching the lightcone that touches infinity at finite stretched horizon time (i.e. at \(\tau_{\infty}\) and \(-\tau_{\infty}\)). In orange, we display slices of constant global time, which exhibit persistent growth as they avoid future infinity. The key idea of our paper is to find analogous slices that belong to CAny and to understand their properties.
## The relevant class of complexity proposals
CAny proposals [35; 36] are defined in a two-step procedure. First, one defines a boundary (here: stretched horizon) anchored geometric region using extremization and, subsequently, one characterizes it in terms of, in general, another geometric functional yielding a non-negative number: the value of holographic complexity. Of course, the challenge lies in carving out the space of such functionals that give rise to the linear growth and the switchback effect for AdS black holes. What is known so far are several classes of objects, specified by continuous parameters, for which these properties have been demonstrated.
In our work, we will be interested in (spatial) volumes of stretched horizon-anchored CMC slices. Maximal volume slices giving rise to CV fall into this class, but we will be clearly interested in other members. Along the lines of CAny, they can be obtained by extremizing
\[\begin{split}\mathcal{C}_{\text{CMC}}=\tfrac{1}{G_{N}}\bigg{[} \alpha_{+}\int_{\Sigma_{+}}&\mathrm{d}^{d}\sigma\,\sqrt{h}+ \alpha_{-}\int_{\Sigma_{-}}&\mathrm{d}^{d}\sigma\,\sqrt{h}\\ +&\alpha_{B}\int_{\mathcal{M}}&\mathrm{d }^{d+1}x\sqrt{-g}\bigg{]},\end{split} \tag{5}\]
where \(\mathcal{M}\) is the codimension-zero bulk region that in the end will play no role; \(\Sigma_{\pm}\) are its future and past boundaries, which are the crucial objects for us, and \(\alpha_{\pm}\), \(\alpha_{B}\) are positive constants. The extremization of (5) indeed confirms that \(\Sigma_{\pm}\) are CMC slices,
\[\left.K\right|_{\Sigma_{\epsilon}}=-\epsilon\frac{\alpha_{B}}{\alpha_{ \epsilon}},\quad\epsilon=\pm\, \tag{6}\]
where \(K\) is the trace of the extrinsic curvature. Our CAny complexity carrier will be
\[\mathcal{C}^{\epsilon}\equiv\frac{1}{G_{N}}\int_{\Sigma_{\epsilon}}\mathrm{d }^{d}\sigma\,\sqrt{h}\,, \tag{7}\]
where at this level we are free to pick either \(\Sigma_{+}\) or \(\Sigma_{-}\). The results of [36] guarantee that (7) is a valid CAny proposal.
## III Late time growth
The evaluation of the volume (7) of the CMC slice \(\Sigma_{\epsilon}\) can be recast as [38],
\[\mathcal{C}^{\epsilon}=\frac{2\Omega_{d-1}}{G_{N}}\int_{r_{\text{st}}}^{r_{ \text{t}}}\frac{r^{2(d-1)}\,\mathrm{d}r}{\sqrt{-\mathcal{U}(P_{v}^{\epsilon}, \,r)}}\, \tag{8}\]
where
\[P_{v}^{\epsilon}=\frac{G_{N}}{2\Omega_{d-1}}\frac{\partial\mathcal{C}_{\text{ CMC}}}{\partial v^{\prime}(r)} \tag{9}\]
is the conserved momentum in an analog particle motion problem, and
\[\mathcal{U}(P_{v}^{\epsilon},\,r)=-f(r)r^{2(d-1)}-\left(P_{v}^{\epsilon}-|K| \frac{\epsilon}{d}r^{d}\right)^{2} \tag{10}\]
is the particle's effective potential, whereas \(r=r_{t}\) is the turning point, i.e. the location where \(\mathcal{U}(P_{v}^{\epsilon},\,r_{t})=0\) or, in geometric terms, the tip of the CMC slice (\(r^{\prime}(v)=0\) there). We are interested in the time evolution of (7) measured with respect to the time at the stretched horizon \(r_{\text{st}}\). Using the technology of [35; 36] one finds at late times
\[\lim_{t\to\infty}\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{C}^{\epsilon}=\frac{ \Omega_{d-1}}{G_{N}}\sqrt{-f(r_{f})r_{f}^{2(d-1)}} \tag{11}\]
where we consider solutions characterized by
\[\lim_{t\to\infty}\frac{\mathrm{d}P_{v}^{\epsilon}}{\mathrm{d}t}=0 \tag{12}\]
and \(r_{f}\equiv\lim_{t\to\infty}r_{t}\) is the final value of the turning point. Condition (12) can also be reformulated as finding the maximum of the potential (10):
\[\mathcal{U}\bigg{|}_{r_{f}}=0,\quad\partial_{r}\mathcal{U}\bigg{|}_{r_{f}}=0, \quad\partial_{r}^{2}\mathcal{U}\bigg{|}_{r_{f}}<0. \tag{13}\]
From (13), one derives a relation for \(r_{f}\) valid for \(K\neq 0\)
\[\begin{split} 0=& 4r_{f}f\left(r_{f}\right)\left((d-1 )f^{\prime}\left(r_{f}\right)+K^{2}r_{f}\right)\\ &+4(d-1)^{2}f\left(r_{f}\right){}^{2}+r_{f}^{2}f^{\prime}\left(r_ {f}\right){}^{2}\.\end{split} \tag{14}\]
We now specialize to asymptotically dS backgrounds, employing the blackening factor (2), and discuss the different cases arising under our proposal.
* **Empty dS space**, \(\mu=0\), \[r_{f}^{2}=\frac{K^{2}-2d(d-1)\pm|K|\sqrt{K^{2}-4(d-1)}}{2(K^{2}-d^{2})}\.\] (15) Then, in order to have at least one turning point at late times, \(r_{f}\in\mathbb{R}\), in empty dS space with \(d\geq 2\) spatial dimensions, we find: \[|K|\geq K_{\text{crit, dS}}=2\sqrt{d-1}\.\] (16) The CMC slices obeying this bound are displayed in Fig. 2. However, notice that the relation (16) is not valid when \(d=1\), since for \(d=1\) and \(|K|<1\) Eq. (15) does not have a valid solution; instead one finds \(K\geq 1\) for the CMC slices to evolve at arbitrarily late times.
* For the **Nariai black hole spacetime**, \(\mu=\mu_{N}\), one finds that \(f(r_{f})=f^{\prime}(r_{f})=0\) at the location \[r_{f}=\sqrt{\frac{d-2}{d}}\,\] (17)
such that the turning point coincides with the cosmological horizon. However, for \(r_{f}\) to be the final slice, it also needs to be a maximum of the potential \(\mathcal{U}(P_{v}^{\epsilon},\,r)\) in (10), which leads to the requirement,
\[|K|\geq K_{\text{crit,\,N}}\equiv\sqrt{d}. \tag{18}\]
* For generic \(\mu\), one cannot derive closed-form solutions for \(r_{f}\) in (14), except for the \(\text{SdS}_{3}\) space, which is locally identical to \(\text{dS}_{3}\). We explicitly find that (16) is always respected in such a case. For higher dimensions and generic \(\mu\), the bounds on \(|K|\) will lie between (16) and (18). Such black holes are unstable and decay in empty \(\text{dS}_{d+1}\) space [39].
Note that the solutions of (15) in \(d=1\) and \(d=2\), as well as for \(\text{SdS}_{3}\), lead to \(r_{f}\to\infty\) when \(|K|=K_{\text{crit}}\). We find exponential growth for (8) in these two very special cases. For \(d=3\) and higher we find finite \(r_{f}\) for \(K=K_{\text{crit}}\), which translates to linear growth.
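The statements above admit a quick numerical cross-check. The Python sketch below, for empty dS, verifies that the closed-form turning point (15) solves the final-slice condition (14) and evaluates the late-time growth rate (11) up to the overall \(\Omega_{d-1}/G_{N}\) prefactor; the choices \(d=3\) and \(K=1.2\,K_{\text{crit}}\) are illustrative assumptions.

```python
import numpy as np

d = 3                                           # illustrative: empty dS_4
f = lambda r: 1 - r**2                          # Eq. (2) with mu = 0
df = lambda r: -2*r

def final_slice_condition(r, K):
    """Right-hand side of Eq. (14); the final turning point r_f is a root of this."""
    return (4*r*f(r)*((d - 1)*df(r) + K**2*r)
            + 4*(d - 1)**2*f(r)**2 + r**2*df(r)**2)

K_crit = 2*np.sqrt(d - 1)                       # Eq. (16)
K = 1.2*K_crit                                  # a super-critical CMC slice
disc = abs(K)*np.sqrt(K**2 - 4*(d - 1))
r_f = np.sqrt((K**2 - 2*d*(d - 1) + disc) / (2*(K**2 - d**2)))   # '+' branch of Eq. (15)

print(final_slice_condition(r_f, K))            # ~0: Eq. (15) indeed solves Eq. (14)
rate = np.sqrt(-f(r_f)*r_f**(2*(d - 1)))        # late-time rate, Eq. (11), without Omega/G_N
print(r_f, rate)
```

For this choice the turning point lies beyond the cosmological horizon, \(r_{f}>1\), so \(-f(r_{f})>0\) and the rate is real and finite, in line with the linear growth found above for \(d\geq 3\).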
## IV Restoring time symmetry
Although the rate of growth of the observables in (7) evaluated on the CMC slices asymptotes to the constant value (11) at late times when \(K>K_{\text{crit}}\), the CMC slices still hit \(\mathcal{I}^{-}\) at minus the critical time, as illustrated in Fig. 2. This produces hyperfast behavior in the past. The opposite case occurs by symmetry for \(K<-K_{\text{crit}}\).
A natural way to restore time-reversal symmetry in the observables is to modify the second step of the CAny prescription so that it selects the result with the minimal value among the slices with a given value for \(|K|\),
\[\mathcal{C}_{\text{sym}}=\min_{\epsilon=+,-}\mathcal{C}^{\epsilon}\,. \tag{19}\]
The minimization is performed over the existing slices, so technically it is only a minor modification. This procedure does not alter the conclusion that the constructed covariant notions are complexity proposals, as the linear growth and the switchback effect for AdS black holes remain present. As a result, this idea can be thought of as a further enlargement of the space of CAny proposals and might even be advantageous when considering more complicated black holes in AdS.
In the dS case, our improved proposal (19) will receive a contribution from a slice with \(K<0\) at early times, and \(K>0\) at late times, as shown in Fig. 3. A potential subtlety with this generalization is that the complexity growth rate (11) might become discontinuous at the time when the change of CMC slice occurs. However, this is in principle allowed in the definition of holographic complexity proposals [35; 36].
Figure 3: Our time symmetric complexity proposal (19) in empty \(\text{dS}_{d+1}\) allowing both early- and late-time linear growth. At negative times, the CMC with \(K<-K_{\text{crit}}\) dominates, shown in blue; while at positive times, the CMC with \(K>K_{\text{crit}}\) dominates. The exchange of dominance at \(t=0\) between CMC slices is indicated by the green dots on the stretched horizon.
Figure 2: CMC slices for \(K\geq K_{\text{crit,\,dS}}\) in empty \(\text{dS}_{d+1}\) space (above) and \(\text{SdS}_{d+1}\) (below). All the slices remain bounded below \(\mathcal{I}^{+}\) and the corresponding complexity observable (8) generically displays a late-time linear growth (11), except for some fine-tuned situations discussed in the main text. The solutions with \(K<0\) can be obtained by a top-bottom reflection.
## Discussion
Our paper demonstrates that the hyperfast growth of holographic complexity in asymptotically dS spacetimes, as found earlier in the CV, CA, and CV2.0 proposals, is not a universal feature in the CAny landscape. Employing volumes of codimension-one CMC slices, which are members of the CAny family, we show that holographic complexity can exhibit persistent linear or exponential growth in asymptotically dS universes. Physically, the exponential behavior occurs when the final slice asymptotes to the future/past infinity of the inflating region. Let us emphasize that linear growth can also be obtained upon cutting out the dS geometry past some final (for late times) slice, as was done in [30]. In the present paper, it was obtained without modifying the dS geometry in any way and it originates from the properties of the CAny proposals we considered.
From the perspective of the holographic description, it is tempting to speculate that the presence or the absence of the hyperfast growth is related to the choice of a penalty schedule in the microscopic definition of complexity, i.e. different designations of which operations are hard and which are easy to implement. It would be very interesting to study this idea further in the context of the class of SYK models associated with JT gravity with a positive cosmological constant [40; 41; 42].
More along these lines, specializing to the CV proposal in two spacetime dimensions, where the complexity carriers are geodesics, it is known that in dS there are no spatial geodesics past the critical time on the stretched horizon. However, using a closed-form expression for the geodesic distance one obtains an answer with both real and imaginary parts [43; 44]. While it might be tempting to speculate about the interpretation in terms of complexity, e.g. with the real part accounting for unitary and the imaginary part for possible non-unitary gates, we find it important to stress that our final result (19) does not require any departure from a standard counting interpretation of unitary gates.
Furthermore, the space of CAny proposals is vast, and arguably one of the main open problems for the field of holographic complexity is to study it in a more systematic manner. To this end, our results show the existence of a so far unrecognized structure in the CAny landscape, stemming from the presence of the hyperfast growth (so far demonstrated for CV, CA, and CV2.0) or its absence (demonstrated here for CMC complexity carriers). One intriguing future research direction would be to find more CAny proposals that escape the hyperfast growth and, another, to seek other structures present. On the former front, we want to emphasize that there is a continuum of CAny proposals that do not exhibit hyperfast growth, as encapsulated by (16).
We also want to highlight a potentially puzzling feature of the class of CAny proposals we considered, which to the best of our knowledge has not been previously seen in the literature. As illustrated in Fig. 2, asymmetric time evolution may occur such that hyperfast growth is observed in the past or future, while the linear or exponential growth remains for the late- or early-time regime, respectively. If we want to assign a Nielsen unitary complexity [12; 45] interpretation to this setting, then the complexity of a unitary is the same as that of its inverse. This implies that time-asymmetric quantities in time-symmetric setups either do not capture (this type of) complexity or the considered time evolution is not unitary.
The lack of unitary evolution for such observables would indicate that they might not represent complexity in asymptotically dS space. However, one can restore unitarity in the observables by introducing a covariant protocol that alternates between CMC slices of opposite sign, where the slice that minimizes complexity is chosen. This consideration led us to a new CAny proposal encapsulated by (19), which is time-symmetric. It would be interesting to understand the issue of possible time asymmetry in CAny proposals, which might lead to additional restrictions or to adopting approaches like (19), by bringing their understanding to the level at which CV, CA, and CV2.0 have been tested over the past several years.
The same occurs even in the case best understood for holographic complexity, the AdS Schwarzschild black hole, where the location of the early/late turning point is not time-reversal symmetric. This causes the general codimension-one CAny proposals on CMC slices to violate unitary evolution. However, if one proceeds as we do in (19), the time symmetry is restored. This point illustrates a potentially important subtlety in the CAny approach that calls for further studies.
Finally, let us reiterate that the defining features for CAny proposals are the late-time linear growth and the switchback effect for AdS black holes. If one were to add to this list the hyperfast growth in dS, our paper could be then viewed as ruling out a subclass of CAny proposals.
Acknowledgements. We would like to thank Damian Galante and Qi-Feng Wu for useful discussions on de Sitter space and complexity, and Alexandre Serantes for a collaboration on a related topic. The work of SEAG is partially supported by the KU Leuven C1 grant ZKD1118 C16/16/005.
|
2306.08281 | 3-Dimensional Sonic Phase-invariant Echo Localization | Parallax and Time-of-Flight (ToF) are often regarded as complementary in
robotic vision where various light and weather conditions remain challenges for
advanced camera-based 3-Dimensional (3-D) reconstruction. To this end, this
paper establishes Parallax among Corresponding Echoes (PaCE) to triangulate
acoustic ToF pulses from arbitrary sensor positions in 3-D space for the first
time. This is achieved through a novel round-trip reflection model that
pinpoints targets at the intersection of ellipsoids, which are spanned by
sensor locations and detected arrival times. Inter-channel echo association
becomes a crucial prerequisite for target detection and is learned from feature
similarity obtained by a stack of Siamese Multi-Layer Perceptrons (MLPs). The
PaCE algorithm enables phase-invariant 3-D object localization from only 1
isotropic emitter and at least 3 ToF receivers with relaxed sensor position
constraints. Experiments are conducted with airborne ultrasound sensor hardware
and back this hypothesis with quantitative results. | Christopher Hahne | 2023-06-14T06:39:34Z | http://arxiv.org/abs/2306.08281v2 | # 3-Dimensional Sonic Phase-invariant Echo Localization
###### Abstract
Parallax and Time-of-Flight (ToF) are often regarded as complementary in robotic vision where various light and weather conditions remain challenges for advanced camera-based 3-Dimensional (3-D) reconstruction. To this end, this paper establishes Parallax among Corresponding Echoes (PaCE) to triangulate acoustic ToF pulses from arbitrary sensor positions in 3-D space for the first time. This is achieved through a novel round-trip reflection model that pinpoints targets at the intersection of ellipsoids, which are spanned by sensor locations and detected arrival times. Inter-channel echo association becomes a crucial prerequisite for target detection and is learned from feature similarity obtained by a stack of Siamese Multi-Layer Perceptrons (MLPs). The PaCE algorithm enables phase-invariant 3-D object localization from only 1 isotropic emitter and at least 3 ToF receivers with relaxed sensor position constraints. Experiments are conducted with airborne ultrasound sensor hardware and back this hypothesis with quantitative results. The code and data are available at [https://github.com/hahnec/spiel](https://github.com/hahnec/spiel).
## I Introduction
Certain animal species that move in 3-Dimensional (3-D) space perceive surroundings by pulses emitted and bounced off from obstacles. Can tomorrow's robots do likewise?
Over recent decades, 3-D computer vision has been dominated by stereoscopic parallax and Time-of-Flight (ToF) sensing. Depth perception in the light spectrum is a well-studied subject adopted by Simultaneous Localization and Mapping (SLAM) to help robots navigate. However, varying weather (e.g., fog, rain, etc.) or severe lighting conditions impair computational imaging. Only a little attention has thus far been given to 3-D reconstruction from detectors working at other wavelengths. To address these challenges, this paper introduces Parallax among Corresponding Echoes (PaCE) as a depth-sensing hybrid incorporating triangulation and ToF concepts at a geometric level. To the best of the author's knowledge, this is the first systematic feasibility study on 3-D object localization from parallax-based ToF being invariant of the frequency and phase.
Early research on sonar-based echolocation for mobile robots retrieved object points in 2-D utilizing ellipse intersections [1, 2, 3]. A considerable number of recent patents also claim 2-D ellipse intersections as a localization method [4, 5, 6]. Several studies attempted to mimic a bat's acoustic perception by modelling ears with two microphones and inferring obstacle locations using spherical coordinates [7, 8, 9, 10]. A recent breakthrough in 3-D reconstruction from audible acoustics is based on deep learning architectures trained on camera-based depth maps without physical modelling [11, 10, 12, 13].
The existing literature and inventions disregarded the potential of 3-D localization from intersecting ellipsoids and a preceding echo association. Here, echo association refers to the correct assignment of echoes to actual targets. As opposed to phased arrays, where transducers are separated by a multiple of the wavelength, the presented echo correspondence and geometric localization model fill this gap by relaxing sensor and position constraints in 3-D sonar tracking. This study hypothesizes that phase correlation methods can be substituted with machine learning algorithms to gain confidence in the localization and achieve more flexibility on the hardware and application side. In particular, the novel ellipsoid intersection model proposed in this work is considered the most generic solution for 3-D localization from at least 3 detectors and only one emitter without constraints on their positions in space. A European patent application covering PaCE has been filed by the University of Bern [14].
This paper begins with a literature overview in Section II. The ellipsoid intersection model is introduced in Section III, establishing the need for echo association in Section IV, where further details on novel algorithmic aspects are provided. The experimental work in Section V shows results rendered by PaCE as a proof-of-concept. Section VI concludes while reflecting on the framework's potential and prospects.
## II Related Work
Technology families related to PaCE are Phased-Arrays (PAs), stereoscopic vision cameras, and Real-Time Locating Systems (RTLS) using Time Difference of Arrival (TDoA). PaCE is an active ToF-based triangulation method and substantially different from prior work. Its localization scheme
Fig. 1: Robotic navigation demands 3-D sensing in challenging environments. This study proposes Parallax among Corresponding Echoes (PaCE) as a novel framework (left) to triangulate echoes in 3-D space (right).
is phase-invariant, detectors can be arbitrarily positioned, and targets do not carry a tag or beacon unit. Also, PAs generally comprise a large number of transducers, which - from the author's standpoint - is considered a redundancy. Thus, the motivation of this study originates from the idea that the transducer number requirement in PAs can be traded for computational effort while aiming for comparable performance.
An initial attempt for sonar (sound navigation and ranging) with a sparse number of sensors dates back to Peremans _et al._[1], who strove to locate objects along a 2-D plane for robotic navigation. A follow-up study by Wijk and Christensen extended this triangulation-based approach by fusing measurements from different points in time for 2-D mobile robot indoor pose tracking [2]. Bank and Kampke generalized this 2-D triangulation by proposing tangential regression for ellipse intersections and built a robot equipped with an array of transducers to reconstruct a high-resolution 2-D map of surroundings [3]. Notably, intersection methods were reported as part of trilateration in the radar imaging field [15, 16]. For instance, Malanowski and Kulpa explored 3-D target localization based on 3 transmitters and a receiver for multi-static radar in aeronautics [17]. In the same years, the group led by Peremans published an avid study working towards 3-D localization based on the direction of spectral cues from two bat-shaped outer ears in an attempt to mimic bat perception [8]. Kuc and Kuc investigated echolocation as an orientation aid for blind people [18]. More recently, a 2-D localization system for pen or finger tracking was proposed by Juan and Hu [19], who employed multiple ultrasonic sensors in conjunction with Newton-Raphson optimization and Kalman filtering to recover 2-D positions at standard deviations below 1 \(\mathrm{cm}\).
An emerging related research field concerns learned acoustic 3-D reconstruction in the audible frequency range [10, 11, 12, 13]. In an experimental study, Generative Adversarial Networks (GANs) are trained from audible sweep chirps with the goal of recreating stereo depth maps from only a speaker and at least two consumer microphones [10, 11, 12, 13]. However, end-to-end supervised learning of acoustic 3-D reconstruction is in an early research stage and faces challenges from potential biases implicit to the datasets, domain gap and disturbances from other audible sound sources, as commonly encountered in industrial environments.
Over the last decade, peers have patented ultrasound transducer setups for object localization based on 2-D ellipse intersection models [4, 5, 6]. The start-up Toposens GmbH has taken up from there with commercial products targeting industrial environments [5]. Their solution employs two consecutive ellipse intersections and requires orthogonally arranged transducers and a phase correlation method [5]. Future trends on ultrasound systems indicate a surge of collaborative and autonomous robotics in medical applications demanding effective localization capabilities [20].
The herein demonstrated framework complements prior work by enabling round-trip 3-D tracking from at least 3 sensors using a novel ellipsoid intersection model. This method is distinct in that it entirely relieves sensor position constraints. In particular, the proposed framework enables all sensors to be arbitrarily located or even move freely in 3-D space as long as their current position is known. Further, the presented method is invariant of the phase signal, distinguishing it from methods that rely on beamforming or matched filters. Instead, echo correspondence is recognized as an ambiguity problem that may potentially yield false object locations. Correct matching is addressed by training a Siamese MLP stack with features from Multimodal Exponentially Modified Gaussian (MEMG) distributions to overcome this ambiguity. Thereby, the proposed framework avoids the need for mechanical extensions such as waveguide or baffle designs and prevents related artefacts (e.g., cross-talk). To the best of the author's knowledge, this work presents the most generic model-based 3-D target localization scheme for a sparse number of arbitrarily located sensors.
## III Ellipsoid Intersection
### _Echo Detection_
In classical 1-dimensional (1-D) range finding, the transmitting and receiving transducers are identical, which implies that the outward and return travel paths coincide. In this special case, scalar distances \(s_{k}^{\star}\) from \(k\in\{1,2,\ldots,K\}\) echoes can be readily obtained by
\[s_{k}^{\star}=c_{s}\frac{t_{k}^{\star}}{2}\,,\,\text{where }t_{k}^{ \star}=\{t_{i}|t_{i}\in\mathbb{R}^{T}\wedge\nabla_{t}\left|\mathcal{H}\left[y_ {n}(t_{i})\right]\right|>\tau\} \tag{1}\]
where \(y_{n}(t_{i})\) denotes the captured amplitude data from sensor \(n\), \(c_{s}\) is the propagation velocity and the divider accounts for the equidistant forward and backward travel paths. Each time sample \(t_{i}\in\mathbb{R}^{T}\), with a total number of \(T\), qualifies to be a detected Time-of-Arrival (ToA) denoted by \(t_{k}^{\star}\) once the gradient of the Hilbert-transformed magnitude \(\nabla_{t}\left|\mathcal{H}\left[y_{n}(t_{i})\right]\right|\) surpasses a threshold \(\tau\). Note that such a single sensor setup generally yields radial distances \(s_{k}^{\star}\) making accurate directional information retrieval of the surrounding targets intractable.
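A minimal Python sketch of the envelope-based detection (1) is given below; the sound speed \(c_{s}=343\,\mathrm{m/s}\), the threshold value, and the grouping of consecutive above-threshold samples into one onset per echo are illustrative assumptions rather than the prototype's actual settings.

```python
import numpy as np
from scipy.signal import hilbert

def detect_toa(y, fs, c_s=343.0, tau=0.05):
    """Eq. (1): threshold the gradient of the Hilbert envelope of one channel y_n.

    Returns the detected ToAs t_k* in seconds and the radial distances s_k* in metres.
    """
    env = np.abs(hilbert(y))                    # |H[y_n(t_i)]|
    grad = np.gradient(env) * fs                # temporal gradient of the envelope
    idx = np.flatnonzero(grad > tau)            # samples exceeding the threshold tau
    # keep only the first sample of each contiguous run, i.e. one ToA per echo onset
    onsets = idx[np.insert(np.diff(idx) > 1, 0, True)] if idx.size else idx
    t_star = onsets / fs
    return t_star, c_s * t_star / 2.0           # halved round-trip path, Eq. (1)
```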
### _Ellipsoid Surface Geometry_
Pinpointing a 3-D landmark generally involves advanced geometric modelling. Using a receiver and a transmitter in a so-called round-trip setup, ToA detection yields \(t_{k}^{\star}\), whereas outward and return travel paths may be non-equidistant, i.e., (1) does not hold. This is because a distinct receiver position causes the travel direction to change after target reflection spanning a triangle between a possible target position \(\bar{\mathbf{s}}_{1}\), transmitter \(\mathbf{u}\in\mathbb{R}^{3}\) and receiver \(\mathbf{v}_{1}\in\mathbb{R}^{3}\) (see Fig. 2). This triangle has its roots in the parallax concept, where an object point is observed from at least 2 different viewpoints [21]. The vector between \(\mathbf{u}\) and \(\mathbf{v}_{n}\) can be regarded as the _baseline_, and while this is given, the triangle's side lengths (i.e., travel paths) remain unknown in the single receiver case. Here, all travel path candidates form triangles with equal circumferences fixed at the baseline. Closer inspection of Fig. 2 reveals that feasible object positions \(\bar{\mathbf{s}}_{n}\in\mathbb{R}^{3}\) yield
an infinite set of solutions located on an ellipse for a 2-D plane, and - when extended to 3-D space - this solution set is represented by an ellipsoid. The surface of an ellipsoid thus reflects potential target locations in a 3-D round-trip scenario comprising a single transmitter and receiver. Adding a second receiver capturing a ToA from the same target spans a second ellipsoid that intersects the first ellipsoid along a curve, carrying a subset of solution points and thus narrowing down the position candidate set. Only by introducing a third receiver and its respective ellipsoid can the target's 3-D location ambiguity be resolved, as the surface curve and the third ellipsoid meet at an intersection point in the send direction. It is mathematically demonstrated hereafter that a group of detected echoes \(t_{n,k}^{\star}\) reflected from the same object and captured by \(N\geq 3\) sensors enables retrieval of the target position that resides on \(N\) ellipsoid surfaces.
Let any point \(\bar{\mathbf{s}}_{n}=[\bar{x}_{n},\bar{y}_{n},\bar{z}_{n}]^{\intercal}\) lie on the surface of ellipsoid \(n\) if \(Q\left(\bar{\mathbf{s}}_{n},\mathbf{r}_{n}\right)=0\), which is given by
\[Q\left(\bar{\mathbf{s}}_{n},\mathbf{r}_{n}\right)=\left(\frac{\bar{x}_{n}}{r_ {n}^{(a)}}\right)^{2}+\left(\frac{\bar{y}_{n}}{r_{n}^{(b)}}\right)^{2}+\left( \frac{\bar{z}_{n}}{r_{n}^{(c)}}\right)^{2}-1 \tag{2}\]
where \(\mathbf{r}_{n}=[r_{n}^{(a)},r_{n}^{(b)},r_{n}^{(c)}]^{\intercal}\) is a radii vector. It consists of a major axis \(r_{n}^{(b)}\) drawn from \(t_{n,k}^{\star}\), which is given by
\[r_{n}^{(b)}=\frac{t_{n,k}^{\star}}{2} \tag{3}\]
and the minor axes \(r_{n}^{(a)}=r_{n}^{(c)}\) are obtained by
\[r_{n}^{(a)}=r_{n}^{(c)}=\frac{1}{2}\sqrt{\left(t_{n,k}^{\star}\right)^{2}- \|\mathbf{u}-\mathbf{v}_{n}\|_{2}^{2}} \tag{4}\]
with \(\mathbf{u}\in\mathbb{R}^{3\times 1}\) as the transmitter and \(\mathbf{v}_{n}\in\mathbb{R}^{3\times 1}\) as the receiver positions located at the focal points of each ellipsoid. In fact, note that the two equal minor axes make each ellipsoid a prolate spheroid. The above definitions are only valid for those ellipsoids whose center resides at the coordinate origin and whose axes \(\mathbf{r}_{n}\) are aligned with the coordinate axes. Generally, a transducer ellipsoid may be displaced and arbitrarily oriented. To account for that, global space coordinates \(\mathbf{s}=[x,y,z]^{\intercal}\) are translated by an ellipsoid center \(\mathbf{c}_{n}=[\hat{x}_{n},\hat{y}_{n},\hat{z}_{n}]^{\intercal}\) and mapped onto its surface by a rotation matrix \(\mathbf{R}_{n}\in\text{SO}(3)\) so that
\[\bar{\mathbf{s}}_{n}=\mathbf{R}_{n}^{\intercal}\left(\mathbf{s}-\mathbf{c}_{ n}\right) \tag{5}\]
which makes use of \(\mathbf{R}_{n}^{\intercal}=\mathbf{R}_{n}^{-1}\) as a rotation matrix property.
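The construction in (2)-(5) can be made concrete with the short Python sketch below, which builds the radii, center and rotation of a single round-trip ellipsoid from an emitter, a receiver and a detected path length, and verifies that a target generating that path length indeed satisfies \(Q=0\). The auxiliary vector used to complete the rotation matrix and the particular test geometry are illustrative assumptions.

```python
import numpy as np

def ellipsoid_from_toa(u, v, toa):
    """Radii (Eqs. 3-4), center and rotation (Eq. 5) of the ellipsoid |s-u| + |s-v| = toa."""
    c = 0.5 * (u + v)                           # center at the midpoint of the two foci
    baseline = v - u
    d = np.linalg.norm(baseline)
    r_major = 0.5 * toa                         # Eq. (3), aligned with the local y-axis
    r_minor = 0.5 * np.sqrt(toa**2 - d**2)      # Eq. (4)
    e2 = baseline / d                           # local y-axis = focal (baseline) direction
    aux = np.array([1., 0., 0.]) if abs(e2[0]) < 0.9 else np.array([0., 1., 0.])
    e1 = np.cross(e2, aux); e1 /= np.linalg.norm(e1)
    e3 = np.cross(e1, e2)
    R = np.column_stack([e1, e2, e3])           # columns are the local axes in the global frame
    return np.array([r_minor, r_major, r_minor]), c, R

def Q(s, radii, c, R):
    """Eqs. (2) and (5): canonical quadric evaluated in the ellipsoid's local frame."""
    s_bar = R.T @ (s - c)
    return np.sum((s_bar / radii) ** 2) - 1.0

# quick check with an arbitrary emitter/receiver pair and target (illustrative values)
u, v, s = np.array([0., 0., 0.]), np.array([0.075, 0., 0.]), np.array([0.1, 0.2, 0.15])
toa = np.linalg.norm(s - u) + np.linalg.norm(s - v)      # outward plus return path length
print(Q(s, *ellipsoid_from_toa(u, v, toa)))              # ~0: the target lies on the surface
```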
### _Intersection via Root-finding_
According to the aforementioned geometric definitions, a potential target ideally resides on the surface of at least 3 ellipsoid bodies. In a mathematical sense, this statement holds true for a point \(\mathbf{s}^{\star}\in\mathbb{R}^{3}\) that satisfies
\[Q\left(\mathbf{R}_{n}^{\intercal}(\mathbf{s}^{\star}-\mathbf{c}_{n}),\mathbf{ r}_{n}\right)=0\,\quad\forall n \tag{6}\]
by plugging (5) into (2). Consequently, solving for \(\mathbf{s}^{\star}\) breaks down to classical root-finding, so employing a multivariate Gradient Descent (GD) method is sufficient here. The GD update at iteration \(j\) with step size \(\gamma\) reads
\[\mathbf{s}^{(j+1)}=\mathbf{s}^{(j)}-\gamma\mathbf{J}^{-1}\mathbf{f} \tag{7}\]
where the ellipsoid function vector \(\mathbf{f}\in\mathbb{R}^{N\times 1}\) is given by
\[\mathbf{f}=\left[Q\left(\bar{\mathbf{s}}_{1}^{(j)},\mathbf{r}_{1}\right),Q \left(\bar{\mathbf{s}}_{2}^{(j)},\mathbf{r}_{2}\right),\ldots,Q\left(\bar{ \mathbf{s}}_{N}^{(j)},\mathbf{r}_{N}\right)\right]^{\intercal} \tag{8}\]
with \(\bar{\mathbf{s}}_{n}^{(j)}=\mathbf{R}_{n}^{\intercal}\left(\mathbf{s}^{(j)}- \mathbf{c}_{n}\right)\). The Jacobian \(\mathbf{J}\in\mathbb{R}^{N\times 3}\) w.r.t. \(\mathbf{s}^{(j)}\) is composed of analytical partial derivatives obtained by
\[\mathbf{J}=\begin{bmatrix}\partial_{x}Q(\bar{\mathbf{s}}_{1}^{(j)},\mathbf{r }_{1})&\partial_{y}Q(\bar{\mathbf{s}}_{1}^{(j)},\mathbf{r}_{1})&\partial_{z}Q( \bar{\mathbf{s}}_{1}^{(j)},\mathbf{r}_{1})\\ \partial_{x}Q(\bar{\mathbf{s}}_{2}^{(j)},\mathbf{r}_{2})&\partial_{y}Q(\bar{ \mathbf{s}}_{2}^{(j)},\mathbf{r}_{2})&\partial_{z}Q(\bar{\mathbf{s}}_{2}^{(j)},\mathbf{r}_{2})\\ \vdots&\vdots&\vdots\\ \partial_{x}Q(\bar{\mathbf{s}}_{N}^{(j)},\mathbf{r}_{N})&\partial_{y}Q(\bar{ \mathbf{s}}_{N}^{(j)},\mathbf{r}_{N})&\partial_{z}Q(\bar{\mathbf{s}}_{N}^{(j)},\mathbf{r}_{N})\end{bmatrix} \tag{9}\]
which are computed for each iteration \(j\) until convergence. The estimated location \(\mathbf{s}^{\star}=[x^{\star},y^{\star},z^{\star}]^{\intercal}\) is selected via
\[\mathbf{s}^{\star}=\operatorname*{arg\,min}_{\mathbf{s}^{(j)}}\left\{\sum_{n=1 }^{N}Q\left(\mathbf{R}_{n}^{\intercal}(\mathbf{s}^{(j)}-\mathbf{c}_{n}), \mathbf{r}_{n}\right)\right\} \tag{10}\]
considering \(N\) ellipsoid surfaces.
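Eqs. (6)-(10) translate directly into a few lines of Python; the sketch below recovers a synthetic target from three round-trip path lengths by iterating (7), using a numerically estimated Jacobian in place of the analytical one of (9) and a pseudo-inverse so that \(N>3\) receivers are also handled. The sensor placement, step size and finite-difference step are illustrative assumptions.

```python
import numpy as np

def Q(s, u, v, toa):
    """Eq. (2) with the map (5) applied implicitly via projection onto the baseline."""
    c, axis = 0.5 * (u + v), (v - u) / np.linalg.norm(v - u)
    r_major = 0.5 * toa
    r_minor = 0.5 * np.sqrt(toa**2 - np.linalg.norm(v - u)**2)
    x = s - c
    par = x @ axis                                   # coordinate along the major axis
    perp2 = x @ x - par**2                           # squared transverse distance
    return perp2 / r_minor**2 + (par / r_major)**2 - 1.0

def intersect(u, vs, toas, s0, gamma=1.0, iters=50, h=1e-7):
    """Root finding of Eqs. (6)-(10) over all receiver ellipsoids."""
    s = np.asarray(s0, float)
    for _ in range(iters):
        F = np.array([Q(s, u, v, t) for v, t in zip(vs, toas)])      # Eq. (8)
        J = np.zeros((len(vs), 3))
        for i in range(3):                                           # finite-difference Jacobian
            ds = np.zeros(3); ds[i] = h
            J[:, i] = (np.array([Q(s + ds, u, v, t) for v, t in zip(vs, toas)]) - F) / h
        s = s - gamma * np.linalg.pinv(J) @ F                        # update, Eq. (7)
        if F @ F < 1e-18:
            break
    return s

# synthetic example: one emitter, three receivers, one target (illustrative geometry)
u = np.array([0., 0., 0.])
vs = [np.array([0.075, 0., 0.]), np.array([-0.04, 0.065, 0.]), np.array([-0.04, -0.065, 0.])]
s_true = np.array([0.05, -0.02, 0.18])
toas = [np.linalg.norm(s_true - u) + np.linalg.norm(s_true - v) for v in vs]
print(intersect(u, vs, toas, s0=[0., 0., 0.1]))      # converges towards s_true
```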
## IV Echo Correspondence
Until this stage, it is premised that the ellipsoid radii \(\mathbf{r}_{n}\) are drawn from detected echoes \(t_{n,k}^{\star}\) originating from the same distinct target. However, real-world scenes comprise complex topologies with reflections from multiple objects, resulting in several echoes per channel. In particular, echoes emanating from different targets yield false positive solutions \(\mathbf{s}^{\star}\). Hence, inter-channel echo assignment is a crucial, non-trivial prerequisite for the proposed echolocation scheme to work and is addressed hereafter. An overview of the architectural design for the echo association is outlined in Fig. 3.
### _Echo Feature Extraction_
The extraction of acoustic features has been widely explored using Generalized Cross-Correlation (GCC) methods, for instance, Phase Transform (PhaT) [11, 23, 24]. For reasons of phase invariance, we employ the MEMG model [22] instead as a viable starting point for reducing
Fig. 2: **Cross-sectional ellipsoid intersection** showing transmitter \(\mathbf{u}\) and receivers \(\mathbf{v}_{n}\) with radii \(r_{n}^{(a)}\), \(r_{n}^{(b)}\). Surface points \(\bar{\mathbf{s}}_{n}\) and location candidates \(\mathbf{s}^{(j)}\) span triangles (dotted lines) on the continuous ellipsoidal solution space (dashed curves).
echo information while the oscillation term is skipped here. Accordingly, an echo is modeled as \(m(\mathbf{p};t_{i})\) given by
\[m(\mathbf{p};t_{i})=\alpha\exp\left(-\frac{\left(t_{i}-\mu\right) ^{2}}{2\sigma^{2}}\right)\left(1+\text{erf}\left(\eta\frac{t_{i}-\mu}{\sigma \sqrt{2}}\right)\right) \tag{11}\]
with parameters \(\mathbf{p}=[\alpha,\mu,\sigma,\eta]^{\mathsf{T}}\in\mathbb{R}^{4\times 1}\). The \(\exp(\cdot)\) term is the uni-variate Gaussian distribution with \(\mu\) as the mean and \(\sigma\) as the standard deviation. While \(\alpha\) controls the echo amplitude, the exponentially modified term covers asymmetric shapes through \(\eta\), with \(\text{erf}(\cdot)\) being the error function. Summing over \(k\in\{1,2,\ldots,K\}\) echo components yields the multimodal distribution \(M\left(\mathbf{\hat{p}}_{n};t_{i}\right)\), which reads
\[M\left(\mathbf{\hat{p}}_{n};t_{i}\right)=\sum_{k=1}^{K}m(\mathbf{p}_{k};t_{i}) \tag{12}\]
where the frame vector \(\mathbf{\hat{p}}_{n}=[\mathbf{p}_{1}^{\mathsf{T}},\mathbf{p}_{2}^{\mathsf{T}},\ldots,\mathbf{p}_{K}^{\mathsf{T}}]^{\mathsf{T}}\in\mathbb{R}^{4K}\) concatenates each echo variable \(\mathbf{p}_{k}\) from frame \(n\). Each \(\mathbf{\hat{p}}_{n}^{\star}\) is estimated using an optimization framework that minimizes the energy \(L(\mathbf{\hat{p}}_{n})\) given by
\[L(\mathbf{\hat{p}}_{n})=\left\|y_{n}(t_{i})-M\left(\mathbf{\hat{p}}_{n};t_{i} \right)\right\|_{2}^{2} \tag{13}\]
for every frame \(n\). The Levenberg-Marquardt solver is used for minimization of (13) where Hessians are obtained from analytical Jacobians w.r.t. \(\mathbf{\hat{p}}_{n}^{(j)}\) at each iteration \(j\). The best approximated MEMG vector \(\mathbf{\hat{p}}_{n}^{\star}\) is given by
\[\mathbf{\hat{p}}_{n}^{\star}=\operatorname*{arg\,min}_{\mathbf{ \hat{p}}_{n}^{(j)}}\,\left\{L\left(\mathbf{\hat{p}}_{n}^{(j)}\right)\right\} \tag{14}\]
which carries echo component estimates \(\mathbf{p}_{n,k}^{\star}\in\mathbf{\hat{p}}_{n}^{\star}\). For more details on robust MEMG convergence, the interested reader is referred to the original paper [22]. As in [22], \(\mathbf{p}_{n,k}^{\star}\) are extended by the hand-crafted frame confidence \(C_{n}\), echo confidence \(c_{n,k}\) as well as ToA \(t_{n,k}^{\star}\) and echo power \(p_{n,k}\), which is obtained by
\[p_{n,k}=\sum_{i=1}^{T}m(\mathbf{p}_{n,k}^{\star};t_{i})\,,\quad\forall n,k \tag{15}\]
so that \(\mathbf{\tilde{p}}_{n,k}^{\star}=[\mathbf{p}_{n,k}^{\star}\mathbf{{}^{\mathsf{ T}}},c_{n,k},p_{n,k},t_{n,k}^{\star},C_{n}]\in\mathbb{R}^{1\times 8}\) serves as the input for the subsequent echo association.
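The echo model (11)-(15) maps directly onto a short Python sketch; the synthetic two-echo frame, the noise level and the initial guess below are illustrative assumptions, while the Levenberg-Marquardt fit mirrors the solver choice described in the text.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import least_squares

def memg_echo(p, t):
    """Single exponentially modified Gaussian echo, Eq. (11); p = [alpha, mu, sigma, eta]."""
    alpha, mu, sigma, eta = p
    g = alpha * np.exp(-(t - mu)**2 / (2 * sigma**2))
    return g * (1 + erf(eta * (t - mu) / (sigma * np.sqrt(2))))

def memg_frame(p_hat, t):
    """Multimodal mixture, Eq. (12); p_hat stacks K echo parameter blocks of length 4."""
    return sum(memg_echo(p_k, t) for p_k in np.asarray(p_hat).reshape(-1, 4))

# synthetic frame with two echoes, then a Levenberg-Marquardt fit (Eqs. 13-14)
t = np.arange(64, dtype=float)                              # T = 64 samples as in the prototype
p_true = np.array([1.0, 20.0, 2.0, 1.5,                     # echo 1: alpha, mu, sigma, eta
                   0.6, 42.0, 3.0, 0.5])                    # echo 2
y = memg_frame(p_true, t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

p0 = np.array([0.8, 18.0, 2.5, 1.0, 0.5, 45.0, 2.5, 1.0])   # rough initial guess
fit = least_squares(lambda p: y - memg_frame(p, t), p0, method='lm')
p_star = fit.x.reshape(-1, 4)
power = [memg_echo(p_k, t).sum() for p_k in p_star]         # echo power, Eq. (15)
print(p_star, power)
```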
### _Feature Correspondence_
According to the Siamese correspondence architecture outlined in Fig. 3, echo features \(\mathbf{\tilde{p}}_{n,k}^{\star}\) are fed into a Multi-Layer Perceptron (MLP) for echo selection and correspondence decision-making. The scalar output of each MLP reads
\[b_{k}^{(n)}=h_{4}(h_{3}(h_{2}(h_{1}(\mathbf{\tilde{p}}_{n,k}^{ \star}))))\,,\quad\forall n,k \tag{16}\]
where \(h_{l}(\cdot)\) denote MLP function layers at indices \(l\in\{1,2,3,4\}\). Each layer \(h_{l}(\cdot)\) is equipped with trainable weights \(\mathbf{W}_{l}\) and activated by a Rectified Linear Unit (ReLU) except for \(h_{4}(\cdot)\), which is followed by the sigmoid function. Learnable weight dimensions correspond to \(\mathbf{W}_{1}\in\mathbb{R}^{8\times 32}\), \(\mathbf{W}_{2}\in\mathbb{R}^{32\times 32}\), \(\mathbf{W}_{3}\in\mathbb{R}^{32\times 4}\) and \(\mathbf{W}_{4}\in\mathbb{R}^{4\times 1}\) with respective bias weights. The Binary Cross Entropy (BCE) is employed to learn predictions \(b_{k}\) during training via
\[\mathcal{L}_{\text{B}}(Y_{k},b_{k})=\sum_{k=1}^{K}-(Y_{k}\log(b_{ k})+(1-Y_{k})\log(1-b_{k})) \tag{17}\]
where \(Y_{k}\in\{0,1\}\) represents ground-truth binary labels for each echo \(k\) and channel index \(n\) while the latter is omitted in loss functions for the sake of readability. The BCE loss helps classify an appropriate reference echo \(\mathbf{\tilde{p}}_{\tau}^{\star}\) via
\[\mathbf{\tilde{p}}_{r}^{\star}=\operatorname*{arg\,min}_{\mathbf{ \tilde{p}}_{n,k}^{\star}}\,\left\{h_{4}(h_{3}(h_{2}(h_{1}(\mathbf{\tilde{p}}_{n,k}^{\star}))))\right\} \tag{18}\]
across all channels \(n\) and echoes \(k\). The sought echo correspondence is established through a dissimilarity score \(d_{k}^{(n)}\) between learned Siamese feature layer embeddings given by
\[d_{k}^{(n)}=\left\|h_{3}(h_{2}(h_{1}(\mathbf{\tilde{p}}_{r}^{ \star})))-h_{3}(h_{2}(h_{1}(\mathbf{\tilde{p}}_{n,k}^{\star})))\right\|_{2}\,, \,\,\forall n,k \tag{19}\]
where \(\left\|\cdot\right\|_{2}\) denotes the Euclidean norm of the embedding difference. The dissimilarity \(d_{k}\) indicates how reliably a selected echo matches the reference \(\mathbf{\tilde{p}}_{r}^{\star}\). This metric is used during training through the contrastive loss initially postulated by Hadsell _et al._[25] and given as
\[\mathcal{L}_{\text{C}}(Y_{k},d_{k})=\sum_{k}^{K}\frac{(1-Y_{k}){ d_{k}}^{2}}{2}+\frac{Y_{k}\left\{\max\{0,q-{d_{k}}^{2}\}\right\}}{2} \tag{20}\]
for all \(n\) where \(q>0\) is the margin regulating the border radius. For training, the total loss is
\[\mathcal{L}_{\text{T}}(Y_{k},b_{k},d_{k})=\lambda_{\text{C}}\mathcal{L}_{\text{C }}(Y_{k},d_{k})+\lambda_{\text{B}}\mathcal{L}_{\text{B}}(Y_{k},b_{k}) \tag{21}\]
where weights \(\lambda_{\text{C}}\) and \(\lambda_{\text{B}}\) determine the loss ratio.
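A compact PyTorch sketch of the architecture (16)-(21) is given below. The placeholder feature vectors, the label convention (following the Hadsell form of (20), a label of 0 marks a corresponding pair), and the inclusion of the ReLU after \(h_{3}\) in the compared embedding are assumptions made for illustration; the loss weights \(\lambda_{\text{C}}=1\) and \(\lambda_{\text{B}}=10\) follow the training settings reported in Section V.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EchoMLP(nn.Module):
    """Four-layer MLP of Eq. (16) with the weight shapes W_1..W_4 given in the text."""
    def __init__(self):
        super().__init__()
        self.h1, self.h2 = nn.Linear(8, 32), nn.Linear(32, 32)
        self.h3, self.h4 = nn.Linear(32, 4), nn.Linear(4, 1)

    def embed(self, p):                       # feature embedding compared in Eq. (19)
        return F.relu(self.h3(F.relu(self.h2(F.relu(self.h1(p))))))

    def forward(self, p):                     # scalar prediction b_k of Eq. (16)
        return torch.sigmoid(self.h4(self.embed(p)))

def contrastive_loss(y, d, q=1.0):
    """Eq. (20): pull matching pairs (y = 0) together, push others beyond the margin q."""
    return 0.5 * ((1 - y) * d**2 + y * torch.clamp(q - d**2, min=0.0)).sum()

model = EchoMLP()
opt = torch.optim.Adam(model.parameters(), lr=5e-4)        # learning rate as in Section V
p_ref = torch.randn(8)                  # reference echo features (placeholder data)
p_cand = torch.randn(5, 8)              # K = 5 candidate echo feature vectors (placeholder)
labels = torch.tensor([0., 1., 1., 1., 1.])                # 0 marks the assumed correspondence

opt.zero_grad()
b = model(p_cand).squeeze(-1)                              # Eq. (16)
d = torch.linalg.norm(model.embed(p_cand) - model.embed(p_ref), dim=-1)   # Eq. (19)
loss = 1.0 * contrastive_loss(labels, d) + 10.0 * F.binary_cross_entropy(b, labels)  # Eq. (21)
loss.backward()
opt.step()
```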
Fig. 3: **Echo correspondence architecture** where each pre-processed detector channel data \(y_{n}(t_{i})\) undergoes MEMG optimization [22], providing features fed into a Siamese MLP stack. The overall training loss \(\mathcal{L}_{\text{T}}\) aggregates the Binary Cross-Entropy (BCE) loss \(\mathcal{L}_{\text{B}}\) and the contrastive loss \(\mathcal{L}_{\text{C}}\) for Back-Propagation (BP).
## V Experimental Work
### _Prototyping_
A prototype sensor device is built from \(N=4\) airborne Micro-Electro-Mechanical Systems (MEMS) transducers offering low power consumption and a compact form factor. Pulse emission and echo reception benefit from a 180\({}^{\circ}\)-wide field-of-view that enables omni-directional tracking. The transducers operate at 175 \(\mathrm{kHz}\), whereas each receiver captures \(T=64\) samples at a frequency of 22 \(\mathrm{kHz}\). Figure 5 depicts an ellipsoid intersection where the receiver positions span an equilateral triangle with the emitter located at its centroid, a point-symmetric arrangement. The spacing between each transducer and the emitter is a radius \(B=75\)\(\mathrm{mm}\).
### _Dataset and Training_
For data acquisition, a six-axis, vertically-articulated robot arm (Meca500) is employed to navigate a convex target to Ground-Truth (GT) positions (see Fig. 1). To suppress reflections from the robot, captures from an empty run in the absence of the target are subtracted from the frames. A dedicated training and validation set of 302 frames is captured by \(N=4\) sensors, where each acquisition contains at least 3 detected echoes per channel, yielding approximately \(3600\) EMG components for training overall. From this, a fraction of 0.3 is reserved as a validation set. Labels \(Y_{k}\) are inferred by projecting GT positions as GT ToAs \(\mu_{gt}\) in the time domain. Only a single MLP is trained since the Siamese networks share weights. Using an Adam optimizer, a learning rate of \(5\times 10^{-4}\) has been shown to perform best. The frame batch size is 1, whereas the losses of every \(k\)-th echo are back-propagated at each step. Weights are chosen to be \(\lambda_{\mathrm{C}}=1\) and \(\lambda_{\mathrm{B}}=10\) to balance the numerical loss gap. To prevent over-fitting, the maximum number of epochs is limited by early stopping criteria with \(\text{tolerance}=5\) and \(\text{min. delta}=0\).
### _Quantitative Results_
Numerical object localization results from the acquired test data taken with a robot from Fig. 1 are provided in Table I.
An important observation from this experiment is a tendency towards larger errors with increasing radial distance from the \(xy\)-origin \(\mathbf{u}=[0,0,z]^{\intercal}\) of the sensor device. To make this visible, Fig. 4 depicts cross-sectional projections of the results. The radial error in \((x,y)\) is expected, as minor deviations in relative ToAs (i.e., TDoAs) produce an enormous impact when jointly propagated to the \(xy\)-plane during an ellipsoid intersection. Since the prototype sensor arrangement is radially symmetric, the error distribution is point-symmetric around the centre \(\mathbf{u}\) in the ideal case. It is also essential to consider that deviations
\begin{table}
\begin{tabular}{c c c|c c c|c c} \multicolumn{2}{c|}{Ground-Truth \(\mathbf{s}\)} & \multicolumn{2}{c|}{Estimates \(\mathbf{s}^{\star}\)} & \multicolumn{2}{c}{RMSE} \\ \(x\) & \(y\) & \(z\) & \(x^{\star}\) & \(y^{\star}\) & \(z^{\star}\) & [mm] & [\%] \\ \hline \hline -80.0 & -80.0 & 100.0 & -93.9 & -89.2 & 77.1 & 28.3 & 18.7 \\ -80.0 & -80.0 & 180.0 & -69.6 & -60.1 & 195.6 & 27.4 & 12.9 \\ -80.0 & 0.0 & 100.0 & -71.8 & -8.9 & 107.9 & 14.4 & 11.3 \\ -80.0 & 0.0 & 180.0 & -86.0 & 19.1 & 176.5 & 20.4 & 10.3 \\ -80.0 & 80.0 & 100.0 & -69.8 & 65.2 & 113.5 & 22.5 & 14.9 \\ -80.0 & 80.0 & 180.0 & -110.4 & 84.2 & 167.6 & 33.1 & 15.6 \\ 0.0 & -80.0 & 100.0 & -1.2 & -85.3 & 98.5 & 5.7 & 4.4 \\ 0.0 & -80.0 & 180.0 & 17.4 & -76.7 & 182.6 & 17.9 & 9.1 \\ 0.0 & 0.0 & 100.0 & 3.9 & 4.2 & 105.0 & 7.6 & 7.6 \\ 0.0 & 0.0 & 180.0 & 8.2 & -0.6 & 178.6 & 8.4 & 4.7 \\ 0.0 & 80.0 & 100.0 & -13.7 & 66.4 & 105.1 & 20.0 & 15.6 \\ 0.0 & 80.0 & 180.0 & -19.6 & 49.0 & 189.9 & 38.0 & 19.3 \\ 80.0 & -80.0 & 100.0 & 81.9 & -72.2 & 102.6 & 8.5 & 5.6 \\ 80.0 & -80.0 & 180.0 & 93.6 & -17.8 & 192.8 & 64.9 & 30.5 \\ 80.0 & 0.0 & 100.0 & 69.9 & 7.0 & 106.9 & 14.1 & 11.0 \\ 80.0 & 0.0 & 180.0 & 59.6 & -11.6 & 182.0 & 23.6 & 12.0 \\ 80.0 & 80.0 & 100.0 & 68.3 & 87.7 & 97.8 & 14.2 & 9.4 \\ 80.0 & 80.0 & 180.0 & 100.0 & 119.4 & 155.9 & 50.4 & 23.7 \\ \hline \hline \end{tabular}
\begin{tabular}{c c|c c c|c c c} \multicolumn{2}{c|}{Mean} & \multicolumn{2}{c}{23.3} & \multicolumn{2}{c}{13.1} \\ \multicolumn{2}{c|}{St.} & \multicolumn{2}{c}{\(15.1\)} & \multicolumn{2}{c}{\(6.6\)} \\ \hline \hline \end{tabular}
\end{table} TABLE I: Experimental 3-D localization results in \(\mathrm{mm}\)
Fig. 4: **Experimental object localization results** showing 18 position estimates \(\mathbf{s}^{\star}\) in the \([-80\ \mathrm{mm},0\ \mathrm{mm},80\ \mathrm{mm}]\) interval of the (\(xy\))-plane and \([100\ \mathrm{mm},180\ \mathrm{mm}]\) interval in \(z\)-direction. The left diagram shows \(\mathbf{s}^{\star}\) in 3-D, whereas adjacent plots depict 2-D projections of the same. Colors illustrate the individual RMSE while dashed circles represent the mean RMSE.
Fig. 5: **Localization result based on ellipsoid intersection** showing the solution \(\mathbf{s}^{\star}\), ground-truth position, transmitter \(\mathbf{u}\) and receivers \(\mathbf{v}_{n}\) with respective ellipsoids spanned by ToAs.
are specific to the sensor hardware. This includes a limited temporal resolution (\(T=64\)) as well as potential mechanical sensor misalignments. Also, minor object movements caused sonic interference in the experiment, letting target echoes fluctuate and almost disappear in certain frames. Besides that, echo detection and MEMG regression occasionally fuse closely overlapping echoes into a single detected component.
To set the results from Table I in a broader context, a fair comparison of the proposed model with state-of-the-art methods would involve using the same sensor hardware and setup, which exceeds the scope of this study. Instead, error measures from closely related experiments are reported hereafter. Recently, Juan and Hu [19] presented a 2-D finger position tracking with an RMSE of \(0.7\pm 0.5\) cm using an extended Kalman filter for 6 transducers, each running at 40 kHz. The 3-D object tracking device released by manufacturer Toposens [26] achieves \(1.0\pm 2.5\) cm errors by correlating bounced-off phase signals from 3 receiving transducers running at 40 kHz, which are perpendicularly placed to each other in the range of the wavelength. Given that these deviations are from different sensor hardware, the results in Table I are within the expected error range.
### _Echo association_
Another crucial premise for PaCE to work is the proposed echo correspondence solver, which gives a promising \(F_{1}\)-score of 1.0 on the test data from Table I, where all 18 echo correspondences are matched correctly. Figure 6 depicts an exemplary correspondence data sample.
Table II provides an overview of each module's impact on the overall matching performance of the proposed framework. The ablation study of the echo correspondence network is carried out by substituting the MLP from (17) with an \(\arg\max\) operator over the amplitude scale \(\alpha_{k}\), and the contrastive loss from (20) with the Munkres (also known as Hungarian) algorithm. The impact of the MEMG features from eqs. (11) to (15) is evaluated by replacing them with the ToA \(t_{k}^{*}\) from (1) and its amplitude value \(\alpha_{k}^{*}\). Table II demonstrates that the MLP and contrastive loss outperform the Munkres-based echo association in accuracy. Furthermore, MEMG features are more reliable correspondence indicators than ToAs alone. A suitable alternative to MEMG, e.g., based on learned convolutions, is yet to be devised since existing methods in the field (e.g., PhaT [23, 24]) employ waveform data.
## VI Conclusions
With robots manoeuvring in 3-D space, tomorrow's computational vision requires reliable recognition of surroundings under difficult lighting conditions. Through quantitative assessment of actual targets, this study demonstrates that the proposed PaCE model facilitates 3-D tracking by at least 3 arbitrarily located ToF detectors and one emitter without phase information. A broad variance of RMSEs is observed, caused by relative ToA displacements owing to sensor characteristics and by occasional correspondence mismatches.
Once PaCE reaches maturity, it will serve as a considerable and cost-effective alternative to beamforming, joining the ranks of Delay-And-Sum (DAS), Synthetic Aperture Focusing Technique (SAFT), and Direction-of-Arrival (DoA) algorithms. Although the results presented here are not yet comparable to the former, more established principles, this feasibility study lays the groundwork for more active research to come.
Follow-up studies will further investigate the performance of PaCE in an extensive experimental analysis, for instance, when ported to other sensor hardware. Its ability to localize several objects simultaneously will be an essential milestone. Exploiting the temporal domain via target tracking will stabilize the proposed localization scheme. Another central research question will be how PaCE can support SLAM as part of a multi-modal data fusion scheme.
## Acknowledgment
The author hereby thanks Milica Bulatovic for kindly sharing the laboratory equipment as well as Meret Ruch and Urs Rohrer for helping with the prototype's mechanical design and assembly. The author is also grateful to Raphael Sznitman for his invaluable advice throughout this research project at the ARTORG Center. This project is funded by the Hasler Foundation under number 22027 and the author's gratitude is also extended to the foundation for their trust and support.
\begin{table}
\begin{tabular}{l|c|c|c|c} Features & MEMG & MEMG & \([t_{k}^{*},\alpha_{k}^{*}]\) & MEMG \\ \hline Reference & \(\arg\max(\alpha_{k})\) & MLP & MLP & MLP \\ \hline Association & Munkres & Munkres & Contrastive & Contrastive \\ \hline \hline Accuracy & 0.7778 & 0.8889 & 0.8889 & 1.0000 \\ \(F_{1}\)-score & 0.5173 & 0.4682 & 0.9866 & 1.0000 \\ \end{tabular}
\end{table} TABLE II: Ablation overview for echo association |
2301.03373 | Chatbots As Fluent Polyglots: Revisiting Breakthrough Code Snippets | The research applies AI-driven code assistants to analyze a selection of
influential computer code that has shaped modern technology, including email,
internet browsing, robotics, and malicious software. The original contribution
of this study was to examine half of the most significant code advances in the
last 50 years and, in some cases, to provide notable improvements in clarity or
performance. The AI-driven code assistant could provide insights into
obfuscated code or software lacking explanatory commentary in all cases
examined. We generated additional sample problems based on bug corrections and
code optimizations requiring much deeper reasoning than a traditional Google
search might provide. Future work focuses on adding automated documentation and
code commentary and translating select large code bases into more modern
versions with multiple new application programming interfaces (APIs) and
chained multi-tasks. The AI-driven code assistant offers a valuable tool for
software engineering, particularly in its ability to provide human-level
expertise and assist in refactoring legacy code or simplifying the explanation
or functionality of high-value repositories. | David Noever, Kevin Williams | 2023-01-05T23:17:17Z | http://arxiv.org/abs/2301.03373v1 | # Chatbot's as Fluent Polyglots:
###### Abstract
The research applies AI-driven code assistants to analyze a selection of influential computer code that has shaped modern technology, including email, internet browsing, robotics, and malicious software. The original contribution of this study was to examine half of the most significant code advances in the last 50 years and, in some cases, to provide notable improvements in clarity or performance. The AI-driven code assistant could provide insights into obfuscated code or software lacking explanatory commentary in all cases examined. We generated additional sample problems based on bug corrections and code optimizations requiring much deeper reasoning than a traditional Google search might provide. Future work focuses on adding automated documentation and code commentary and translating select large code bases into more modern versions with multiple new application programming interfaces (APIs) and chained multi-tasks. The AI-driven code assistant offers a valuable tool for software engineering, particularly in its ability to provide human-level expertise and assist in refactoring legacy code or simplifying the explanation or functionality of high-value repositories.
Transformers, Text Generation, Malware Generation, Generative Pre-trained Transformers, GPT
## 1 Introduction
The latest generation of artificial intelligence (AI) and chat applications [1-13] shows particular promise as software generators [4, 11], presenting a new interactive way to learn complex coding principles [6], comment on existing code in multiple languages [8], and generally serve as coding assistants [8-12]. Recent efforts by OpenAI have put large language models (LLMs) into public access [1-2]. As an experimental platform, particularly for understanding software principles, its interactive chat [1] simulates a vast knowledge base, expert role-playing, and long-term memory spanning 8000 tokens, or approximately 20-25 pages of generated text. Several tests or benchmarks, such as QuixBugs [8] and HackerRank [12], have demonstrated the potential of generative coders as software assistants [10]. A recent review from the University of Washington and Microsoft Research [14] estimated that 1.2 million coders currently use OpenAI's copilot for tasks formerly requiring searches, such as code completion, commentary, or bug detection.
The present research seeks to understand the chatbot interface as a way for programmers to ask diverse and challenging questions of a knowledge base trained on internet-scale data but contextualized across both domain and style expertise that separates its writing quality from previous personal assistants. One observation to highlight in this analysis stems from the obviousness of history in retrospect. While a web user today might see the rise of computer codes as an inevitable consequence of networked nodes communicating securely, the inventors of the last fifty years had no such foresight into the future of image-based browsers, secure email, e-commerce sites, or malware hurdles. In this context, applying an LLM to revisit historically significant code innovations provides a novel benchmark for future improvements in coding assistants.
## 2 Methods
The experimental approach builds on the survey of software history's most consequential computer code [15]. As shown in Figure 1, one of the latest contributions to influential code is the transformer architecture (first proposed by Google Brain, Google Research, and the University of Toronto) [16], subsequently adopted by the AI community to build ground-breaking natural language [13] and vision models. In the example, OpenAI's latest chat interface [1] answers a question to code itself by creating a large language model unit that can scale to the billions of parameters used to build ChatGPT.
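To give a concrete sense of the building block involved, the heart of that architecture, scaled dot-product attention, fits in a few lines. The sketch below is our own illustrative numpy version (the array sizes and seed are arbitrary choices), not the code ChatGPT generated in Figure 1:

```
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K' / sqrt(d_k)) V (illustrative sketch only)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)        # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((4, 8))                  # 4 tokens, 8-dimensional embeddings
print(scaled_dot_product_attention(Q, K, V).shape)       # (4, 8)
```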
The paper explores this hypothesis by surveying other historical code snippets that prove both small enough to prompt the chat interface as viable questions and sufficiently consequential to require elaboration. The 2019 collaboration between Arizona State University, Slate, and New America highlighted short lines of code or concise snippets in their article _"The Lines of Code That Changed Everything"_ [15]. The survey asked 75 respondents to "pick the pieces of code that had a huge influence or warped our lives." The results showcased 36 example snippets, from the first code (1725) to the Boeing 737 Max takeoff and landing software (2017) that triggered nosedives and killed hundreds.
Tasks 1-19 define code snippets in 19 sample problems surveyed by Future Tense [15] as small code sections that changed the modern technological world over a half-century from 1961 to 2014. The experimental format includes defining the task, presenting ChatGPT with a code snippet, and then directing OpenAI's model to elaborate on or explain this breakthrough software step. Appendices A-B provide examples of how deep a coding chatbot can go in analyzing software snippets when repeatedly questioned or probed for additional suggestions. In some cases, the code is famous and recognizable enough that a Google search might similarly point the user to a human-curated interpretation of the code's function on an assistance site like StackOverflow.com or StackExchange. In other cases, the highly obfuscated code of some encryption algorithms receives deep analysis that even expert coders in multiple languages might struggle to produce. Examples include the calculation originally proposed for Bitcoin's probability of compromise and the export-controlled code that spawned the launch of secure e-commerce.
Our approach treats ChatGPT as a new scientific instrument that might turn out historically to rival the telescope or microscope in its future incarnations when AI provides increasingly sophisticated access to not just software solutions but general capabilities for problem-solving or scientific and mathematical proofs [17]. We apply this new instrument to 19 challenge problems with a minimal hint to ChatGPT regarding their ultimate historical significance. Some code snippets correspond to image compression (JPEG), which enables the online transition from text to image-based browsing. Other code sections only hint at their
Figure 1: Example coding task to self-invent a large language model.
ultimate role in computing museums as the first malicious worm (Morris), one-line virus (fork bomb), or pernicious security vulnerability (Heartbleed) that, upon first discovery, rendered two-thirds of the websites vulnerable overnight.
The format of the paper first summarizes the task as a benchmark problem to solve. The prompt to ChatGPT asks minimally, "what does this code do?" or "comment on this obfuscated function." We refer to any hints using the term glossator, meaning to "speak, talk or chat" and derived from the Greek for "speaking in tongues" [18]. The heuristic term underscores that, at present, these models are so large and broad that few can deeply understand whether the model technically knows what it is predicting as the next token in a long-trained memory chain or whether it genuinely can be probed for increasingly detailed analysis [19] that rivals the best computer experts in multiple languages. One notable and accepted skill for ChatGPT is its robust capabilities as a software polyglot. We demonstrate that ground-breaking code can be translated into underserved examples drawn from a relatively obscure sampling of 8,945 computing languages [20], including Curry, Fish, and Glee [21]. For some languages like COBOL or FORTRAN, this translation ease may provide early profitable applications to modernize legacy code that still dominates the financial industry (COBOL) [22] or high-performance scientific libraries (FORTRAN) [23].
## 3 Results
The work presents each task as an initial challenge of known importance and then revisited with coding assistance from ChatGPT. In all cases, the chat interface provides sufficient context [24] that if the initial response proves less insightful, further probing reveals that the LLM effectively comments, explains, or implements as a software polyglot.
### 3.1 Consequential Code Exploration
Task 1 asks ChatGPT to explain the Assembly code underpinning the first distributed video game, MIT's version of Spacewars!
**Task 1:**_Revisit the first distributed video game in Assembly (1961), MIT DEC PDP-1 Computer, as demonstrated regularly at the Computer History Museum_
**Prompt and code:** What does this code do?

hp1, dap hp2

**Glossator:** It plays spacewars in which two players control spaceships, the needle and the wedge, and engage in a one-on-one space battle while avoiding a gravity well of a star at the center of the screen.

**Response:** Based on this additional context, it looks like the code you provided is handling the movement of one of the spaceships in the game, possibly the "needle" spaceship. It appears to be using some randomness in its movement, and there are references to certain conditions (such as "m11" and "mb1") that may be relevant to the game's mechanics. It is also possible that the code is interacting with other parts of the game, such as handling the display of the spaceship or processing input from the player controlling the spaceship. Without more information about the specific game mechanics and the purpose of the different variables and instructions in the code, it is difficult to say exactly what the code is doing.

**Glossator:** can you translate it to a python spacewars game doing approximately the same task

**Response:** Sure, here is a rough translation of the code you provided into Python that accomplishes roughly the same task: _(Appendix A code in python)_
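The full translation appears in Appendix A. As a rough illustration of the kind of update loop such a Python port contains, a hypothetical sketch of ours (not ChatGPT's generated listing) reduces the gravity-well physics to a few lines; the starting position, gravitational constant, and time step are made-up values:

```
import math

def step(x, y, vx, vy, dt=0.1, g=500.0):
    """Advance one ship by dt under the central star's gravity well at the origin
    (illustrative physics only; thrust, torpedoes, and display are omitted)."""
    r2 = x * x + y * y
    r = math.sqrt(r2)
    ax, ay = -g * x / (r2 * r), -g * y / (r2 * r)   # inverse-square pull toward the star
    vx, vy = vx + ax * dt, vy + ay * dt
    return x + vx * dt, y + vy * dt, vx, vy

state = (100.0, 0.0, 0.0, 2.0)                       # start the "needle" to the right, drifting upward
for _ in range(5):
    state = step(*state)
print(state)
```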
Task 2 asks ChatGPT to explain the COBOL code underpinning the early origins of a networked email system.
Task 3 asks ChatGPT to explain the Assembly code underpinning the memory manager and task buffer from the Apollo 11 computer and lunar lander.
**Task 3:**_Revisit the Apollo 11 Lunar Module's BAILOUT Code (1969), the code that kept the lunar module's computer from running out of space in space_
**Prompt and code:** can you tell me what this code does?

POODOO INHINT
TS ALMCADR
TC BANKCALL
CADR VAC5STOR  # STORE ERASABLES FOR DEBUGGING PURPOSES.
INDEX ALMCADR
CAF 0
ABORT2 TC BORTENT
OCT77770 OCT 77770  # DONT MOVE
CA V37FLBIT  # IS AVERAGE G ON
MASK FLAGWRD7
CCS A
TC WHIMPER -1  # YES. DONT DO POODOO. DO BAILOUT.
TC DOWNFLAG
ADRES STATEFLG
TC DOWNFLAG
ADRES REINTFLG
TC DOWNFLAG
ADRES NODOFLAG
TC BANKCALL

**Glossator:** It's part of the memory manager for the Apollo lunar lander guidance system

**Response:** This code appears to be written in a variant of Assembly programming language and appears to be for use in a guidance or navigation system of some kind. It seems to be setting a flag to indicate that a certain operation (called "POODOO") has been initiated, and it is also storing certain data for debugging purposes. It appears to be checking if a certain condition (indicated by the "V37FLBIT" variable) is met, and if it is not met, the code sets some flags to specific values and calls a subroutine called "MR.KLEAN" before exiting. Without more context, it is difficult to say exactly what the code is doing or what the variables and subroutines in the code represent.

**Glossator:** when the computer runs out of space or overflows, the program triggers BAILOUT to prioritize more important data

**Response:** It sounds like the "BAILOUT" operation is a mechanism that is used to free up memory space or to prevent buffer overflows in the Apollo lunar lander guidance system. It may be triggered when the computer runs out of space or when the memory usage of the guidance system exceeds a certain threshold. When "BAILOUT" is triggered, the program may prioritize certain data or operations as more important and temporarily halt or suspend less important tasks or data in order to make room in the memory. The specific behavior of the "BAILOUT" operation and the
Task 4 asks ChatGPT to explain the familiar code underpinning the programming introduction, hello world, in multiple obscure languages, including Glee, Fish, and Curry.
**Task 4:**_Revisit Hello World Code (1972), the phrase that has introduced generations to code_
**Prompt and code:** explain what this code might be used for?

main( ) { printf("hello, world\n"); }

**Response:** This code is a simple example of a C program that prints the text "hello, world" to the console. When the program is compiled and run, it will output the text "hello, world" followed by a newline character. This code is often used as a starting point for learning how to program in C, or as a simple example to test that a C compiler is working correctly.

**Glossator:** rewrite the code in the Curry language

**Response:** Here is the equivalent code in the Curry programming language:

main = putStrLn "hello, world"

This code defines a function called "main" that takes no arguments and outputs the text "hello, world" to the console, followed by a newline character. In Curry, functions are defined using the "=" operator, and the "putStrLn" function is used to output a string to the console followed by a newline character.
Task 5 asks ChatGPT to explain the buffer overflow attack and various potential fixes for memory managers.
**Task 5:**_Revisit the Null Terminated String (1972), the most catastrophic design bug in the history of computing_
**Prompt and code:** tell me what might go wrong with the following C code:

char yellow[26] = {'y', 'e', 'l', 'l', 'o', 'w', '\0'};

**Response:** Here are a few things that might go wrong with the given C code:
If the array "yellow" is accessed outside of its bounds, it could lead to a buffer overflow or segmentation fault. If the array "yellow" is not null-terminated, it could cause issues with functions that expect null-terminated strings.
If the array "yellow" is accessed after it goes out of scope (e.g. after the function it is defined in returns), it could lead to undefined behavior.
If the array "yellow" is not properly initialized, it could contain garbage values that might cause issues if the array is used for a specific purpose.
There may be other issues that could arise with this code as well, depending on how it is used and the specific environment it is running in.
**Glossator:** how could it be exploited in a buffer overflow attack
**Response:** In the case of the code you provided, a buffer overflow attack could potentially occur if the array "yellow" is used to store a string that is longer than 7 characters, and that string is passed to a function that does not properly check the length of the string before copying it into a fixed-size buffer. If the string is longer than the buffer, the extra data will overwrite adjacent memory locations, potentially allowing an attacker to execute malicious code or access sensitive data.
**Glossator:** in single line of code, can i fix it?
**Response:** Here is a single line of code that you could use to fix the issue of the array being too small to hold the null terminator:
char yellow[27] = {'y', 'e', 'l', 'l', 'o', 'w', '\0'}; This code declares an array "yellow" with 27 elements, which is large enough to hold the 6 characters in the string "yellow" plus the null terminator. This will prevent the array from overflowing and help to avoid buffer overflow vulnerabilities.
However, keep in mind that this is just a simple fix for the specific issue of the array being too small to hold the null terminator. There are other potential issues with the code that may still need to be addressed, such as the possibility of the array being used to store strings that are too long or the array being accessed using out-of-bounds indices. To fully address all potential issues with the code, you may need to use additional lines of code or implement more robust input validation and error handling.
Task 6 asks ChatGPT to explain the rounding up and down errors that biases many addition and multiplication steps.
**Task 6:**_Revisit the Vancouver Stock Exchange's Rounding Error (1982), the minor decimal distinction with a major cost causing the electronic stock index to lose half its original value after 23 months in operation_
**Prompt and code:** How could these two lines of code produce drastically different output
"return floor(value)" vs. "return round(value)" when doing many addition operations
**Response:** The "floor" function rounds a number down to the nearest integer, while the "round" function rounds a number to the nearest integer using standard rounding rules. These functions can produce different results when applied to the same number
**Glossator:** explain how it could affect the financial markets if implemented to evaluate the electronic index value summed over many trades
**Response:** For example, if the "floor" function is used to round the index value down to the nearest integer, it could result in the reported index value being slightly lower than it would be if the "round" function or another rounding method was used. This could potentially lead to discrepancies in the reported prices of the underlying assets and could affect the decisions of traders and investors. Overall, it is important to carefully consider the implications of using any rounding function in the financial markets, and to choose a method that is transparent, accurate, and fair to all market participants.
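The cumulative effect is easy to reproduce. The tiny simulation below is our own illustrative Python with made-up numbers (starting index, volatility, and number of recalculations are all assumptions), not the exchange's actual data; it shows how truncating at every recalculation drifts steadily while rounding stays unbiased:

```
import random

random.seed(0)
index_floor = index_round = 1000.0
for _ in range(3000):                                        # thousands of intraday recalculations
    change = random.gauss(0.0, 0.05)                         # small mark-to-market adjustment
    index_floor = int((index_floor + change) * 1000) / 1000  # truncate to three decimals
    index_round = round(index_round + change, 3)             # round to three decimals

# Truncation discards about 0.0005 per update on average, so the first index drifts lower.
print(index_floor, index_round)
```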
Task 7 asks ChatGPT to explain the first significant cyberattack based on the Morris Worm.
**Task 7:**_Revisit the Morris Worm (1988), what has been called the "first major attack on the Internet." Approximately 10 percent of the 60,000 computers connected to the internet were hit, causing millions of dollars of damage and leading the New York Times to print the words "the Internet" for the first time._
**Prompt and code:** Please explain what the following code does:

```
checkother()   /* 0x57d0 */
{
    int s, l8, l12, l16, optval;
    struct sockaddr_in sin;    /* 16 bytes */

    optval = 1;
    if ((random() % 7) == 3)
        return;    /* 612 */
    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return;
```
**Response:** This code appears to be a function in the C programming language that creates a socket and sets a socket option. A socket is a communication endpoint that is used to send and receive data over a network. The socket is created using the "AF_INET" domain (which stands for "Internet Address Family"), the "SOCK_STREAM" type (which indicates that the socket will use a stream-based protocol, such as TCP), and a protocol value of 0 (which specifies that the default protocol should be used). If the "socket" function is successful, it returns a socket descriptor that is stored in the "s" variable. If the "socket" function fails, it returns a negative value and the function immediately returns.
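The socket setup that the response identifies has a direct modern analogue. The rough Python sketch below is our own illustration, not the worm's code; in particular, the choice of SO_REUSEADDR as the option consuming the optval flag is an assumption:

```
import socket

# Create a stream (TCP) socket in the Internet address family, as in the worm's checkother().
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Set a socket option to 1, analogous to the optval flag in the snippet;
# SO_REUSEADDR is chosen here purely for illustration.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

print(s.fileno() >= 0)   # a non-negative descriptor signals success, mirroring the s < 0 check
s.close()
```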
Task 8 asks ChatGPT to explain the image compression algorithm underpinning the JPEG image format.
**Task 8:**_Revisit the JPEG format (1992), when the Joint Photographic Experts Group published specifications for a standard--the JPEG--to make image files smaller, using royalty-free lossy compression, and enable internet transport without raw or bulky data transfer_
**Prompt and code:** Please comment on each line of the following: (_JPEG code without commentary_).
**Response:** This code appears to implement a function that performs a discrete cosine transform (DCT) on an input array of doubles. The DCT is a mathematical operation that is used to transform a signal from the time domain to the frequency domain.
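For readers unfamiliar with the transform the response names, a naive one-dimensional DCT-II can be written in a few lines of Python. This is an illustrative sketch of ours, not the JPEG reference implementation (which uses a fast two-dimensional 8x8 variant), and the sample pixel values are arbitrary:

```
import math

def dct_1d(block):
    """Orthonormal 1-D DCT-II of a list of samples (naive O(N^2) version for clarity)."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

# A row of sample 8-bit pixel values; most of the signal energy lands in the first coefficients.
print([round(c, 2) for c in dct_1d([52, 55, 61, 66, 70, 61, 64, 73])])
```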
Task 9 asks ChatGPT to explain the basic image algorithm underpinning the first web browser, Mosaic.
**Task 9:**_Revisit the Mosaic browser (1993), what enabled non-textual birth of the web_
```
MaskImage(dsp, data, width, height, depth, img_info, clip)
    Display *dsp;
    unsigned char *data;
    int width, height;
    int depth;
    ImageInfo *img_info;
    int clip;
{
    int linepad, shiftnum;
    int shiftstart, shiftstop, shiftinc;
    int bytesperline, bpp;
    int temp;
    int w, h;
    XImage *newimage;
    unsigned char *bit_data, *bitp, *datap;
    Visual *theVisual;
    int bmap_order;
    unsigned long c;
    int rshift, gshift, bshift;

#ifdef NEW
    switch (bpp = bits_per_pixel(dsp, depth))
#else
    switch (depth)
```
Task 10 asks ChatGPT to explain the invisible pixel tracker that enabled social media and advertising models.
Task 11 asks ChatGPT to explain the link sharing code for early Wikipedia compilations of large crowd-sourced knowledge bases.
**Task 11:**_Revisit the Wiki (1994), what paved the way for Wikipedia_
Task 12 asks ChatGPT to explain the RSA encryption code that enabled e-commerce sites to take early payments in its obfuscated format.
```
#!/bin/perl-s---export-a-crypto-system-sig-RSA-3-lines-PERLSm=unpack(H.Sw,Sm."0"xSw),S_='echo"16doSw 2+4Qi0Sd+^1[d2%Sa 2/d0<X+d*La1=z|USn%0]SX5k"[Sm*]|EszlXX++p|dc's/n_|W/g,printpack(H*'S_)whileread(STDIN,Sm,(Sw=2*Sd-1+length(Sn)&-1)/2)
```
**Prompt and code:** What does this code do?

**Response:** This code appears to define a function in the Perl programming language called "AsLink" that takes a single argument, called "Snum". The function appears to be used to create a link or an image tag for a given reference.
Task 13 asks ChatGPT to explain the RSA encryption code that enabled e-commerce sites to take early payments in its obfuscated format.
**Task 13:**_Revisit the secure e-commerce enabling RSA code (1995), how the Rivest-Shamir-Adleman algorithm became a secret that could not be exported or shown to a foreign national_
Task 14 asks ChatGPT to explain the Google PageRank algorithm that organized link priorities and relevance to user queries.
**Task 14:**_Revisit Google's PageRank Algorithm (1996), how the most relevant links could be sorted based on the ratio of higher incoming links_
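As a concrete anchor for the mechanism this task refers to, a minimal power-iteration sketch is given below. It is our own illustrative Python, not Google's code, and the three-page link graph, damping factor, and tolerance are assumed values:

```
import numpy as np

def pagerank(adj, d=0.85, tol=1e-9):
    """Minimal PageRank by power iteration; adj[i, j] = 1 when page j links to page i."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=0)
    out_degree[out_degree == 0] = 1                 # guard against division by zero
    M = adj / out_degree                            # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * M @ r             # damped random-surfer update
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Three pages: 0 and 1 both link to 2, and 2 links back to 0.
links = np.array([[0, 0, 1],
                  [0, 0, 0],
                  [1, 1, 0]], dtype=float)
print(pagerank(links))                               # page 2 collects the highest score
```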
Task 15 asks ChatGPT to explain the one-line virus that acts as the first distributed fork bomb in its obfuscated format.
**Task 15:**_Revisit the one line virus (2002), how a fork bomb works_
**Prompt and code:**

:(){ :|:& };:

**Response:** It looks like this code is actually a series of Linux shell commands separated by semicolons. It is not a valid Python code.
The first command, :, is a shell builtin that does nothing. The second command, ( ), creates a subshell and runs the commands inside it. The third command, { : : & }, runs the command : in the background and pipes its output to the command : using the \(|\) operator. The fourth command, :, is a command separator that allows multiple commands to be run on the same line. The fifth command, :, is a shell builtin that does nothing.
Overall, this code does not perform any meaningful actions and would have no effect if it were executed in a shell.
**Glossator:** could it be a fork bomb?
**Response:** It is possible that this code could be used to create a fork bomb, which is a type of denial of service (DoS) attack that can cause a computer's system resources to be exhausted.
A fork bomb works by repeatedly creating new processes (forks) until the system runs out of resources and becomes unresponsive. The : command is used as a placeholder for the command to be run in the background, and the & operator is used to run the command in the background. The \(|\) operator pipes the output of one command to another command, but in this case, both commands are :, which does nothing. This causes the shell to run an infinite loop, continually creating new processes and consuming more and more system resources.
It is important to be careful when executing unfamiliar code, especially in a shell, as it could potentially cause harm to the system.
Task 16 asks ChatGPT to explain the simple robotic navigation code redirecting the robot when it hits a barrier.
**Task 16:**_Revisit the Roomba Vacuum Guidance System (2002), how technology was made to move robotically in the world in complex ways using simple recovery rules_
Task 17 asks ChatGPT to explain the priority algorithm underpinning wireless cell networks.
Task 18 asks ChatGPT to explain the Bitcoin validation algorithm that rates the (low) probability of blockchain attacks.
**Task 18:** _Revisit Bitcoin (2008), how cryptocurrency reports the small likelihood of an attacker taking over the Bitcoin blockchain_
**Prompt and code:** Comment the code:

**Glossator:** what does the code do in a single sentence?

**Response:** The code defines a function called AttackerSuccessProbability that takes two arguments: a probability q and an integer z, and returns a probability representing the probability of success of an attacker based on the values of q and z.

**Glossator:** how would the code assist in the blockchain?

**Response:** One possible way in which the code could be used in the context of the blockchain is to calculate the probability of success of an attacker trying to compromise the security of the blockchain network. For example, the q and z arguments might represent the probability of the attacker finding a valid solution to a cryptographic puzzle and the number of attempts the attacker has made, respectively. The return value of the function could then be used to assess the likelihood that the attacker will be able to successfully compromise the network.
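For reference, the routine under discussion is short enough to transcribe. The Python version below is our transcription of the widely published attacker-success calculation, assuming the standard whitepaper formulation rather than the exact snippet shown to ChatGPT:

```
import math

def attacker_success_probability(q, z):
    """Probability that an attacker controlling share q of the hash power ever catches up
    from z blocks behind (Poisson-weighted gambler's-ruin calculation)."""
    p = 1.0 - q
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

print(attacker_success_probability(0.1, 6))   # roughly 0.0002 for a 10% attacker and six confirmations
```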
Task 19 asks ChatGPT to explain the Heartbleed security vulnerability, one that had widespread effects but also evaded human detection for a long period.
**Task 19:**_Revisit Heartbleed (2014), how one security vulnerability crippled credentials, encryption keys and 66% of all websites after decades of use_
**Prompt and code:** Explain what this code might do

**Response:** This code appears to be part of a program that is using the OpenSSL library to implement the Heartbeat extension of the Transport Layer Security (TLS) protocol. The Heartbeat extension allows a client to send a message to a server, which the server then echoes back to the client, allowing the client to verify that the connection is still active and that the server is responding. Overall, this code appears to be constructing a TLS Heartbeat message and storing it in a block of memory. The message consists of a fixed type field, a variable-length payload field, and a variable-length padding field filled with pseudo-random bytes. The message is stored in a contiguous block of memory, with the various fields being written to the block in sequence. The Heartbeat extension, which is implemented using the code you provided, was found to have a vulnerability that allowed an attacker to send a carefully crafted message to a server and potentially reveal sensitive information from the server's memory.
**Task 20:**_Probe a challenging SQL_
**Prompt and code:** Using the COALESCE function, add a FullName column using the following query:

```
SELECT [Title], [FirstName], [MiddleName], [LastName]
FROM [AdventureWorks].[Person].[Person]
```
**Prompt and code:** Fix this query:
```
SELECT [Title], [FirstName], [MiddleName], [LastName],
       COALESCE([Title] + ' ', '') & COALESCE([FirstName] + ' ', '') &
       COALESCE([MiddleName] + ' ', '') & COALESCE([LastName], '') AS FullName
FROM [AdventureWorks].[Person].[Person]
```
**Response:** It looks like there is a problem with the query because the & operator is being used to concatenate the strings instead of the + operator. The correct query should be:

```
SELECT [Title], [FirstName], [MiddleName], [LastName],
       COALESCE([Title] + ' ', '') + COALESCE([FirstName] + ' ', '') +
       COALESCE([MiddleName] + ' ', '') + COALESCE([LastName], '') AS FullName
FROM [AdventureWorks].[Person].[Person]
```
The COALESCE function returns the first non-NULL value in a list of arguments. In this case, it is used to concatenate the values in the Title, FirstName, and LastName columns, with a space character between each column. If any of the columns are NULL, they will be replaced with an empty string, so that the FullName column does not contain any NULL values.
The resulting FullName column will contain the concatenated values of the Title, FirstName, and LastName columns, with a space character between each column. If any of these columns are NULL, they will be replaced with an empty string in the FullName column.
**Prompt and code:** Extract the Access Query from this text:
``` 'Status disagrees strSQLSStatement ="UPDATE PartData INNER JOIN PartStatus " & "ON PartData.PartNumber = PartStatus.PartNumber " & "AND PartData.Manufacturer = PartStatus.Manufacturer = PartStatus.Manufacturer " & "SET PartData.Status = "Incorrect', "& _ "PartData.Comments = 'Tools Disagree" & _ ```
**Response:** Here is the Access query in plain text:

UPDATE PartData INNER JOIN PartStatus ON PartData.PartNumber = PartStatus.PartNumber AND PartData.Manufacturer = PartStatus.Manufacturer SET PartData.Status = 'Incorrect', PartData.Comments = 'Tools Disagree' WHERE ((PartData.VendorBStatus IS NOT NULL And PartData.VendorBStatus <> '') And (PartData.VendorAStatus IS NOT NULL And PartData.VendorAStatus <> '')) AND (PartData.Status = PartStatus.FinalStatus)
**Prompt and code:** Convert that to work in SQL 2016
**Response:** Here is the query converted to work in SQL Server 2016:
UPDATE pd SET pd.Status = 'Incorrect', pd.Comments = 'Tools Disagree' FROM PartData pd INNER JOIN PartStatus ps ON pd.PartNumber = ps.PartNumber AND pd.Manufacturer = ps.Manufacturer WHERE (pd.VendorBStatus IS NOT NULL AND pd.VendorBStatus <> '') AND (pd.VendorAStatus IS NOT NULL AND pd.VendorAStatus <> '') AND (pd.Status = ps.FinalStatus)
**Response:** Here are a few suggestions for optimizing the WHERE clause in the query:
Consider using the COALESCE() function to check for NULL values in the VendorAStatus and VendorBStatus columns. This can avoid having to use multiple AND clauses to check for NULL and empty values.
WHERE COALESCE(pd.VendorAStatus, '') <> '' AND COALESCE(pd.VendorBStatus, '') <> ''

Use the EXISTS operator to check for the existence of rows in the PartStatus table that match the PartNumber and Manufacturer columns in the PartData table. This can improve the performance of the query, especially if the PartStatus table is large.

WHERE EXISTS ( SELECT 1 FROM PartStatus ps WHERE pd.PartNumber = ps.PartNumber AND pd.Manufacturer = ps.Manufacturer )
Consider using table indexes to improve the performance of the
JOIN and WHERE clauses. This can make the query run faster by allowing the database server to quickly locate the rows it needs to update.
CREATE INDEX idx_PartNumber_Manufacturer ON PartStatus (PartNumber, Manufacturer);
Here is the revised query using these suggestions:
UPDATE pd SET pd.Status = 'Incorrect',
pd.Comments = 'Tools Disagree' FROM PartData pd INNER JOIN PartStatus ps ON pd.PartNumber = ps.PartNumber AND pd.Manufacturer = ps.Manufacturer
## 4 Discussion and Conclusions
The research has applied AI-driven code assistants to explore half of the surveyed computer code that changed the course of modern technology, including email, internet browsing, robotics, and malicious software. The original contribution of this research analyzes half of the most significant code advances in the last half-century and, in some cases, offers considerable improvements in clarity or performance. In no instances examined was ChatGPT unable or unwilling to shed light on obfuscated code or software lacking explanatory commentary. Some benchmark tasks, such as the Apollo Lunar Lander memory management software, would strain the best modern software analysts. We generated additional sample problems based on bug corrections and code optimizations requiring much deeper reasoning than a traditional Google search might provide. Future work will emphasize adding automated documentation and code commentary along with translating selective large code bases into more modern versions that make contact with multiple new application programming interfaces (API) and chained multi-tasks. In software engineering, the advent of human-level software expertise promises new approaches to refactoring legacy code or simplifying the explanation or overall functionality of high-value repositories. Akin to a new scientific instrument, the AI-driven code assistant seems fruitful for further practical exploration, particularly in its most recent incarnation as a chat interface capable of long-term memory, context, and polyglot understanding of software principles learned by examples without explicit rules or human guidance.
## Acknowledgements
The authors thank the PeopleTec Technical Fellows program for encouragement and project assistance. The authors thank the researchers at OpenAI for developing large language models and allowing public access to ChatGPT.
|
2305.17829 | Time-Varying Vector Error-Correction Models: Estimation and Inference | This paper considers a time-varying vector error-correction model that allows
for different time series behaviours (e.g., unit-root and locally stationary
processes) to interact with each other to co-exist. From practical
perspectives, this framework can be used to estimate shifts in the
predictability of non-stationary variables, test whether economic theories hold
periodically, etc. We first develop a time-varying Granger Representation
Theorem, which facilitates the establishment of asymptotic properties for the
model, and then propose estimation and inferential methods and theory for both
short-run and long-run coefficients. We also propose an information criterion
to estimate the lag length, a singular-value ratio test to determine the
cointegration rank, and a hypothesis test to examine the parameter stability.
To validate the theoretical findings, we conduct extensive simulations.
Finally, we demonstrate the empirical relevance by applying the framework to
investigate the rational expectations hypothesis of the U.S. term structure. | Jiti Gao, Bin Peng, Yayi Yan | 2023-05-28T23:52:09Z | http://arxiv.org/abs/2305.17829v1 | # Time-Varying Vector Error-Correction Models:
###### Abstract
This paper considers a time-varying vector error-correction model that allows for different time series behaviours (e.g., unit-root and locally stationary processes) to interact with each other to co-exist. From practical perspectives, this framework can be used to estimate shifts in the predictability of non-stationary variables, test whether economic theories hold periodically, etc. We first develop a time-varying Granger Representation Theorem, which facilitates the establishment of asymptotic properties for the model, and then propose estimation and inferential methods and theory for both short-run and long-run coefficients. We also propose an information criterion to estimate the lag length, a singular-value ratio test to determine the cointegration rank, and a hypothesis test to examine the parameter stability. To validate the theoretical findings, we conduct extensive simulations. Finally, we demonstrate the empirical relevance by applying the framework to investigate the rational expectations hypothesis of the U.S. term structure.
**Keywords:** Cointegration, Gaussian Approximations, Granger Representation Theorem, Iterated Time-Varying Functions, Term Structure of Interest Rates
Introduction
Vector error-correction models (VECM) have been widely used in practice to study the short-run dynamics and long-run equilibrium of multiple non-stationary time series (e.g., Barigozzi et al. 2022). These models have proven to be useful in forecasting non-stationary time series and testing the validity of various hypotheses and theories, such as the rational expectations hypothesis of the term structure (Bauer & Rudebusch 2020), money demand theory (Benati et al. 2021), real business-cycle theory (King et al. 1991), and more. However, it is important to note that all of these studies are based on the assumption of constant parameters. In practice, parameters may evolve over time, and failure to take these changes into account leads to incorrect policy implications and predictions (Fan & Yao 2003, p. 15).
To model the changes over time, various parametric VECM models (e.g., Hansen & Seo 2002, Hansen 2003, Bergamelli et al. 2019) have been proposed to allow for abrupt structural breaks. However, for this line of research, the corresponding estimation and inferential theory has not been well established, which can undermine the reliability of these models, perhaps because the relevant empirical process theory does not typically apply to this case when the data generating process exhibits some uncertain information (e.g., unknown structural breaks and dates) and involves different time series behaviours (e.g., unit-root and piecewise stationary processes). For instance, Hansen & Seo (2002) propose a method to test for the presence of a threshold effect in the VECM model with a single cointegration vector, but do not establish the corresponding estimation theory. Similarly, Hansen (2003) assumes the change points and the number of cointegration relations are given, while Bergamelli et al. (2019) consider testing the structural breaks of a VECM model with unknown break dates but do not provide estimation and inferential theories for model parameters. What is more, model misspecification and parameter instability may undermine the performance of parametric time-varying VECM models. Hansen (2001) notes that it is unlikely that a structural break would be immediate and that it might be more reasonable to allow for a structural change to take a period of time to take effect. Therefore, models that allow for smooth changes over time may provide a more accurate representation of the dynamics in reality.
Having said that, we consider the time-varying vector error-correction model (VECM):
\[\Delta\mathbf{y}_{t}=\boldsymbol{\Pi}(\tau_{t})\mathbf{y}_{t-1}+\sum_{j=1}^{ p_{0}-1}\boldsymbol{\Gamma}_{j}(\tau_{t})\Delta\mathbf{y}_{t-j}+\mathbf{u}_{t}, \quad\mathbf{u}_{t}=\boldsymbol{\omega}(\tau_{t})\boldsymbol{\varepsilon}_{t} \tag{1.1}\]
for \(1\leq t\leq T\), where \(\boldsymbol{\Pi}(\tau)=\boldsymbol{\alpha}(\tau)\boldsymbol{\beta}^{\top}\), \(\boldsymbol{\alpha}(\tau)\) is the \(d\times r_{0}\) adjustment coefficients, \(\boldsymbol{\beta}\) is the \(d\times r_{0}\) cointegration matrix, and \(r_{0}\) is the cointegration rank to be determined by the dataset. Here, \(\boldsymbol{\omega}(\cdot)\) governs the time-varying dynamics of the covariance matrix of \(\{\mathbf{u}_{t}\}\)
so it characterizes the permanent changes in unconditional volatility. Here, \(\{\mathbf{y}_{t}\}\) and \(\{\Delta\mathbf{y}_{t}\}\) naturally connect two types of nonstationarity using a time-varying setup. Note that we assume the cointegration matrix \(\boldsymbol{\beta}\) to be time-invariant, since there is ample empirical evidence showing that the short-run dynamics should be time-varying, while the long-run relationship between economic variables is quite stable (e.g., the expectations hypothesis of the term structure in Hansen 2003; long-run money demand theory in Benati et al. 2021). In some specific cases, such as testing the present-value theory for stock returns (e.g., Campbell and Shiller 1987), the cointegration matrix is even known a priori. Mathematically, the decomposition of \(\boldsymbol{\Pi}(\tau)=\boldsymbol{\alpha}(\tau)\boldsymbol{\beta}^{\top}\) is simply an identification restriction. Without any structure, it is impossible to identify the elements in the decomposition of \(\boldsymbol{\Pi}(\tau)\) due to the multiplication form, so we go along with the existing literature by assuming that \(\boldsymbol{\beta}\) is a time-invariant matrix.
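To fix ideas, model (1.1) is straightforward to simulate. The sketch below is our own illustrative Python with \(d=2\), \(r_{0}=1\), \(p_{0}=1\) (no lagged differences) and made-up functional forms for \(\boldsymbol{\alpha}(\cdot)\), \(\boldsymbol{\beta}\) and \(\boldsymbol{\omega}(\cdot)\); it generates a bivariate pair whose components are individually I(1) while the error-correction term \(\boldsymbol{\beta}^{\top}\mathbf{y}_{t}\) remains mean-reverting:

```
import numpy as np

# Minimal simulation sketch of model (1.1); all functional forms are illustrative choices.
rng = np.random.default_rng(0)
T = 500
beta = np.array([[1.0], [-1.0]])                      # time-invariant cointegration vector

def alpha(tau):                                        # smoothly time-varying adjustment
    return np.array([[-0.3 - 0.2 * tau], [0.2]])

def omega(tau):                                        # smoothly time-varying volatility
    return (0.5 + 0.5 * tau) * np.eye(2)

y = np.zeros((T + 1, 2))
for t in range(1, T + 1):
    tau_t = t / T
    u_t = omega(tau_t) @ rng.standard_normal(2)
    dy = (alpha(tau_t) @ beta.T) @ y[t - 1] + u_t      # error-correction update
    y[t] = y[t - 1] + dy

# Each component wanders like a unit-root process, while beta' y_t stays mean-reverting.
print(np.std(np.diff(y[:, 0])), np.std(y @ beta))
```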
Having presented our model in (1.1), we comment on a challenge from the methodological perspective. In the extant literature of time-varying dynamic models, one often adopts the so-called "stationary approximation" technique in order to find a weakly dependent stationary approximation for time-varying dynamical processes before being able to establish asymptotics (e.g., Dahlhaus 1996, Chandler and Polonik 2012, Zhang and Wu 2012 on time-varying AR models; Dahlhaus and Polonik 2009 on time-varying ARMA models; Dahlhaus and Rao 2006, Truquet 2017 on time-varying ARCH models; Karmakar et al. 2022 on time-varying AR-ARCH models; Gao et al. 2022 on time-varying VARMA-GARCH models). However, for (1.1), \(\Delta\mathbf{y}_{t}\) is expressed in terms of iterated time-varying functions, so the properties of \(\Delta\mathbf{y}_{t}\) involve the integrated processes \(\{\mathbf{y}_{s}\}_{s<t}\) and also depend on infinitely many past points \(\{s/T\}_{s<t}\). As a consequence, how to approximate \(\{\Delta\mathbf{y}_{t}\}\) and \(\{\mathbf{y}_{t}\}\) becomes challenging and the extant empirical process theories for stationary or locally stationary processes do not apply in this case, to the best of our knowledge.
In view of the aforementioned issues, in this study our contributions to the literature are as follows:
1. We first develop a time-varying version of Granger Representation Theorem for model (1.1), which indicates that \(\Delta\mathbf{y}_{t}\) can be approximated by a time-varying vector moving average infinity (\(\text{VMA}(\infty)\)) process, and thus \(\mathbf{y}_{t}\) can be approximated by a partial sum of time-varying \(\text{VMA}(\infty)\) processes;
2. We then provide a comprehensive study on time-varying \(\text{VMA}(\infty)\) processes, including the Nagaev-type inequality, Gaussian approximation and the limit theorem for quadratic forms. With these fundamental results in hand, we are able to establish an estimation theory and the corresponding properties for both short-run and long-run coefficients.
3. In addition, we propose an information criterion to estimate the lag length, a
singular-value ratio test to determine the cointegration rank, and a hypothesis test to examine the parameter stability.
In our empirical analysis, we utilize the newly developed framework to investigate the rational expectations hypothesis of the interest rate term structure. Our results indicate that (1) the predictability of the term structure exhibits significant time-varying behaviour, and (2) the expectations hypothesis of the term structure holds periodically, particularly during periods of unusually high inflation. These findings lend support to the work of Andreasen et al. (2021), who propose a macro-finance model of the term structure to explain variations in bond return predictability and identify the role of Federal Reserve monetary policy in stabilizing inflation as a key factor.
The rest of the paper is organized as follows. Section 2 presents the estimation methodology and theory. In Section 3, we conduct extensive simulation studies to examine the theoretical findings. Section 4 investigates the rational expectations hypothesis of the U.S. term structure. Section 5 concludes. All proofs are collected in the online supplementary Appendix A of the paper.
Before proceeding further, it is convenient to introduce some notations: \(|\cdot|\) denotes the absolute value of a scalar or the spectral norm of a matrix; for a random variable \(\mathbf{v}\), let \(\|\mathbf{v}\|_{q}=(E|\mathbf{v}|^{q})^{1/q}\) for \(q\geq 1\); \(\otimes\) denotes the Kronecker product; \(\mathbf{I}_{a}\) stands for an \(a\times a\) identity matrix; \(\mathbf{0}_{a\times b}\) stands for an \(a\times b\) matrix of zeros, and we write \(\mathbf{0}_{a}\) for short when \(a=b\); for a function \(g(w)\), let \(g^{(j)}(w)\) be the \(j^{th}\) derivative of \(g(w)\), where \(j\geq 0\) and \(g^{(0)}(w)\equiv g(w)\); let \(\tilde{c}_{k}=\int_{-1}^{1}u^{k}K(u)\mathrm{d}u\) and \(\tilde{v}_{k}=\int_{-1}^{1}u^{k}K^{2}(u)\mathrm{d}u\) for integer \(k\geq 0\); \(\mathrm{vec}(\cdot)\) stacks the elements of an \(m\times n\) matrix as an \(mn\times 1\) vector; \(\mathbf{A}^{+}\) denotes the Moore-Penrose (MP) inverse of matrix \(\mathbf{A}\); for a matrix \(\mathbf{A}\) with full column rank, we let \(\mathbf{P}_{\mathbf{A}}=\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\mathbf{A} ^{\top}\) and \(\mathbf{\overline{A}}=\mathbf{A}(\mathbf{A}^{\top}\mathbf{A})^{-1}\); for \(m\geq n\), we denote by \(\mathbf{M}_{\perp}\) an orthogonal matrix complement of the \(m\times n\) matrix \(\mathbf{M}\) with \(\mathrm{Rank}(\mathbf{M})=n\); \(\rightarrow_{P}\), \(\rightarrow_{D}\) and \(\Rightarrow\) denote convergence in probability, convergence in distribution, weak convergence with respect to the uniform metric; let \(\mathbf{W}_{d}(\cdot,\boldsymbol{\Sigma})\) be a \(d\)-dimensional Brownian motion with covariance matrix \(\boldsymbol{\Sigma}\).
## 2 The Methodology and Asymptotics
In what follows, Section 2.1 studies the properties of \(\{\mathbf{y}_{t}\}\) and \(\{\Delta\mathbf{y}_{t}\}\), and provide (integrated) time-varying \(\mathrm{VMA}(\infty)\) approximations for both processes. In Section 2.2, we assume \(r_{0}\) and \(p_{0}\) are known for simplicity, and consider the estimation of \(\boldsymbol{\alpha}(\cdot)\) and \(\boldsymbol{\beta}\). Section 2.3 proposes an information criterion to estimate the lag length (i.e., \(p_{0}\)), and a singular-value ratio test to determine the cointegration rank (i.e., \(r_{0}\)). Finally, Section 2.4 gives a parameter stability test, which provides statistical evidence to support the necessity of time-varying VECM models practically.
### Approximations of \(\{\mathbf{y}_{t}\}\) and \(\{\Delta\mathbf{y}_{t}\}\)
To investigate (1.1), the first obstacle lies in the complexity of the dependence structure of \(\Delta\mathbf{y}_{t}\), which is expressed in terms of iterated time-varying functions with infinite memory. This implies that the population properties of \(\Delta\mathbf{y}_{t}\) involve the integrated processes \(\{\mathbf{y}_{s}\}_{s<t}\) and also depend on infinitely many past points \(\{s/T\}_{s<t}\). As a result, the extant empirical process theory does not apply. To address this issue, we initiate our analysis by seeking local (not necessarily stationary) approximations for each \(\Delta\mathbf{y}_{t}\) and \(\mathbf{y}_{t}\), and then establish some necessary asymptotic properties for these processes.
We now introduce some necessary assumptions.
**Assumption 1**.:
1. _Define_ \(\mathbf{C}_{\tau}(L)=(1-L)\mathbf{I}_{d}-\boldsymbol{\alpha}(\tau)\boldsymbol{ \beta}^{\top}L-\sum_{j=1}^{p_{0}-1}\boldsymbol{\Gamma}_{j}(\tau)(1-L)L^{j}\)_. Suppose that_ 1. \(\det(\mathbf{C}_{\tau}(L))=0\) _if and only if_ \(|L|>1\) _or_ \(L=1\) _uniformly over_ \(\tau\in[0,1]\)_; the number of unit roots,_ \(L=1\)_, is exactly_ \(d-r_{0}\)_;_ 2. \(\mathrm{Rank}(\boldsymbol{\alpha}(\tau))=r_{0}\) _uniformly over_ \(\tau\in[0,1]\)_, and_ \(\mathrm{Rank}(\boldsymbol{\beta})=r_{0}\)_;_ 3. \(\boldsymbol{\alpha}_{\perp}^{\top}(\tau)\left[\mathbf{I}_{d}-\sum_{i=1}^{p_{0} -1}\boldsymbol{\Gamma}_{i}(\tau)\right]\boldsymbol{\beta}_{\perp}\) _is nonsingular for each given_ \(\tau\in[0,1]\)_._
2. _Suppose that_ 1. _Each element of_ \(\boldsymbol{\omega}(\tau)\)_,_ \(\boldsymbol{\alpha}(\tau)\) _and_ \(\boldsymbol{\Gamma}_{j}(\tau)\) _with_ \(j=1,\ldots,p_{0}-1\) _is third order continuously differentiable on_ \([0,1]\)_;_ 2. \(\mathbf{y}_{0}=O_{P}(1)\)_, and, for_ \(\tau<0\)_,_ \(\boldsymbol{\omega}(\tau)=\boldsymbol{\omega}(0)\)_,_ \(\boldsymbol{\alpha}(\tau)=\boldsymbol{\alpha}(0)\)_, and_ \(\boldsymbol{\Gamma}_{j}(\tau)=\boldsymbol{\Gamma}_{j}(0)\) _with_ \(j=1,\ldots,p_{0}-1\)_._
3. \(\{\boldsymbol{\varepsilon}_{t}\}\) _is a sequence of martingale differences such that_ \(E(\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t}^{\top}\mid \mathscr{F}_{t-1})=\mathbf{I}_{d}\) _almost surely (a.s.), where_ \(\mathscr{F}_{t}=\sigma(\boldsymbol{\varepsilon}_{t},\boldsymbol{\varepsilon}_{ t-1},\ldots)\)_, and_ \(\max_{t}\|\boldsymbol{\varepsilon}_{t}\|_{\delta}<\infty\) _for some_ \(\delta>4\)_;_ \(\boldsymbol{\Omega}(\tau)=\boldsymbol{\omega}(\tau)\boldsymbol{\omega}(\tau)^{ \top}>0\) _is uniformly over_ \(\tau\in[0,1]\)_._
Assumption 1.1 explicitly excludes explosive processes, and ensures that \(\mathbf{y}_{t}\) is an integrated process of order one with \(d-r_{0}\) common unit root components, as well as \(r_{0}\) cointegration relationships. In Assumption 1.1.c, \(\boldsymbol{\alpha}_{\perp}^{\top}(\tau)[\mathbf{I}_{d}-\sum_{i=1}^{p_{0}-1} \boldsymbol{\Gamma}_{i}(\tau)]\boldsymbol{\beta}_{\perp}\) being nonsingular ensures the existence of a time-varying Granger Representation Theorem for \(\mathbf{y}_{t}\), which facilitates the asymptotic development later on.
Assumption 1.2.a allows the underlying data generating process to evolve over time in a smooth manner. Assumption 1.2.b regulates the behaviour of \(\mathbf{y}_{t}\) for \(t\leq 0\), which is standard in the literature of locally stationary models (e.g., Vogt 2012) and unit-root processes (e.g., Li et al. 2019).
Assumption 1.3 imposes conditions on the error innovations, which are standard in the time series literature (e.g., Lutkepohl 2005).
With these conditions in hand, we are able to provide local approximations for each \(\Delta\mathbf{y}_{t}\) and \(\mathbf{y}_{t}\) with \(t\geq 1\). For notational simplicity, we denote
\[\mathbf{B}_{\tau}(L)=\mathbf{J}^{-1}\mathbf{B}_{\tau}^{*}(L)\mathbf{J}\quad \text{and}\quad\mathbf{B}_{\tau}^{*}(L)=\mathbf{J}[\boldsymbol{\Gamma}_{\tau}( L)\overline{\boldsymbol{\beta}}(1-L)-\boldsymbol{\alpha}(\tau)L,\ \boldsymbol{\Gamma}_{\tau}(L)\boldsymbol{\beta}_{\perp}]\]
in which \(\mathbf{J}=[\boldsymbol{\beta},\overline{\boldsymbol{\beta}}_{\perp}]^{\top}\) and \(\boldsymbol{\Gamma}_{\tau}(L)=\mathbf{I}_{d}-\sum_{i=1}^{p_{0}-1}\boldsymbol {\Gamma}_{i}(\tau)L^{i}\).
**Lemma 2.1**.: _Let Assumption 1 hold. 1. We obtain that \(\det\left(\mathbf{B}_{\tau}(L)\right)\neq 0\) for all \(|L|\leq 1\), and \(\mathbf{B}_{\tau}(L)\) admits an expression of the form: \(\mathbf{B}_{\tau}(L)=\mathbf{I}_{d}-\sum_{i=1}^{p}\mathbf{B}_{i}(\tau)L^{i}\), thus we can denote_
\[\mathbf{B}_{\tau}^{-1}(L):=\boldsymbol{\Psi}_{\tau}(L)=\sum_{j=0}^{\infty} \boldsymbol{\Psi}_{j}(\tau)L^{j}\]
_that satisfies \(\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\boldsymbol{\Psi}_{\tau}(1)= \boldsymbol{\beta}_{\perp}[\boldsymbol{\alpha}_{\perp}^{\top}(\tau) \boldsymbol{\Gamma}_{\tau}(1)\boldsymbol{\beta}_{\perp}]^{-1}\boldsymbol{ \alpha}_{\perp}^{\top}(\tau)\). 2. Equation (1.1) admits the following representation:_
\[\mathbf{y}_{t}=\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\sum_{j=1}^{t}\mathbf{z }_{j}+\mathbf{P}_{\boldsymbol{\beta}}\mathbf{z}_{t}+\mathbf{P}_{\boldsymbol{ \beta}_{\perp}}\mathbf{y}_{0}\quad\text{in which}\quad\mathbf{z}_{t}=\sum_{i=1}^{ p}\mathbf{B}_{i}(\tau_{t})\mathbf{z}_{t-i}+\mathbf{u}_{t}.\]
3. _For any_ \(\tau\in[0,1]\)_, model (_1.1_) can be approximated by_ \[\Delta\widetilde{\mathbf{y}}_{t}(\tau)=\boldsymbol{\Pi}(\tau)\widetilde{ \mathbf{y}}_{t-1}(\tau)+\sum_{j=1}^{p_{0}-1}\boldsymbol{\Gamma}_{j}(\tau) \Delta\widetilde{\mathbf{y}}_{t-j}(\tau)+\widetilde{\mathbf{u}}_{t}(\tau) \quad\text{with}\quad\widetilde{\mathbf{u}}_{t}(\tau)=\boldsymbol{\omega}( \tau)\boldsymbol{\varepsilon}_{t},\]
_which admits the following representation:_
\[\widetilde{\mathbf{y}}_{t}(\tau)=\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\boldsymbol{\Psi}_{\tau}(1)\sum_{i=1}^{t}\widetilde{\mathbf{u}}_{i}(\tau)+\mathbf{P}_{\boldsymbol{\beta}}[\boldsymbol{\Psi}_{\tau}(1)\widetilde{\mathbf{u}}_{t}(\tau)+\boldsymbol{\Psi}_{\tau}^{*}(L)\widetilde{\mathbf{u}}_{t-1}(\tau)]-\boldsymbol{\Psi}_{\tau}^{*}(L)\widetilde{\mathbf{u}}_{t}(\tau)+\widetilde{\mathbf{y}}_{0}^{*}(\tau) \tag{2.1}\] \[\widetilde{\mathbf{y}}_{0}^{*}(\tau)=\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\boldsymbol{\Psi}_{\tau}^{*}(L)\widetilde{\mathbf{u}}_{0}(\tau)+\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\widetilde{\mathbf{y}}_{0}(\tau),\]
_such that_
1. \(\max_{1\leq t\leq T}\|\Delta\mathbf{y}_{t}-\Delta\widetilde{\mathbf{y}}_{t}( \tau_{t})\|_{\delta}=O(1/T)\)_,_
2. \(\sup_{\tau,\tau^{\prime}\in[0,1]}\|\Delta\widetilde{\mathbf{y}}_{t}(\tau)- \Delta\widetilde{\mathbf{y}}_{t}(\tau^{\prime})\|_{\delta}=O(|\tau-\tau^{ \prime}|)\)_,_
3. \(\max_{1\leq t\leq T}\|\mathbf{y}_{t}-\mathbf{y}_{0}-\sum_{j=1}^{t}\Delta \widetilde{\mathbf{y}}_{j}(\tau_{j})\|_{\delta}=O(1/\sqrt{T})\)_,_
_where \(\widetilde{\mathbf{y}}_{0}(\tau)\) is an initial value of \(\widetilde{\mathbf{y}}_{t}(\tau)\), \(\boldsymbol{\Psi}_{\tau}^{*}(L)=\sum_{j=0}^{\infty}\boldsymbol{\Psi}_{j}^{*}(\tau)\), and \(\boldsymbol{\Psi}_{j}^{*}(\tau)=\sum_{i=j+1}^{\infty}\boldsymbol{\Psi}_{i}(\tau)\)._
_4. Finally, we obtain that_
\[T^{-1/2}\mathbf{y}_{\lfloor Tu\rfloor}\Rightarrow\mathbf{W}_{d}(u,\Sigma_{ \mathbf{y}}(u))\]
_uniformly over \(u\in[0,1]\), where \(\mathbf{\Sigma}_{\mathbf{y}}(u)=\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\int_{0} ^{u}\boldsymbol{\Psi}_{s}(1)\boldsymbol{\Omega}(s)\boldsymbol{\Psi}_{s}^{\top} (1)\mathrm{d}s\mathbf{P}_{\boldsymbol{\beta}_{\perp}}\)._
Lemma 2.1 presents a local approximation of \(\{\Delta\mathbf{y}_{t}\}\), which is the foundation of our asymptotic developments. Using Lemma 2.1, we are able to establish basic properties of \(\{\Delta\mathbf{y}_{t}\}\) and \(\{\mathbf{y}_{t}\}\). In a nutshell, studying \(\{\Delta\mathbf{y}_{t}\}\) and \(\{\mathbf{y}_{t}\}\) directly is technically challenging, so we consider their local approximations, which help us understand \(\{\Delta\mathbf{y}_{t}\}\) and \(\{\mathbf{y}_{t}\}\) in every small neighbourhood. In addition, Lemma 2.1 indicates that \(\widetilde{\mathbf{y}}_{t}(\tau)\) admits a time-varying version of the Granger Representation Theorem (Johansen 1995, Theorem 4.2), so that \(\Delta\widetilde{\mathbf{y}}_{t}(\tau)\) has a time-varying VMA\((\infty)\) representation. In the online supplementary Appendix A, we provide a comprehensive study on time-varying VMA\((\infty)\) processes (such as the Nagaev-type inequality, Gaussian approximation and the limit theorem for quadratic forms), which allows us to further derive estimation and inferential properties for (1.1) in the following subsections.
### Model Estimation
We assume \(p_{0}\) and \(r_{0}\) are known in this subsection, and shall come back to work on their estimation in Section 2.3.
The local linear estimator of \([\boldsymbol{\Pi}(\tau),\boldsymbol{\Gamma}(\tau)]\) with \(\boldsymbol{\Gamma}(\tau)=[\boldsymbol{\Gamma}_{1}(\tau),\ldots,\boldsymbol{ \Gamma}_{p_{0}-1}(\tau)]\) is given by
\[[\widehat{\boldsymbol{\Pi}}(\tau),\widehat{\boldsymbol{\Gamma}}(\tau)]=[ \mathbf{V}_{T,0}(\tau),\mathbf{V}_{T,1}(\tau)]\cdot\begin{bmatrix}\mathbf{S} _{T,0}(\tau)&\mathbf{S}_{T,1}(\tau)\\ \mathbf{S}_{T,1}(\tau)&\mathbf{S}_{T,2}(\tau)\end{bmatrix}^{+}\cdot\begin{bmatrix} \mathbf{I}_{dp_{0}}\\ \mathbf{0}_{dp_{0}}\end{bmatrix},\]
where \(\mathbf{h}_{t}=\left[\mathbf{y}_{t}^{\top},\Delta\mathbf{x}_{t}^{\top}\right] ^{\top}\),
\[\mathbf{V}_{T,l}(\tau) =\sum_{t=1}^{T}\Delta\mathbf{y}_{t}\mathbf{h}_{t-1}^{\top}\left( \frac{\tau_{t}-\tau}{h}\right)^{l}K\left(\frac{\tau_{t}-\tau}{h}\right)\text{ for }l=0,1, \tag{2.2}\] \[\mathbf{S}_{T,l}(\tau) =\sum_{t=1}^{T}\mathbf{h}_{t-1}\mathbf{h}_{t-1}^{\top}\left( \frac{\tau_{t}-\tau}{h}\right)^{l}K\left(\frac{\tau_{t}-\tau}{h}\right)\text{ for }l=0,1,2.\]
Here, we use MP inverse since \(\mathbf{S}_{T,l}(\tau)\) is asymptotically singular (cf., Lemma A.1 of the supplementary Appendix A), which is referred to as "kernel-induced degeneracy" in the unit-root literature (Phillips et al. 2017).
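A compact numerical sketch of the estimator above and of (2.2) may help fix the notation. The Python below is our own sketch, not the authors' code: `dy` stacks \(\Delta\mathbf{y}_{t}\) in rows, `h_lag` stacks \(\mathbf{h}_{t-1}\) in rows, and the Epanechnikov kernel is an assumed choice:

```
import numpy as np

def local_linear_coef(dy, h_lag, tau, h, kernel=lambda u: 0.75 * np.maximum(1 - u**2, 0)):
    """Local linear estimate of [Pi(tau), Gamma(tau)] following (2.2);
    dy has rows Delta y_t (t = 1,...,T) and h_lag has rows h_{t-1} = [y_{t-1}', dx_{t-1}']'."""
    T, k = h_lag.shape
    u = (np.arange(1, T + 1) / T - tau) / h
    w = kernel(u)
    V0 = dy.T @ np.diag(w) @ h_lag                      # V_{T,0}(tau)
    V1 = dy.T @ np.diag(w * u) @ h_lag                  # V_{T,1}(tau)
    S0 = h_lag.T @ np.diag(w) @ h_lag                   # S_{T,0}(tau)
    S1 = h_lag.T @ np.diag(w * u) @ h_lag               # S_{T,1}(tau)
    S2 = h_lag.T @ np.diag(w * u**2) @ h_lag            # S_{T,2}(tau)
    S = np.block([[S0, S1], [S1, S2]])
    coef = np.hstack([V0, V1]) @ np.linalg.pinv(S)      # Moore-Penrose inverse, as in the text
    return coef[:, :k]                                   # keep the level block: [Pi_hat, Gamma_hat]
```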
Accordingly, the local linear estimator of \(\mathbf{\Omega}(\tau)\) is defined as
\[\widehat{\mathbf{\Omega}}(\tau)=\frac{1}{T}\sum_{t=1}^{T}\widehat{\mathbf{u}}_{t} \widehat{\mathbf{u}}_{t}^{\top}w_{t}(\tau),\]
where \(\widehat{\mathbf{u}}_{t}=\Delta\mathbf{y}_{t}-\widehat{\mathbf{\Pi}}(\tau_{t})\mathbf{y}_{t-1}-\widehat{\mathbf{\Gamma}}(\tau_{t})\Delta\mathbf{x}_{t-1}\), \(\Delta\mathbf{x}_{t}=[\Delta\mathbf{y}_{t}^{\top},\ldots,\Delta\mathbf{y}_{t-p_{0}+2}^{\top}]^{\top}\),
\[w_{t}(\tau) =h^{-1}K\left(\frac{\tau_{t}-\tau}{h}\right)\frac{P_{h,2}(\tau)- \frac{\tau_{t}-\tau}{h}P_{h,1}(\tau)}{P_{h,0}(\tau)P_{h,2}(\tau)-P_{h,1}^{2}( \tau)}, \tag{2.3}\] \[P_{h,l}(\tau) =\frac{1}{Th}\sum_{t=1}^{T}\left(\frac{\tau_{t}-\tau}{h}\right)^ {l}K\left(\frac{\tau_{t}-\tau}{h}\right)\text{ for }l=0,1,2.\]
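For completeness, the equivalent-kernel weights in (2.3) can be computed directly. The short sketch below is again our own illustrative Python with an assumed Epanechnikov kernel:

```
import numpy as np

def local_linear_weights(tau, T, h, kernel=lambda u: 0.75 * np.maximum(1 - u**2, 0)):
    """Equivalent-kernel weights w_t(tau) from (2.3), used to average uhat_t uhat_t'
    into Omega_hat(tau)."""
    u = (np.arange(1, T + 1) / T - tau) / h
    K = kernel(u)
    P0, P1, P2 = (np.mean(K * u**l) / h for l in (0, 1, 2))   # P_{h,l}(tau)
    return (K / h) * (P2 - u * P1) / (P0 * P2 - P1**2)

# Omega_hat(tau) is then (1/T) * sum_t uhat_t uhat_t' * w_t(tau).
```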
To proceed, we require the following conditions to hold.
**Assumption 2**.:
_1. Suppose that_ \(K(\cdot)\) _is a symmetric and positive kernel function defined on_ \([-1,1]\)_, and_ \(\int_{-1}^{1}K(u)\mathrm{d}u=1\)_. Moreover,_ \(K(\cdot)\) _is Lipschitz continuous on_ \([-1,1]\)_. As_ \((T,h)\rightarrow(\infty,0)\)_,_ \(Th\rightarrow\infty\)_._
_2. \(\{\boldsymbol{\varepsilon}_{t}\}\) is a sequence of independent random variables. Let_ \(T^{1-\frac{4}{\delta}}h/(\log T)^{1-\frac{4}{\delta}}\rightarrow\infty\) _as_ \(T\rightarrow\infty\)_._
Assumption 2.1 imposes a set of regular conditions on the kernel function and the bandwidth. On top of Assumption 1.3, Assumption 2.2 imposes more structure on \(\{\boldsymbol{\varepsilon}_{t}\}\), and is used to derive Gaussian approximation for the sum of the time-varying VECM process. This condition can be relaxed if we impose more dependence structure (e.g., nonlinear system theory as in Wu 2005). Provided \(\delta>5\), the usual optimal bandwidth \(h_{opt}=O(T^{-1/5})\) satisfies the condition \(T^{1-\frac{4}{\delta}}h/(\log T)^{1-\frac{4}{\delta}}\rightarrow\infty\). In addition, Gaussian approximation together with \(T^{1-\frac{4}{\delta}}h/(\log T)^{1-\frac{4}{\delta}}\rightarrow\infty\) is used to derive the uniform convergence of our nonparametric estimators, which is further used for asymptotic covariance estimation, semiparametric estimation and model specification testing below.
With these conditions in hand, we further let
\[\widetilde{\mathbf{w}}_{t}(\tau)=[\widetilde{\mathbf{y}}_{t}^{\top}(\tau) \boldsymbol{\beta},\Delta\widetilde{\mathbf{x}}_{t}^{\top}(\tau)]^{\top}\quad \text{and}\quad\widetilde{\mathbf{x}}_{t}(\tau)=[\Delta\widetilde{\mathbf{y}}_ {t}^{\top}(\tau),\ldots,\Delta\widetilde{\mathbf{y}}_{t-p_{0}+2}^{\top}(\tau)] ^{\top},\]
and summarize the asymptotic properties of the local linear estimators in the following theorem.
**Theorem 2.1**.: _Let Assumptions 1-2 hold._
_1. If_ \(Th^{7}\to 0\)_, then for any_ \(\tau\in(0,1)\)_,_
\[\sqrt{Th}\mathrm{vec}\left([\widehat{\mathbf{\Pi}}(\tau),\widehat{\mathbf{ \Gamma}}(\tau)]-[\mathbf{\Pi}(\tau),\mathbf{\Gamma}(\tau)]-\frac{1}{2}h^{2} \widetilde{c}_{2}[\mathbf{\Pi}^{(2)}(\tau),\mathbf{\Gamma}^{(2)}(\tau)]\right) \rightarrow_{D}N\left(\mathbf{0},\widetilde{v}_{0}\mathbf{\Sigma}_{\mathrm{co} }(\tau)\right),\]
_where \(\mathbf{\Sigma}_{\rm co}(\tau)=\left({\rm diag}(\mathbf{\beta},\mathbf{I}_{d(p_{0}-1)})\left[E\left(\widetilde{\mathbf{w}}_{t}(\tau)\widetilde{\mathbf{w}}_{t}^{\top}(\tau)\right)\right]^{-1}{\rm diag}(\mathbf{\beta},\mathbf{I}_{d(p_{0}-1)})^{\top}\right)\otimes\mathbf{\Omega}(\tau)\);_
_2. \(\sup_{\tau\in[0,1]}|[\widehat{\mathbf{\Pi}}(\tau),\widehat{\mathbf{\Gamma}}(\tau)]-[ \mathbf{\Pi}(\tau),\mathbf{\Gamma}(\tau)]|=O_{P}\left(h^{2}+\sqrt{\log T/(Th)}\right)\);_
_3. \(\sup_{\tau\in[0,1]}|\widehat{\mathbf{\Sigma}}_{\rm co}(\tau)-\mathbf{\Sigma}_{\rm co}(\tau)|=O_{P}\left(h+\sqrt{\log T/(Th)}\right)\), where \(\widehat{\mathbf{\Sigma}}_{\rm co}(\tau)=\sum_{t=1}^{T}K\left(\frac{\tau_{t}-\tau}{h}\right)\mathbf{S}_{T,0}^{+}(\tau)\otimes\widehat{\mathbf{\Omega}}(\tau)\)._
Theorem 2.1 establishes the asymptotic distribution of \([\widehat{\mathbf{\Pi}}(\tau),\widehat{\mathbf{\Gamma}}(\tau)]\), and also gives the rates of uniform convergence that will be used extensively in the following development.
In what follows, we consider the estimation of the cointegration matrix by utilizing the reduced rank structure of \(\mathbf{\Pi}(\cdot)\) and the profile likelihood method (e.g., Fan & Huang 2005). To ensure a unique cointegration matrix, we assume
\[\mathbf{\beta}=\begin{bmatrix}\mathbf{I}_{r_{0}}\\ \mathbf{\beta}^{*}\end{bmatrix} \tag{2.4}\]
where \(\mathbf{\beta}^{*}\) is a \((d-r_{0})\times r_{0}\) matrix. Using (2.4), \(\widehat{\mathbf{\alpha}}(\tau)\) is the first \(r_{0}\) columns of \(\widehat{\mathbf{\Pi}}(\tau)\), i.e., \(\widehat{\mathbf{\Pi}}(\tau)=[\widehat{\mathbf{\alpha}}(\tau),\widehat{\mathbf{\Pi}}_{2} (\tau)]\), where the definition of \(\widehat{\mathbf{\Pi}}_{2}(\tau)\) is obvious.
Specifically, given \(\mathbf{\Pi}(\tau)\), we can estimate the short-run time-varying parameters \(\mathbf{\Gamma}(\tau)\) by
\[\widehat{\mathbf{\Gamma}}(\tau,\mathbf{\Pi}) =\sum_{s=1}^{T}(\Delta\mathbf{y}_{s}-\mathbf{\Pi}(\tau_{s})\mathbf{y}_ {s-1})\Delta\mathbf{x}_{s-1}^{*,\top}K\left(\frac{\tau_{s}-\tau}{h}\right) \tag{2.5}\] \[\times\left(\sum_{s=1}^{T}\Delta\mathbf{x}_{s-1}^{*}\Delta \mathbf{x}_{s-1}^{*,\top}K\left(\frac{\tau_{s}-\tau}{h}\right)\right)^{-1} \begin{bmatrix}\mathbf{I}_{d(p_{0}-1)}\\ \mathbf{0}_{d(p_{0}-1)}\end{bmatrix},\]
where \(\Delta\mathbf{x}_{t}^{*}=\Delta\mathbf{x}_{t}\otimes\left[1,\frac{\tau_{t+1} -\tau}{h}\right]^{\top}\). In connection with (2.4), we can write
\[\widetilde{\mathbf{r}}_{t}(\mathbf{\alpha})=\widetilde{\mathbf{R}}_{t}^{\top}( \mathbf{\alpha}){\rm vec}\left(\mathbf{\beta}^{*,\top}\right)+\mathbf{u}_{t}^{*},\]
where \(\mathbf{u}_{t}^{*}=\mathbf{u}_{t}+\left[\mathbf{\Gamma}(\tau_{t})-\widehat{\mathbf{ \Gamma}}(\tau_{t},\mathbf{\Pi})\right]\Delta\mathbf{x}_{t-1}\), \(\mathbf{r}_{t}(\mathbf{\alpha})=\Delta\mathbf{y}_{t}-\mathbf{\alpha}(\tau_{t}) \mathbf{y}_{t-1}^{(1)}\), \(\mathbf{y}_{t}^{(1)}\) contains the first \(r_{0}\) elements of \(\mathbf{y}_{t}\), \(\mathbf{r}^{\top}(\mathbf{\alpha})=[\mathbf{r}_{1}(\mathbf{\alpha}),\ldots,\mathbf{r}_ {T}(\mathbf{\alpha})]\),
\[\widetilde{\mathbf{r}}_{t}(\mathbf{\alpha})=\mathbf{r}_{t}(\mathbf{\alpha})-\mathbf{r}^{\top}(\mathbf{\alpha})\mathbf{k}(\tau_{t})\Delta\mathbf{x}^{*}\left(\Delta\mathbf{x}^{*,\top}\mathbf{k}(\tau_{t})\Delta\mathbf{x}^{*}\right)^{-1}\left[\mathbf{I}_{d(p_{0}-1)},\mathbf{0}_{d(p_{0}-1)}\right]^{\top}\Delta\mathbf{x}_{t-1},\]
\(\mathbf{k}(\tau)={\rm diag}\left[K\left(\frac{\tau_{1}-\tau}{h}\right),\ldots,K \left(\frac{\tau_{T}-\tau}{h}\right)\right]\), \(\Delta\mathbf{x}^{*,\top}=\left[\Delta\mathbf{x}_{0}^{*},\ldots,\Delta \mathbf{x}_{T-1}^{*}\right]\), \(\mathbf{R}_{t}(\mathbf{\alpha})=\mathbf{y}_{t-1}^{(2)}\otimes\mathbf{\alpha}^{\top}( \tau_{t})\), \(\mathbf{y}_{t}^{(2)}\) contains the last \(d-r_{0}\) elements of \(\mathbf{y}_{t}\),
\[\widetilde{\mathbf{R}}_{t}(\mathbf{\alpha})=\mathbf{R}_{t}(\mathbf{\alpha})-\mathbf{R} ^{\top}(\mathbf{\alpha})\mathbf{K}(\tau_{t})\Delta\mathbf{X}^{*}\left(\Delta \mathbf{X}^{*,\top}\mathbf{K}(\tau_{t})\Delta\mathbf{X}^{*}\right)^{-1}\left[ \mathbf{I}_{d^{2}(p_{0}-1)},\mathbf{0}_{d^{2}(p_{0}-1)}\right]^{\top}\Delta \mathbf{X}_{t-1},\]
\(\mathbf{R}^{\top}(\mathbf{\alpha})=[\mathbf{R}_{1}(\mathbf{\alpha}),\ldots,\mathbf{R}_{T}( \mathbf{\alpha})]\), \(\Delta\mathbf{X}^{*,\top}=\left[\Delta\mathbf{X}_{0}^{*},\ldots,\Delta\mathbf{X} _{T-1}^{*}\right]\), \(\Delta\mathbf{X}_{t}=\Delta\mathbf{x}_{t}\otimes\mathbf{I}_{d}\), \(\Delta\mathbf{X}_{t}^{*}=\Delta\mathbf{x}_{t}^{*}\otimes\mathbf{I}_{d}\) and \(\mathbf{K}(\tau)=\mathbf{k}(\tau)\otimes\mathbf{I}_{d}\). Replacing \(\mathbf{\alpha}(\tau)\) with \(\widehat{\mathbf{\alpha}}(\tau)\), the weighted least squares (WLS) estimator of \(\mathbf{\beta}^{*}\) is given by
\[\mathrm{vec}\left[\widehat{\mathbf{\beta}}^{*,\top}\right]=\left(\sum_{t=1}^{T} \widetilde{\mathbf{R}}_{t}(\widehat{\mathbf{\alpha}})\widehat{\mathbf{\Omega}}^{ -1}(\tau_{t})\widetilde{\mathbf{R}}_{t}^{\top}(\widehat{\mathbf{\alpha}})\right) ^{-1}\sum_{t=1}^{T}\widetilde{\mathbf{R}}_{t}(\widehat{\mathbf{\alpha}})\widehat {\mathbf{\Omega}}^{-1}(\tau_{t})\widetilde{\mathbf{r}}_{t}(\widehat{\mathbf{ \alpha}}). \tag{2.6}\]
The next theorem summarizes the asymptotic distribution of \(\widehat{\mathbf{\beta}}^{*}\).
**Theorem 2.2**.: _Let Assumptions 1-2 hold. Suppose further that \(\frac{Th^{2}}{(\log T)^{2}}\to\infty\) and \(Th^{6}\to 0\), then_
_1. \(\mathrm{Tvec}\left[\widehat{\mathbf{\beta}}^{*,\top}-\mathbf{\beta}^{*,\top}\right] \to_{D}\left(\int_{0}^{1}\mathbf{W}_{d-r_{0}}(u)\mathbf{W}_{d-r_{0}}^{\top}(u )\otimes\mathbf{\alpha}^{\top}(u)\mathbf{\Omega}^{-1}(u)\mathbf{\alpha}(u)\mathrm{d}u \right)^{-1}\int_{0}^{1}\mathbf{W}_{d-r_{0}}(u)\otimes\mathrm{d}\mathbf{W}_{r_ {0}}(u)\),_
_2. \(\left(\sum_{t=1}^{T}\mathbf{y}_{t-1}^{(2)}\mathbf{y}_{t-1}^{(2),\top}\otimes \widehat{\mathbf{\alpha}}^{\top}(\tau_{t})\widehat{\mathbf{\Omega}}^{-1}(\tau_{t} )\widehat{\mathbf{\alpha}}(\tau_{t})\right)^{1/2}\mathrm{vec}\left[\widehat{\mathbf{ \beta}}^{*,\top}-\mathbf{\beta}^{*,\top}\right]\to_{D}N\left(\mathbf{0},\mathbf{I} _{(d-r_{0})r_{0}}\right)\),_
_where \(\mathbf{W}_{d-r_{0}}(u)=\left[\mathbf{0}_{(d-r_{0})\times r_{0}},\mathbf{I}_{ d-r_{0}}\right]\mathbf{W}_{d}(u,\mathbf{\Sigma}_{\mathbf{y}}(u))\), \(\mathbf{W}_{r_{0}}(u)=\mathbf{W}_{r_{0}}(u,\mathbf{\alpha}^{\top}(u)\mathbf{ \Omega}^{-1}(u)\mathbf{\alpha}(u))\), and \(\mathbf{W}_{d-r_{0}}(\cdot)\) is independent of \(\mathbf{W}_{r_{0}}(\cdot)\)._
The first result shows that the cointegration matrix can be estimated at a super-consistent rate \(T\), while the second result, concerning the asymptotic normality of a properly standardized form of \(\widehat{\mathbf{\beta}}^{*}-\mathbf{\beta}^{*}\), indicates how to construct confidence intervals in practice.
### On Lag Length and Cointegration Rank
We now consider the estimation of \(p_{0}\) and \(r_{0}\). Specifically, we first propose an information criterion that can estimate the lag length (\(p_{0}\)), and then consider the estimation of cointegration rank (\(r_{0}\)).
To estimate \(p_{0}\), we minimize an information criterion as follows:
\[\widehat{p}=\underset{1\leq p\leq P}{\mathrm{argmin}}\ \mathrm{IC}(p), \tag{2.7}\]
where \(\mathrm{IC}(p)=\log\left\{\mathrm{RSS}(p)\right\}+p\cdot\chi_{T}\), \(\mathrm{RSS}(p)=\frac{1}{T}\sum_{t=1}^{T}\widehat{\mathbf{u}}_{p,t}^{\top}\widehat{\mathbf{u}}_{p,t}\), \(\chi_{T}\) is the penalty term, \(\widehat{\mathbf{u}}_{p,t}\) is the value of \(\widehat{\mathbf{u}}_{t}\) obtained by setting the number of lagged differences to \(p-1\), and \(P\) is a sufficiently large fixed positive integer. The following theorem summarizes the asymptotic property of (2.7).
**Theorem 2.3**.: _Let Assumptions 1-2 hold. Suppose that \(\chi_{T}\to 0\) and \(c_{T}^{-2}\chi_{T}\to\infty\), where \(c_{T}=h^{2}+\left(\frac{\log T}{Th}\right)^{1/2}\). Then \(\Pr\left(\widehat{p}=p_{0}\right)\to 1\) as \(T\to\infty\)._
It is noteworthy that Theorem 2.3 does not require any knowledge of cointegration rank. In view of the conditions on \(\chi_{T}\), a natural choice is
\[\chi_{T}=\frac{\log(\log(Th))}{3}\left(h^{4}+h^{2}\left(\frac{\log T}{Th}\right) ^{1/2}+\frac{\log T}{Th}\right).\]
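As a schematic illustration (not the authors' code), the lag-length selection in (2.7) with the penalty above could be implemented as follows; the helper `rss(p)`, returning \(\mathrm{RSS}(p)\) from the local linear fit with \(p-1\) lagged differences, is assumed to be available.

```python
import numpy as np

def select_lag(rss, T, h, P_max):
    """Select the lag length by minimising IC(p) = log RSS(p) + p * chi_T, eq. (2.7)."""
    # penalty term suggested below eq. (2.7)
    chi_T = (np.log(np.log(T * h)) / 3.0) * (
        h**4 + h**2 * np.sqrt(np.log(T) / (T * h)) + np.log(T) / (T * h)
    )
    ic = [np.log(rss(p)) + p * chi_T for p in range(1, P_max + 1)]
    return int(np.argmin(ic)) + 1   # argmin is 0-based, candidate lags start at 1
```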
We next consider the estimation of the cointegration rank \(r_{0}\). The basic principle of our method is to separate the \(r_{0}\) relevant singular values of \(\mathbf{\Pi}(\tau)\) from the zero ones, where the number of nonzero singular values corresponds to the cointegration rank. Before proceeding further, it should be pointed out that the choice of the lag length \(p_{0}\) is irrelevant to the determination of the cointegration rank, since we can set the lag length \(P\) to be sufficiently large but fixed (i.e., \(p_{0}\leq P\)), in which case \(\mathbf{\Gamma}_{j}(\tau)=\mathbf{0}_{d}\) for \(j=p_{0},...,P\).
The method is based on the QR decomposition with column-pivoting of \(\int_{0}^{1}\mathbf{\Pi}^{\top}(\tau)\mathrm{d}\tau\), i.e., \(\int_{0}^{1}\mathbf{\Pi}^{\top}(\tau)\mathrm{d}\tau=\mathbf{\beta}\int_{0}^{1}\mathbf{ \alpha}^{\top}(\tau)\mathrm{d}\tau=\mathbf{\mathrm{S}}\mathbf{R}\), where \(\mathbf{S}^{\top}\mathbf{S}=\mathbf{I}_{d}\), and \(\mathbf{R}\) is an upper triangular matrix with the diagonal elements being nonincreasing. The estimator of \(\int_{0}^{1}\mathbf{\Pi}(\tau)\mathrm{d}\tau\) is naturally given by \(\int_{0}^{1}\widehat{\mathbf{\Pi}}(\tau)\mathrm{d}\tau\) and its QR decomposition with column-pivoting is defined as
\[\int_{0}^{1}\widehat{\mathbf{\Pi}}(\tau)\mathrm{d}\tau=\widehat{\mathbf{R}}^{\top }\widehat{\mathbf{S}}^{\top}=\begin{bmatrix}\widehat{\mathbf{R}}_{11}^{\top}& \mathbf{0}_{r_{0}\times(d-r_{0})}\\ \widehat{\mathbf{R}}_{12}^{\top}&\widehat{\mathbf{R}}_{22}^{\top}\end{bmatrix} \cdot\begin{bmatrix}\widehat{\mathbf{S}}_{1}^{\top}\\ \widehat{\mathbf{S}}_{2}^{\top}\end{bmatrix},\]
where \(\widehat{\mathbf{S}}\) is \(d\times d\) orthonormal, and the partition in the second step should be obvious, so we omit the descriptions of each block. By Theorem 2.1, \(\int_{0}^{1}\widehat{\mathbf{\Pi}}(\tau)\mathrm{d}\tau\rightarrow_{P}\int_{0}^{1}\mathbf{\Pi}(\tau)\mathrm{d}\tau\). Therefore, \(\widehat{\mathbf{R}}_{22}\) is expected to be small, which motivates the following procedure. Let \(\widehat{\mu}_{k}=\sqrt{\sum_{j=k}^{d}\widehat{\mathbf{R}}^{2}(k,j)}\), where \(\widehat{\mathbf{R}}(k,j)\) denotes the element in the \(k^{th}\) row and \(j^{th}\) column of \(\widehat{\mathbf{R}}\).
We consider the following singular value ratio test, taking a suggestion from the literature (e.g., Lam & Yao 2012, Zhang et al. 2019):
\[\widehat{r}=\underset{0\leq r\leq d-1}{\operatorname{argmax}}\left(\frac{ \widehat{\mu}_{r}}{\widehat{\mu}_{r+1}}I\left(\widehat{\mu}_{r}\geq w_{T} \right)+I\left(\widehat{\mu}_{r}<w_{T}\right)\right) \tag{2.8}\]
where \(\widehat{\mu}_{0}=\widehat{\mu}_{1}+w_{T}\) is the "mock" singular value since \(\frac{\widehat{\mu}_{r}}{\widehat{\mu}_{r+1}}\) is not defined for \(r=0\), \(w_{T}=\frac{\log T}{Th}\log(\log(Th))\) and \(\widehat{\mu}_{1}\geq\cdots\geq\widehat{\mu}_{d}\). In addition, the indicator function is used to ensure that the estimator \(\widehat{r}\) is consistent. Note that similar to the case of eigenvalue ratio test for the factor model, both the numerator and denominator converge to zeros at the same rate when \(r>r_{0}\). Therefore, if \(\widehat{\mu}_{r}\) is "small", we take it as a sign of \(r>r_{0}\) and set \(\widehat{\mu}_{r}/\widehat{\mu}_{r+1}\) to one.
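A minimal sketch of the rank estimator (2.8) is given below; it assumes the integral \(\int_{0}^{1}\widehat{\mathbf{\Pi}}(\tau)\mathrm{d}\tau\) has already been approximated (e.g., by the sample average of \(\widehat{\mathbf{\Pi}}(\tau_{t})\), which is our simplification), and it uses SciPy's QR decomposition with column pivoting.

```python
import numpy as np
from scipy.linalg import qr

def estimate_rank(Pi_bar, T, h):
    """Cointegration-rank estimator of eq. (2.8).

    Pi_bar : d x d matrix approximating the integral of Pi_hat(tau) over [0, 1].
    """
    d = Pi_bar.shape[0]
    # QR decomposition with column pivoting of the transposed integrated matrix
    _, R, _ = qr(Pi_bar.T, pivoting=True)
    # mu_k = sqrt(sum_{j >= k} R(k, j)^2), k = 1, ..., d (1-based in the paper)
    mu = np.array([np.sqrt(np.sum(R[k, k:] ** 2)) for k in range(d)])
    w_T = np.log(T) / (T * h) * np.log(np.log(T * h))
    mu_ext = np.concatenate(([mu[0] + w_T], mu))   # prepend the "mock" value mu_0
    crit = [
        mu_ext[r] / mu_ext[r + 1] if mu_ext[r] >= w_T else 1.0
        for r in range(d)                          # r = 0, ..., d-1
    ]
    return int(np.argmax(crit))
```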
The following theorem summarizes the asymptotic property of (2.8).
**Theorem 2.4**.: _Let Assumptions 1-2 hold. Then \(\Pr\left(\widehat{r}=r_{0}\right)\to 1\) as \(T\rightarrow\infty\)._
If \(r_{0}=0\), \(\mathbf{y}_{t}\) is a pure unit root process with time-varying vector autoregressive errors. Hence, our procedure is also able to test the existence of cointegration relationship.
### Testing for Parameter Stability
Practically, it is necessary to test whether the coefficients of (1.1) are time-varying before applying the aforementioned framework. Formally, we consider a hypothesis test of the form:
\[\mathbb{H}_{0}:\mathbf{C}\mathbf{b}(\cdot)=\mathbf{c}\text{ for some unknown }\mathbf{c}\in\mathbb{R}^{s}, \tag{2.9}\]
where \(\mathbf{b}(\tau)=\operatorname{vec}(\boldsymbol{\alpha}(\tau),\boldsymbol{ \Gamma}(\tau))\), \(\mathbf{C}\) is a selection matrix of full row rank, and \(s\) is the number of restrictions. The choice of \(\mathbf{C}\) and \(\mathbf{c}\) should be theory/data driven. For example, one can let \(\mathbf{C}=\left[\mathbf{I}_{r_{0}},\mathbf{0}_{r_{0}\times(d-1)r_{0}+d^{2}( p_{0}-1)}\right]\) and \(\mathbf{c}=\mathbf{0}\) to test whether there exists an error-correction term for \(\Delta y_{1,t}\) over the long-run.
The test statistic is constructed based on the weighted integrated squared errors:
\[\widehat{Q}_{\mathbf{C},\mathbf{H}}=\frac{1}{T}\sum_{t=1}^{T}\left\{\mathbf{ C}\widehat{\mathbf{b}}(\tau_{t})-\widehat{\mathbf{c}}\right\}^{\top} \mathbf{H}(\tau_{t})\left\{\mathbf{C}\widehat{\mathbf{b}}(\tau_{t})-\widehat{ \mathbf{c}}\right\}, \tag{2.10}\]
where \(\widehat{\mathbf{b}}(\tau)=\operatorname{vec}(\widehat{\boldsymbol{\alpha}}( \tau),\widehat{\boldsymbol{\Gamma}}(\tau))\) should be obvious, and \(\widehat{\mathbf{c}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{C}\widehat{\mathbf{b}}( \tau_{t})\) is the semiparametric estimator of \(\mathbf{c}\). In (2.10), \(\mathbf{H}(\cdot)\) is an \(s\times s\) positive definite weighting matrix, and is typically set as the precision matrix associated with \(\widehat{\mathbf{b}}(\cdot)\). We present the asymptotic distribution of the semiparametric estimator \(\widehat{\mathbf{c}}\) and the proposed test in the following theorem.
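For concreteness, a minimal sketch of the statistic (2.10) is given below; the arrays of fitted values \(\widehat{\mathbf{b}}(\tau_{t})\) and weighting matrices \(\mathbf{H}(\tau_{t})\) are assumed to have been computed beforehand.

```python
import numpy as np

def stability_statistic(b_hat, C, H):
    """Weighted integrated squared errors of eq. (2.10).

    b_hat : (T, q) array whose rows are b_hat(tau_t)
    C     : (s, q) selection matrix of full row rank
    H     : (T, s, s) array of positive definite weighting matrices H(tau_t)
    """
    Cb = b_hat @ C.T                 # (T, s): C b_hat(tau_t)
    c_hat = Cb.mean(axis=0)          # semiparametric estimator of c
    dev = Cb - c_hat
    # (1/T) sum_t dev_t' H(tau_t) dev_t
    return np.mean(np.einsum('ti,tij,tj->t', dev, H, dev))
```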
**Theorem 2.5**.: _Let Assumptions 1-2 hold. Suppose further that \(\frac{Th^{2}}{(\log T)^{2}}\to\infty\) and \(Th^{5.5}\to 0\). Then_
1. \(\sqrt{T}\left(\widehat{\mathbf{c}}-\mathbf{c}-\frac{1}{2}h^{2}\widetilde{c}_{ 2}\int_{0}^{1}\mathbf{C}\mathbf{b}^{(2)}(\tau)\mathrm{d}\tau\right)\to_{D}N \left(\mathbf{0},\int_{0}^{1}\mathbf{C}(\mathbf{\Sigma}_{\mathbf{w}}^{-1}( \tau)\otimes\boldsymbol{\Omega}(\tau))\mathbf{C}^{\top}\mathrm{d}\tau\right)\)_;_
2. _Under_ \(\mathbb{H}_{0}\)_,_ \(T\sqrt{h}\left(\widehat{Q}_{\mathbf{C},\widehat{\mathbf{H}}}-\frac{1}{Th}s\widetilde{v}_{0}\right)\to_{D}N(0,4sC_{B}),\) _where_ \(\widehat{\mathbf{H}}(\tau)=(\mathbf{C}\widehat{\mathbf{V}}_{\mathbf{b}}(\tau)\mathbf{C}^{\top})^{-1}\)_,_ \(\widehat{\mathbf{V}}_{\mathbf{b}}(\tau)=\widehat{\mathbf{\Sigma}}_{\mathbf{w}}^{-1}(\tau)\otimes\widehat{\boldsymbol{\Omega}}(\tau)\)_, and_ \(C_{B}=\int_{0}^{1}\left(\int_{0}^{1-v}K(u)K(u+v)\mathrm{d}u\right)^{2}\mathrm{d}v\)_._
In Theorem 2.5.1, the bias term \(\frac{1}{2}h^{2}\widetilde{c}_{2}\int_{0}^{1}\mathbf{C}\mathbf{b}^{(2)}(\tau) \mathrm{d}\tau\) vanishes under \(\mathbb{H}_{0}\), and thus the parametric component in the corresponding semiparametric model can have a \(\sqrt{T}\)-consistent estimate \(\widehat{\mathbf{c}}\). Theorem 2.5.2 states that the test statistic converges to a normal distribution and is asymptotically pivotal. The bias term \(s\widetilde{v}_{0}\) can easily be calculated for any given kernel function, and it arises due to the quadratic form of the test statistic.
To close our theoretical investigation, we note further that the online supplementary Appendix A.1 provides a local alternative of the parameter stability test, and also gives a simulation-assisted testing procedure to improve the finite sample performance of the test.
## 3 Simulation
In this section, we first provide some details of the numerical implementation in Section 3.1, and then respectively examine the estimation and hypothesis testing in Sections 3.2 and 3.3.
### Numerical Implementation
Throughout the numerical studies, Epanechnikov kernel (i.e., \(K(u)=0.75(1-u^{2})I(|u|\leq 1)\)) is adopted. The optimal lag length and the cointegration rank are estimated based on (2.7) and (2.8) respectively. For each given \(p\) of (2.7), the bandwidth \(\widehat{h}_{cv}\) is always chosen by minimizing the following leave-one-out cross-validation criterion function:
\[\widehat{h}_{cv}=\arg\min_{h}\sum_{t=1}^{T}\left\|\Delta\mathbf{y}_{t}- \widehat{\mathbf{\Pi}}_{-t}(\tau_{t})\mathbf{y}_{t-1}-\sum_{j=1}^{p-1}\widehat {\mathbf{\Gamma}}_{j,-t}(\tau_{t})\Delta\mathbf{y}_{t-j}\right\|^{2}, \tag{3.1}\]
where \(\widehat{\mathbf{\Pi}}_{-t}(\cdot)\) and \(\widehat{\mathbf{\Gamma}}_{j,-t}(\cdot)\) are obtained based on the local linear estimator of Section 2.1 but leaving the \(t^{th}\) observation out. Once \(\widehat{p}\), \(\widehat{r}\) and \(\widehat{h}_{cv}\) are obtained, the estimation procedure is relatively straightforward. As shown in Richter & Dahlhaus (2019), the leave-one-out cross validation method works well as long as the error terms are uncorrelated, which implies that this desirable property should hold in our case.
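A schematic implementation of the bandwidth search in (3.1) might look as follows; the helper `loo_residual(h)`, returning the leave-one-out residuals for a given bandwidth, is an assumption and stands in for the local linear fit described above.

```python
import numpy as np

def cv_bandwidth(loo_residual, h_grid):
    """Pick the bandwidth minimising the leave-one-out criterion (3.1).

    loo_residual(h) is assumed to return the T x d matrix of leave-one-out
    residuals Delta y_t minus its fitted value, computed with bandwidth h.
    """
    scores = [np.sum(loo_residual(h) ** 2) for h in h_grid]
    return h_grid[int(np.argmin(scores))]

# usage with a bandwidth grid proportional to T^{-1/5}, cf. (3.5)
# h_grid = [a * T ** (-0.2) for a in np.linspace(0.6, 2.0, 15)]
```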
### Examining the Estimation Results
The data generating process (DGP) is as follows:
\[\Delta\mathbf{y}_{t}=\boldsymbol{\alpha}(\tau_{t})\boldsymbol{\beta}^{\top} \mathbf{y}_{t-1}+\mathbf{\Gamma}_{1}(\tau_{t})\Delta\mathbf{y}_{t-1}+\mathbf{ u}_{t}\ \ \text{with}\ \ \mathbf{u}_{t}=\boldsymbol{\omega}(\tau_{t})\boldsymbol{\varepsilon}_{t}\ \ \text{for}\ \ t=1,\ldots,T, \tag{3.2}\]
where \(T\in\{200,400,800\}\), \(\boldsymbol{\varepsilon}_{t}\)'s are i.i.d. draws from \(N(\mathbf{0}_{2\times 1},\mathbf{I}_{2})\), and
\[\mathbf{\Gamma}_{1}(\tau) =\begin{bmatrix}0.5\exp\{\tau-0.5\}&-0.2\exp\{\tau-1\}\\ -0.2\cos\pi\tau&0.6\exp\{-\tau-0.5\}\end{bmatrix}, \tag{3.3}\] \[\boldsymbol{\omega}(\tau) =\begin{bmatrix}0.8\exp\{-0.5\tau\}+0.5&0\\ 0.1\exp\{0.5-\tau\}&0.5(\tau-0.5)^{2}+1\end{bmatrix}.\]
In this case, we have \(p_{0}=2\). To test the null hypothesis of no cointegration relations, we consider two sets of \(\boldsymbol{\alpha}(\tau)\) and \(\boldsymbol{\beta}\):
DGP 1 -- \(\boldsymbol{\alpha}(\tau)=[0.2\sin(\tau)-0.5,0.2\cos(\tau)+0.4]^{\top}\) and \(\boldsymbol{\beta}=[1,-0.8]^{\top}\), so there exists one cointegration relationship, i.e. \(r_{0}=1\).
DGP 2 -- \(\boldsymbol{\alpha}(\tau)=\boldsymbol{\beta}=0\), so \(r_{0}=0\) and \(\mathbf{y}_{t}\) is a pure unit-root process with time-varying vector autoregressive errors.
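As an illustration only, DGP 1 can be simulated as in the following sketch; the function name, the burn-in handling and the random seed are our own choices.

```python
import numpy as np

def simulate_dgp1(T, burn=50, seed=0):
    """Simulate DGP 1 of Section 3.2: time-varying VECM(1) with one cointegrating vector."""
    rng = np.random.default_rng(seed)
    beta = np.array([1.0, -0.8])
    y = np.zeros((T + burn + 1, 2))
    dy_prev = np.zeros(2)
    for t in range(1, T + burn + 1):
        tau = (t - burn) / T if t > burn else 1.0 / T   # freeze tau at 1/T during burn-in
        alpha = np.array([0.2 * np.sin(tau) - 0.5, 0.2 * np.cos(tau) + 0.4])
        Gamma1 = np.array([[0.5 * np.exp(tau - 0.5), -0.2 * np.exp(tau - 1.0)],
                           [-0.2 * np.cos(np.pi * tau), 0.6 * np.exp(-tau - 0.5)]])
        omega = np.array([[0.8 * np.exp(-0.5 * tau) + 0.5, 0.0],
                          [0.1 * np.exp(0.5 - tau), 0.5 * (tau - 0.5) ** 2 + 1.0]])
        u = omega @ rng.standard_normal(2)               # u_t = omega(tau_t) * eps_t
        dy = alpha * (beta @ y[t - 1]) + Gamma1 @ dy_prev + u
        y[t] = y[t - 1] + dy
        dy_prev = dy
    return y[burn + 1:]
```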
For each generated dataset, we carry on the methodologies documented in Section 2, and conduct 1000 replications.
First, we evaluate the performance of the lag length selection procedure (i.e., (2.7)), and report the percentages of \(\widehat{p}<2\), \(\widehat{p}=2\), and \(\widehat{p}>2\) respectively based on 1000 replications. Table 1 shows that the information criterion (2.7) performs reasonably well, as the percentages associated with \(\widehat{p}=2\) are sufficiently close to 1 except for the case with \(T=200\). In addition, this information criterion works well in both time-varying VECM (DGP 1) and time-varying VAR models (DGP 2).
INSERT TABLE 1 ABOUT HERE
Next, we evaluate the performance of the cointegration rank estimator (2.8) and report the percentages of \(\widehat{r}=0\) and \(\widehat{r}=1\), respectively, based on 1000 replications. Table 2 shows that the singular value ratio method of (2.8) performs reasonably well. However, when \(r_{0}=0\), the estimator (2.8) tends to identify a false cointegration relationship for a small sample size (i.e., \(T=200\)).
INSERT TABLE 2 ABOUT HERE
Finally, we evaluate the estimates of \(\boldsymbol{\alpha}(\tau)\), \(\boldsymbol{\beta}\) and \(\boldsymbol{\Gamma}_{1}(\tau)\) for DGP 1, and calculate the root mean square error (RMSE) as follows
\[\left\{\frac{1}{1000T}\sum_{n=1}^{1000}\sum_{t=1}^{T}\|\widehat{\boldsymbol{ \theta}}^{(n)}(\tau_{t})-\boldsymbol{\theta}(\tau_{t})\|^{2}\right\}^{1/2}\]
for \(\boldsymbol{\theta}(\cdot)\in\{\boldsymbol{\alpha}(\cdot),\boldsymbol{\Gamma} (\cdot)\}\), where \(\widehat{\boldsymbol{\theta}}^{(n)}(\tau)\) is the estimate of \(\boldsymbol{\theta}(\tau)\) for the \(n\)-th replication. Of interest, we also examine the finite sample coverage probabilities of the confidence intervals based on our asymptotic theories. In the following, we compute the average of coverage probabilities for grid points in \(\{\tau_{t},t=1,\ldots,T\}\), and then further take an average across the elements of \(\boldsymbol{\theta}(\cdot)\). The RMSEs and empirical coverage probabilities are reported in Table 3, which reveals several notable points. First, the RMSE decreases as the sample size goes up. Second, the RMSE of \(\boldsymbol{\beta}\) is much smaller than those of \(\boldsymbol{\alpha}(\tau)\) and \(\boldsymbol{\Gamma}_{1}(\tau)\), which should be expected. Third, the finite sample coverage probabilities are smaller than their nominal level (95%) for small \(T\), but are fairly close to 95% as \(T\) increases.
INSERT TABLE 3 ABOUT HERE
### Examining the Parameter Stability Test
To evaluate the size and local power of the proposed test statistic, we consider the following DGP:
\[\Delta\mathbf{y}_{t}=\boldsymbol{\alpha}(\tau_{t})\boldsymbol{\beta}^{\top} \mathbf{y}_{t-1}+\boldsymbol{\Gamma}_{1}(\tau_{t})\Delta\mathbf{y}_{t-1}+ \mathbf{u}_{t}, \tag{3.4}\]
where \(\boldsymbol{\beta}\), \(\boldsymbol{\Gamma}_{1}(\cdot)\) and \(\mathbf{u}_{t}\) are generated in the same way as DGP 1 in Section 3.2, and
\[\boldsymbol{\alpha}(\tau)=\begin{bmatrix}-0.4\\ 0.4\end{bmatrix}+b\times d_{T}\times\begin{bmatrix}\sin(\tau)\\ \cos(\pi\tau)\end{bmatrix},\]
in which \(d_{T}=T^{-1/2}h^{-1/4}\) and \(b\) is set to be 0, 1 or 2 in order to investigate the size and local power of the proposed test. We use the proposed testing procedure to test whether the coefficient \(\boldsymbol{\alpha}(\cdot)\) is time-varying. Again, we let \(T\in\{200,400,800\}\) and conduct 1000 replications for each choice of \(T\). We use the simulation-assisted testing procedure of Appendix A.1 to obtain the empirical critical value \(\widehat{q}_{1-\alpha}\) after 1000 bootstrap replications. To check the robustness of the proposed test with respect to the bandwidth, we consider the following sequence of bandwidths:
\[h=\alpha_{1}T^{-1/5},\quad\alpha_{1}=0.6,\ldots,2. \tag{3.5}\]
Table 4 reports the rejection rates at the 5% and 10% nominal levels. A few facts emerge. First, our test has reasonable sizes using the empirical critical values obtained by the bootstrap procedure if the sample size is not so small. Second, the size behaviour of our test is not sensitive to the choices of bandwidths. As discussed in Gao & Gijbels (2008), the estimation-based optimal bandwidths may also be optimal for testing purposes, so for simplicity we use the cross-validation based bandwidth or the rule-of-thumb bandwidth in practice. Third, the local power of our test increases rapidly as \(b\) increases.
INSERT TABLE 4 ABOUT HERE
## 4 A Real Data Example
In this section, we assess the time-varying predictability of the term structure (i.e., the yield curve) of interest rates, and investigate whether the expectations hypothesis of the term structure holds periodically by using the proposed time-varying VECM model, which allows for shifts in the predictability of the term structure.
We now briefly review the literature. The term structure is crucial to both monetary policy analysis and private individuals. According to the rational expectations hypothesis (Campbell & Shiller 1987), the term structure (or term spread) should provide information on the future changes in both short-term and long-term interest rates. For example,
if a long bond yield exceeds a short yield, the long rate subsequently tends to rise, which generates expected capital losses on the long bond and thus offsets the current yield advantage. This also implies that bond returns are predictable from the yield spread, and the expected bond returns and the yield spread should be negatively correlated.
The literature on the term structure and bond return predictability is enormous and it continues to expand (e.g., Campbell & Shiller 1987, Bauer & Rudebusch 2020, Andreasen et al. 2021, Vayanos & Vila 2021, He et al. 2022). However, the existing results on the expectations hypothesis of the term structure present many discrepancies, which may be due to the fact that the relationship evolves with time. For example, Borup et al. (2021) document that bond return predictability depends on the economic states, while linear forecasting models yield little evidence of unconditional predictability. Along this line of research, one important question is whether the U.S. monetary and financial system has changed over time, so that estimates from historical data are unreliable for modern policy analysis and investment activities. Although the literature has begun to explore whether the predictability of the term structure depends on the economic states (e.g., Andreasen et al. 2021, Borup et al. 2021), few studies aim to quantify the varying predictability of the term structure over time. In what follows, we address this issue using the newly proposed framework. The estimation procedure is conducted in exactly the same way as in Section 3, so we do not repeat the details.
### Empirical Analysis
To study the time-varying predictability of the term structure, we consider the following time-varying bivariate \(\text{VECM}(p_{0}-1)\) model:
\[\Delta\mathbf{y}_{t}=\boldsymbol{\alpha}(\tau_{t})\boldsymbol{\beta}^{\top} \mathbf{y}_{t-1}+\sum_{j=1}^{p_{0}-1}\boldsymbol{\Gamma}_{j}(\tau_{t})\Delta \mathbf{y}_{t-j}+\mathbf{u}_{t},\quad\mathbf{u}_{t}=\boldsymbol{\omega}(\tau_ {t})\boldsymbol{\varepsilon}_{t},\]
where \(\mathbf{y}_{t}^{\top}=[l_{t},s_{t}]\), \(s_{t}\) is the interest rate on a one-period bond and \(l_{t}\) is the interest rate on a multi-period bond. According to the expectations hypothesis of the term structure, \(l_{t}\) and \(s_{t}\) should be cointegrated with \(\boldsymbol{\beta}^{\top}=[1,-1]\) (and thus \(\beta^{*}=-1\) under the identification condition), while the error-correction term is the term spread. In addition, the adjustment coefficient \(\boldsymbol{\alpha}(\tau)\) measures the predictability of the term structure, and the elements of \(\boldsymbol{\alpha}(\tau)\) should be significantly positive according to the theory.
We use a selection of bond rates with maturities ranging from 1 to 5 years. The interest rates are estimated from the prices of U.S. Treasury securities and correspond to zero-coupon bonds. The data are monthly observations from 1961:M6 to 2022:M12, which are collected from Nasdaq Data Link at [https://data.nasdaq.com](https://data.nasdaq.com). Figure 1 plots these five variables.
### Constant Parameter VECM
We begin by presenting the results using a constant parameter VECM model in order to get some intuition, which also serves as a benchmark for evaluating our time-varying VECM model. Recall that one of the economic implications of the expectations hypothesis of the term structure is that the long and short rates should be cointegrated with \(\mathbf{\beta}=[1,-1]^{\top}\). To empirically test this implication, we first detect the presence of cointegration using Johansen's likelihood ratio test. The testing results are reported in Table 5. From Table 5, for all possible bivariate pairs and lag lengths considered, the tests strongly reject the null of no cointegration, but do not reject the null of a single cointegration relationship at the 5% significance level.
INSERT TABLE 5 ABOUT HERE
We then report the parameter estimates for the constant VECM models, where the optimal lag \(p\) is set to 2 according to the literature (e.g., Hansen 2003). Table 6 reports the parameter estimates as well as their 95% confidence intervals, in which \(\alpha_{i}\) denotes the \(i^{th}\) element of the adjustment coefficient \(\mathbf{\alpha}\). From Table 6, we can see that for all considered bivariate pairs, the estimated cointegration parameter \(\widehat{\beta}^{*}\) is quite close to unity, which is consistent with the theory. However, for all bivariate pairs considered, the estimates \(\widehat{\alpha}_{1}\) and \(\widehat{\alpha}_{2}\) are not significantly positive (or are even negative), which contradicts the economic implications of the term structure theory. Note that according to the expectations hypothesis, the sign of the adjustment coefficients \(\mathbf{\alpha}\) should be positive. These results are in line with Borup et al. (2021), who find that linear forecasting models yield little evidence of unconditional bond return predictability.
INSERT TABLE 6 ABOUT HERE
### Time-Varying VECM
In order to solve the puzzle raised by the constant parameter VECM models, we then investigate the possibility that the expectations hypothesis of the term structure holds periodically. This is mainly motivated by the following facts. First, several studies have shown that bond return predictability depends on the economic states (e.g., Andreasen et al. 2021, Borup et al. 2021). Second, the rational expectations hypothesis also implies that future bond returns should be negatively correlated with the current term spread (i.e., \(l_{t}-s_{t}\)). These two facts suggest that the predictability of the term structure shifts over time, and that the above puzzling contradictions may be explained by using time-varying VECM models.
Table 7 reports the estimation and testing results of the time-varying VECM models, as well as some robustness checks. For the time-varying VECM(\(p-1\)) model, the optimal lag is \(\widehat{p}=2\) or \(\widehat{p}=3\) by our approach, which is consistent with the literature (e.g., Hansen & Seo 2002). We further check whether the model coefficients are indeed time-varying by employing the proposed test statistic to examine the constancy of the model coefficients. The associated \(p\)-values for the considered bivariate pairs are around 0.000-0.001, which suggests that we should choose the time-varying VECM model over a constant one. Certainly, one may examine each element of these coefficient matrices; however, this would lead to a quite lengthy presentation, so in order not to deviate from our main goal, we do not conduct further testing along this line. These testing results also suggest that the predictability of the term structure is time-varying for a range of short and long rates. We then conduct a robustness check to see whether the error innovations \(\{\boldsymbol{\varepsilon}_{t}\}\) exhibit serial correlation. We use the multivariate version of the Breusch-Godfrey LM test (Godfrey 1978) to test for serial correlation in the innovations \(\{\boldsymbol{\varepsilon}_{t}\}\), in which the null hypothesis is \(H_{0}:E(\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t+1}^{\top})=\boldsymbol{0}\). Based on the estimates \(\widehat{\boldsymbol{\varepsilon}}_{t}=\widehat{\boldsymbol{\Omega}}^{-1/2}(\tau_{t})\widehat{\boldsymbol{u}}_{t}\), the corresponding \(p\)-values range from 0.559 to 0.945, suggesting that the time-varying VECM model fits the data quite well for all considered bivariate pairs.
INSERT TABLE 7 ABOUT HERE
We then apply the singular value ratio test to detect the presence of cointegration and then check whether \(\beta^{*}=-1\). Table 7 shows that the estimates of the cointegration rank are 1 (i.e., \(\widehat{r}=1\)) for all considered bivariate pairs. Based on these singular value ratio tests, we find strong evidence of the existence of cointegration, which is consistent with Figure 1 and the results of the constant parameter VECM models. In addition, this result, i.e., \(\widehat{r}=1\), is robust to different choices of \(p\) ranging from 2-6. We also report the point estimates of \(\beta^{*}\) and their 95% confidence intervals in Table 7. According to Table 7, we find a long-run relationship between the long rate and the short rate, and we cannot reject the null hypothesis \(H_{0}:\beta^{*}=-1\) at the 5% significance level for most considered bivariate pairs. Interestingly, we find that the estimates of \(\beta^{*}\) from these two models are almost identical, while the time-varying VECM model yields quite narrow confidence bands (i.e., smaller standard errors).
Finally, we investigate whether the term structure (or bond returns) is predictable over the long-run by using the term spread as a predictor, i.e., whether \(\boldsymbol{\alpha}(\tau)=\boldsymbol{0}\), and we also examine the sign of \(\boldsymbol{\alpha}(\tau)\), which should be positive according to the expectations hypothesis. In order to confirm whether the error-correction component is significant, we test the null hypothesis \(\mathbb{H}_{0}:\ \boldsymbol{\alpha}(\tau)=\boldsymbol{0}\). The testing results are reported in the last column of Table 7, which indicates that we should reject the null at all conventional levels. Therefore, the term structure is predictable in the long-run, at least in some local periods, while the constant parameter VECM model yields little evidence of the predictability of
the term structure. We also investigate the time-varying pattern of the term structure predictability. Figure 2 plots the estimates of \(\alpha_{1}(\cdot)\) and \(\alpha_{2}(\cdot)\) and their 95% point-wise confidence intervals, as well as the U.S. core inflation. Here, the core inflation data are monthly observations from 1967:M1 to 2022:M12, collected from the Federal Reserve Bank of St. Louis economic database.
In summary, our key finding is that the estimated error-correction effects for the short and long rates are only significantly positive in the 1980s and in recent years, and vary significantly over time for the bivariate pairs considered. Moreover, the time-varying patterns of \(\alpha_{1}(\cdot)\) and \(\alpha_{2}(\cdot)\) are almost identical, and are similar to the pattern of time-variation in the U.S. core inflation rate. Our findings suggest that the expectations hypothesis holds periodically, especially in periods of unusually high inflation. Importantly, our results also provide evidence for the economic implications of the theoretical macro-finance term structure model proposed by Andreasen et al. (2021), who find that the monetary policy decisions of the Federal Reserve with respect to stabilizing inflation are a key driver of this switch in bond return predictability.
INSERT FIGURE 2 ABOUT HERE
## 5 Conclusions
In this paper, we propose a time-varying vector error-correction model that allows different time series behaviours (e.g., unit-root and locally stationary processes) to interact and co-exist. From a practical perspective, this framework can be used to estimate shifts in the predictability of non-stationary variables, and to test whether economic theories hold periodically. We first develop a time-varying Granger Representation Theorem, which facilitates establishing asymptotic properties, and then propose estimation and inferential theories for both short-run and long-run coefficients. We also propose an information criterion to estimate the lag length, a singular-value ratio test to determine the cointegration rank, and a hypothesis test to examine parameter stability. To validate the theoretical findings, we conduct extensive simulations. Finally, we demonstrate the empirical relevance by applying the framework to investigate the rational expectations hypothesis of the U.S. term structure. We conclude that the predictability of the term structure varies significantly over time and that the expectations hypothesis of the term structure holds periodically, especially in periods of unusually high inflation.
## 6 Acknowledgements
Gao and Peng acknowledge financial support from the Australian Research Council Discovery Grants Program under Grant Numbers: DP200102769 & DP210100476, respectively. Yan acknowledges the financial support of the Fundamental Research Funds for the Central Universities (Grant Numbers: 2022110877 & 2023110099).
|
2301.13160 | Mathematical modelling and numerical simulation of reverse-osmosis
desalination | The reverse osmosis membrane module is an integral element of a desalination
system as it determines the overall performance of the desalination plant. The
fraction of clean water that can be recovered via this process is often limited
by salt precipitation which plays a critical role in its sustainability. In
this work, we present a model to study the complex interplay between flow,
transport and precipitation processes in reverse osmosis membranes, which
together influence recovery and in turn process sustainability. A reactive
porous interface model describes the membrane with a dynamic evolving porosity
and permeability to capture the scaling and clogging of the membrane. An
open-source finite-volume numerical solver is implemented within the OpenFOAM
library and numerical tests are presented here showing the effect of the
various parameters of the model and the robustness of the model to describe a
wide range of operating conditions. | Nicodemo Di Pasquale, Mayo Akele, Federico Municchi, John King, Matteo Icardi | 2023-01-16T09:56:12Z | http://arxiv.org/abs/2301.13160v1 | # Mathematical modelling and numerical simulation of reverse-osmosis desalination
###### Abstract
The reverse osmosis membrane module is an integral element of a desalination system as it determines the overall performance of the desalination plant. The fraction of clean water that can be recovered via this process is often limited by salt precipitation which plays a critical role in its sustainability. In this work, we present a model to study the complex interplay between flow, transport and precipitation processes in reverse osmosis membranes, which together influence recovery and in turn process sustainability. A reactive porous interface model describes the membrane with a dynamic evolving porosity and permeability to capture the scaling and clogging of the membrane. An open-source finite-volume numerical solver is implemented within the OpenFOAM(r)library and numerical tests are presented here showing the effect of the various parameters of the model and the robustness of the model to describe a wide range of operating conditions.
## 1 Introduction
The demand for freshwater has steadily increased over the last four decades at a global level, mainly because of the increase in population and improving living standards, which are leading to an expansion of irrigated agriculture and of direct human consumption. In turn, the increase in consumption is straining the ability of freshwater sources to supply the growing demand for water worldwide, with almost two thirds of the total world population experiencing severe water scarcity during at least part of the year (Mekonnen and Hoekstra, 2016; Jones et al., 2019). The current freshwater sources are already overexploited, and even though better management is still needed to reduce the misuse of such resources (e.g., wastewater treatment or waste reduction) (Najid et al., 2022), these measures alone cannot meet the future demand for freshwater. The ongoing climate change is expected to further reduce the availability of freshwater because of the receding of glaciers, with a subsequent important reduction of the flow in major rivers such as the Mekong, Yellow, or Ganges (Shannon et al., 2008).
Almost 98% of the total liquid water on the Earth is not available for direct use or consumption, as it forms the water present in seas and oceans. However, this also means that if the exceedingly high saline content of seawater can be reduced or removed, we gain access to the largest potential source of freshwater, from which the required amount of freshwater could be delivered without straining the naturally occurring resources (Jones et al., 2019). Therefore, there is a strong push towards developing more efficient technologies for the desalination of seawater. Among the currently available technologies are thermal desalination and membrane processes (Fritzmann et al., 2007; Subramani and Jacangelo, 2015). In thermal desalination, seawater is brought to evaporation through multi-effect distillation or multi-stage flash distillation (Al-hotmani et al., 2021), and the resulting vapor is subsequently condensed. In membrane technologies, a semi-permeable membrane is employed to separate (or filter) the solution of salt and water.
Reverse Osmosis (RO) is a widely employed membrane technology for treating seawater and wastewater with salinity up to 70 g/l (Hickenbottom and Cath, 2014), which, due to its relative simplicity and widespread diffusion, has been among the main topics of research in membrane filtration (Wardeh and Morvan, 2008; Luo et al., 2019). One of the main challenges in RO is _concentration polarisation_ (Kim and Hoek, 2005), which is the presence, over the membrane, of a solute-rich boundary layer. Concentration polarisation can lead to solute precipitation and fouling, significantly reducing the local permeability of the membrane with adverse effects on the permeation flux. When the solution contains inorganic salts (such as sodium chloride NaCl or calcium sulfate CaSO\({}_{4}\)), the resulting accumulation of crystals on the membrane is also called _scaling_. There is extensive experimental evidence indicating that scaling reduces the membrane performance over time (Hu et al., 2014) and therefore, scaling phenomena should play a major role in the mathematical modelling of RO systems.
Computational Fluid Dynamics (CFD) represents a powerful tool to analyse concentration polarisation and scaling, with the earliest attempts dating back more than two decades (Hansen et al., 1998). The membrane is usually included as a boundary condition where the flux of the solute is assumed equal to zero (complete retention). Previous studies have considered the dependence on the solute concentration of properties such as the osmotic pressure (Hansen et al., 1998), viscosity, density and diffusion coefficient (Geraldes et al., 2001; Wiley and Fletcher, 2002) to include the effects of concentration polarisation on the membrane (Wiley and Fletcher, 2002; Johnston et al., 2022). In particular, CFD simulations for membranes have been employed to analyse different geometrical configurations, such as spacer-filled channels. These include solid elements in the feed flow to increase the shear stress on the surface of the membrane, which in turn increases the local mixing and mass transfer across the membrane (Shakaib et al., 2007; Fletcher and Wiley, 2004; Fimbres-Weihs and Wiley, 2010; Ghidossi et al., 2006; Lau et al., 2009; Santos et al., 2007; Koutsou et al., 2009; Ranade and Kumar, 2006).
However, the effects of scaling have not been directly included in CFD simulations. In this work, we propose a mathematical model and a CFD solver for the analysis of the performance of a RO membrane, in which we include the possibility for the solute in the feed flow to react (precipitate) on the membrane, thereby affecting the membrane permeability. The paper is organised as follows. We describe the mathematical framework for the analysis of the membrane, highlighting how the chemical reactions can be accounted for in the model. We then discuss the implementation of our model in the widely used open-source finite volume library OpenFOAM(r) and show results for some typical situations, demonstrating the applicability of the whole framework. We then draw some conclusions and outline possible extensions of the model.
## 2 Model
In this work, we approximate a rectangular 3D membrane module as a 2D channel illustrated in fig. 1. This is a common practice employed in other recent CFD studies (Johnston et al., 2022). While the configuration we considered can be easily extended to more complex geometries, the focus here is to develop a complete mathematical model for the polarization and scaling of the membrane by giving a proof of concept for a general-purpose numerical solver. Our main goal is to show the mechanisms governing the evolution of the solute at the membrane interface, and a 2D channel geometry allows us to focus on this task.
Let us therefore assume a rectangular domain \(\Omega\equiv(0,L)\times(0,H)\) with boundary \(\Gamma=\partial\Omega\) subdivided in three different regions:
\[\Gamma_{m} =(0,L)\times\{0\}\quad\text{(membrane)}\] \[\Gamma_{in} =\{0\}\times(0,H)\quad\text{(inlet)}\] \[\Gamma_{out} =\{L\}\times(0,H)\quad\text{(outlet)} \tag{1}\]
where \(\Gamma_{m}\) represents the membrane boundary, \(\Gamma_{in}\) is the inlet boundary and \(\Gamma_{out}\) is the outlet boundary. With \(\Gamma_{w}\) we represent the remaining part of the boundary constituted by solid boundaries so that \(\Gamma_{w}=\Gamma\setminus(\Gamma_{in}\cup\Gamma_{out}\cup\Gamma_{m})\), as shown in fig. 1. In this study, the permeate flow is not explicitly modelled, therefore the membrane represents a boundary condition for the problem.
The analysis of the filtration process requires the simultaneous solution of the flow field coupled with the transport of dissolved chemical species, which can involve one or more chemical reactions. Such chemical reactions can lead to solute precipitation, modifying the permeability and porosity of the membrane, thus causing a variation in the osmotic pressure and the final yield of the permeate. We summarised the different mechanism and their interdependence in fig. 2.
Our mathematical model is composed of three different and intertwined parts that need to be simultaneously considered to obtain the overall behaviour of the membrane which we are now going to describe in detail.
### Flow modelling
The mixture is described as an incompressible Newtonian fluid governed by the Navier-Stokes equations:
\[\nabla\cdot\mathbf{u} =0 \text{in}\ \ \Omega,t>0 \tag{2a}\] \[\rho\left[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot \nabla\mathbf{u})\right] =-\nabla p+\mu\nabla^{2}\mathbf{u} \text{in}\ \ \Omega,t>0 \tag{2b}\]
Figure 1: 2-dimensional domain considered in this work
where \(\rho\) is the density of the fluid, \(\mathbf{u}\) is the velocity vector with components \((u,v)^{T}\), \(p\) is the fluid pressure, \(\mu\) is the dynamic viscosity, and \(\nabla\) is the gradient operator. We impose the following boundary and initial conditions on the system shown in eq. (2) for the velocity and pressure:
\[\mathbf{u} =0 \text{in}\ \ \Omega,t=0 \tag{3a}\] \[\mathbf{u} =0,\ \ p=P_{in} \text{on}\ \ \Gamma_{in}\] (3b) \[\nabla u\cdot\mathbf{n} =0,\ \ p=P_{out} \text{on}\ \ \Gamma_{out}\] (3c) \[\mathbf{u} =0 \text{on}\ \ \Gamma_{w} \tag{3d}\]
Finally, the membrane is modelled as a dynamic Dirichlet condition: the velocity \(v\) orthogonal to the membrane is obtained from the Darcy law as:
\[u=0,\ \ v=-\frac{k(\Delta p-\Delta\pi)}{\ell\mu}\qquad\text{on}\ \ \Gamma_{m},t>0 \tag{4}\]
where we used the definition of membrane rejection of the species \(i\),
\[r_{i}^{\phi}=1-\frac{\phi_{i}^{p}}{\phi_{i}},\,\,\,i=1,\ldots,N\,. \tag{8}\]
The Van't Hoff equation assumes a linear relation between concentration and osmotic pressure and is more accurate for low concentrations of solutes. Different formulations have been proposed to take into account the nonlinear behaviour of solutions, which consider the activity of the solvent (Khraisheh et al., 2019), calculated using the Pitzer equation for electrolyte solutions (Pitzer, 1973). As a proof of concept, we will consider here the simplified model, since more complex behaviour can be straightforwardly added to the model and the CFD code we present in the next section. The membrane rejection of the species \(i\) expresses the amount of solute rejected by the membrane (and therefore not present in the permeate) as a fraction of the initial quantity. In this work, we assume \(r_{i}^{\phi}=1\) for every ion species, which corresponds to complete rejection of the ions at the membrane. In this case, the concentration on the permeate side and the permeate osmotic pressure, \(\pi_{p}\), are both equal to 0 and therefore:
\[\Delta\pi=RT\varphi\sum_{i}^{N}\phi_{i}\,. \tag{9}\]
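As a minimal illustration of eqs. (4) and (9), the osmotic pressure difference and the resulting permeate velocity can be evaluated as in the sketch below; the numerical values are purely illustrative and are not taken from the paper.

```python
R_GAS = 8.314  # universal gas constant, J / (mol K)

def osmotic_pressure(concentrations, T, phi_vh=1.0):
    """Van't Hoff osmotic pressure difference, eq. (9), assuming complete rejection."""
    return R_GAS * T * phi_vh * sum(concentrations)

def membrane_velocity(k, dp, dpi, ell, mu):
    """Permeate velocity from the Darcy-type membrane condition, eq. (4)."""
    return -k * (dp - dpi) / (ell * mu)

# purely illustrative values: 600 mol/m^3 of each NaCl ion, 55 bar transmembrane pressure
dpi = osmotic_pressure([600.0, 600.0], T=298.0)                       # ~3e6 Pa
v = membrane_velocity(k=3e-22, dp=55e5, dpi=dpi, ell=1e-7, mu=1e-3)   # m/s, negative (towards the membrane)
```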
In the literature, the two most popular models proposed to describe the solute and solvent transport through the membrane are the solution-diffusion model (Merten, 1963; Lonsdale et al., 1965; Wijmans and Baker, 1995) and the Spiegler-Kedem model (Spiegler and Kedem). The former expresses the flow through the membrane \(\dot{J}_{v}\) as:
\[\dot{J}_{v}=A(\Delta p-\Delta\pi) \tag{10}\]
noting that in our notation \(v=\frac{\dot{J}_{v}}{A(\Gamma_{m})}\), where \(A(\Gamma_{m})=S_{m}\) is the membrane surface area; while for the latter we have
\[\dot{J}_{v}=\frac{1}{R_{m}\mu}(\Delta p-\sigma\Delta\pi) \tag{11}\]
with the same identification for \(\dot{J}_{v}\) (i.e., \(v=\frac{\dot{J}_{v}}{A(\Gamma_{m})}\)), where \(R_{m}\) is the membrane resistance and \(\sigma\) is the reflection coefficient, which measures the impermeability of the membrane to the solutes. \(\sigma=1\) indicates a membrane completely impermeable to solutes and is the value considered in this work.
Notice that in eq. (4) we use the Darcy law to rewrite the so-called water permeability of the membrane \(A\), which appears in the equation, in terms of the permeability and thickness of the membrane and the viscosity of water as \(A=\frac{S_{m}k}{\ell\mu}\). Using the same reasoning for eq. (11) we obtain \(R_{m}=\frac{\ell}{S_{m}k}\), that is to say, the membrane resistance is inversely proportional to the permeability. The identification of the Darcy-related terms with the water permeability \(A\) or the membrane resistance has two main advantages. Firstly, we can connect our analysis with the membranes available commercially, which are described in terms of water permeability or membrane resistance. This allows us to choose ranges of parameters appropriate for the description of real membranes. Secondly, using the Darcy-derived version of the equation for the flow through the membrane allows us to include more detailed mechanisms in the model, which can take into account more complex phenomena, such as chemical reactions and the deposition of solids on the membrane, as we show in more detail in the next sections. One of the ways considered in the literature to include fouling and polarisation of the membrane is to define such contributions as additional resistance terms to be added to \(R_{m}\) in eq. (11) (see e.g., (Silva et al., 2011; Lee and Clark, 1998; Yeh, 2002)). These additional terms must be derived from experiments or empirical correlations. In contrast, our formulation leverages the well-established theory of porous media to include such effects directly in the determination of the permeability \(k\).
The performance of the membrane can be evaluated by using the recovery \(r\), defined as the ratio between the permeate flow rate, \(\dot{Q}_{p}\), and the feed flow rate, \(\dot{Q}_{f}\):
\[r=\frac{\dot{Q}_{p}}{\dot{Q}_{f}}=\frac{vA_{p}}{U_{in}A_{in}}=\frac{vL}{U_{in}H} \tag{12}\]
where we used the definition of the flow rate through a surface, \(\dot{Q}=AU\), to calculate the flux through the inlet (subscript \(in\)) and the membrane (subscript \(p\)), and then we replaced the areas in eq. (12) with their geometrical expressions for the domain we are considering: \(A_{in}=H\times Z\), \(A_{p}=L\times Z\).
### Solute Transport
In membrane processes, flow and solute transport are tightly coupled through the boundary condition on the membrane given by eq. (4). The transport equation for the concentration \(\phi_{i}\) of the \(i-\)th ion species in the bulk liquid is given by:
\[\frac{\partial\phi_{i}}{\partial t} +\nabla\cdot(\mathbf{u}_{i}\phi_{i})=\nabla\cdot(D\nabla\phi_{i} )+\xi_{B}^{i}\] in \[\Omega,t>0 \tag{13a}\] \[\phi_{i} =0\] in \[\Omega,t=0\] (13b) \[\phi_{i} =\phi_{in}\] on \[\Gamma_{in}\] (13c) \[\nabla\phi_{i} =0\] on \[\Gamma_{out}\cup\Gamma_{w}\] (13d) \[\dot{J}_{i} =v\phi_{i}-D_{i}\frac{\partial\phi_{i}}{\partial y}=\xi_{M}^{i}+ \xi_{P}^{i}\] on \[\Gamma_{m} \tag{13e}\]
where \(\mathbf{u}_{i}\) is the velocity of the \(i\)-th chemical species, \(D_{i}\) is its diffusion coefficient, \(\xi\) is a rate term that represents different mechanisms of depletion of the solute, identified by the subscripts \(B\) for the chemical reaction in the bulk, \(M\), for chemical reaction at the membrane, and \(P\) for the fraction of the solute that crosses the membrane. The amount of the species \(i\) at the membrane, can either precipitate on the membrane following a chemical reaction or go through the membrane in the permeate. The sum of these two mechanisms must equal the flux of the \(i\)-th chemical species at the membrane \(\dot{J}_{i}\). We can therefore assume that each of these mechanisms corresponds to a fraction of \(\dot{J}\), and hence that \(\dot{J}=\kappa_{M}\xi_{M}+\kappa_{P}\xi_{P}\). The coefficients \(\kappa\) have the property that:
\[\kappa_{M}+\kappa_{P}=1 \tag{14}\]
If only the superficial reaction is present, then \(\kappa_{M}=1\) and \(\kappa_{P}=0\), while if there is no superficial reaction and the solute crosses the membrane, then \(\kappa_{P}=1\) and \(\kappa_{M}=0\).
In this work we make the following assumptions on the behaviour of the system: 1) the precipitation reaction is irreversible and only occurs on the membrane surface (i.e., \(\xi_{B}=0\)); 2) the transport processes are similar for all the salt ions and the diffusion coefficients are independent of concentration; 3) the salt ions have the same velocity as the fluid (i.e., \(\mathbf{u}_{i}=\mathbf{u}\) for every \(i\)); 4) the porous medium is homogeneous; 5) there are only surface reactions at the membrane (i.e., \(\kappa_{M}=1\)).
### Chemical kinetics
Following the principles of mass action, the dynamic behaviour of chemical systems with \(n\) components involved in \(m\) reactions, can be described by a set of first order differential equations with time as the independent variable:
\[\frac{\mathrm{d}\phi_{1}}{\mathrm{d}t} =f_{1}(\phi_{1},\phi_{2},\ldots,\phi_{n},t)\] \[\frac{\mathrm{d}\phi_{2}}{\mathrm{d}t} =f_{2}(\phi_{1},\phi_{2},\ldots,\phi_{n},t)\] \[\vdots\] \[\frac{\mathrm{d}\phi_{n}}{\mathrm{d}t} =f_{n}(\phi_{1},\phi_{2},\ldots,\phi_{n},t)\]
where \(\phi_{i}(t)\), \(i=1,\ldots,n\) denotes the volume molar concentration of chemical species \(X_{i}\) at time \(t\). The dynamics of the reaction network can be conveniently written in matrix form using the formalism developed in (Chellaboina et al., 2009):
\[\frac{\mathrm{d}\phi_{i}}{\mathrm{d}t}=\xi_{R}^{i}=(A-B)^{T}K\phi^{A}(t),\ \ \phi_{i}(0)=\phi_{0,i},\ \ t\geq 0 \tag{16}\]
where \(K=\mathrm{diag}(\mathcal{K}_{1},\ldots,\mathcal{K}_{m})\) is the diagonal matrix whose elements are the reaction rate constants \(\mathcal{K}_{j}\), \(j=1,\ldots,m\), and \(\phi_{0}\) is the initial concentration. \(A\) and \(B\) are the \(m\times n\) matrices whose entries are the stoichiometric coefficients of the reactants and products, respectively, and \(\phi^{A}(t)\) is obtained by replacing each element \(a_{lp}\) of \(A\) with \(\phi_{p}^{a_{lp}}\), where \(a_{lp}\) is the element of \(A\) in the \(l\)-th row and \(p\)-th column.
\[\xi_{R}^{i}=\left.\left[(A-B)^{T}K\phi^{A}(t)\right]\right|_{i}\ell \tag{17}\]
where the notation \(\left[\cdot\right]\right|_{i}\) stands for the \(i\)-th component of its (\(n\times 1\) vector) argument.
For a generic bimolecular reaction \(X_{1}+X_{2}\to X_{3}\) with kinetic constant \(\mathcal{K}\), we therefore have
\[\frac{\mathrm{d}\phi_{1}}{\mathrm{d}t} =-\mathcal{K}\phi_{1}\phi_{2}\] \[\frac{\mathrm{d}\phi_{2}}{\mathrm{d}t} =-\mathcal{K}\phi_{1}\phi_{2}\] \[\frac{\mathrm{d}\phi_{3}}{\mathrm{d}t} =\mathcal{K}\phi_{1}\phi_{2}\]
and
\[\xi_{R}^{1} =-\mathcal{K}\phi_{1}\phi_{2}\ell\] \[\xi_{R}^{2} =-\mathcal{K}\phi_{1}\phi_{2}\ell\] \[\xi_{R}^{3} =\mathcal{K}\phi_{1}\phi_{2}\ell\]
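The matrix form of the mass-action kinetics can be assembled as in the following sketch; here the rate is written in the products-minus-reactants form \((B-A)^{T}K\phi^{A}\), chosen so that the signs reproduce the worked example above, and the function name and numerical values are ours.

```python
import numpy as np

def mass_action_rate(phi, A, B, kvec):
    """Mass-action rate in matrix form: d(phi)/dt = (B - A)^T K phi^A."""
    # phi^A : one entry per reaction, prod_p phi_p ** a_{lp}
    phiA = np.prod(phi[None, :] ** A, axis=1)
    return (B - A).T @ (kvec * phiA)

# example reaction X1 + X2 -> X3 with rate constant kappa = 0.5
A = np.array([[1, 1, 0]])      # reactant stoichiometry (1 reaction x 3 species)
B = np.array([[0, 0, 1]])      # product stoichiometry
rate = mass_action_rate(np.array([1.0, 2.0, 0.0]), A, B, np.array([0.5]))
# rate == [-1.0, -1.0, +1.0], i.e. (-kappa*phi1*phi2, -kappa*phi1*phi2, +kappa*phi1*phi2)
```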
An example of a reaction of this type important in membrane operation is the formation of calcium carbonate, through the reaction (Warsinger et al., 2015):
\[\mathrm{Ca_{2}}^{+}+2\,\mathrm{HCO_{3}}^{-}\longrightarrow\mathrm{CaCO_{3}}+ \mathrm{CO_{2}}+\mathrm{H_{2}O} \tag{20}\]
In fact, the scaling caused by the precipitation of calcium carbonate limits the operating conditions of desalination systems for brackish water, groundwater, and seawater (Waly et al., 2009).
One effect to be considered when a chemical reaction is involved is that the reaction between salt ions and the subsequent precipitation of minerals often alters the membrane properties, such as porosity and permeability. As crystals grow, it is expected that the permeability \(k\) (see eq. (4)) and the porosity \(\epsilon\) will decrease, reducing the liquid flow through the membrane (Steefel et al., 2005). To account for the modification of the porosity and permeability as a result of mineral precipitation, we employ the Kozeny-Carman model (Hommel et al., 2018). This model allows us to quantify the porosity-permeability relations and estimate the resulting changes.
According to the Kozeny-Carman model, the change in permeability is calculated by relating the current permeability, \(k\), based on the current porosity \(\epsilon\), to the initial permeability \(k_{0}\) corresponding to the initial porosity \(\epsilon_{0}\) (Hommel et al., 2018). These relations consequently take the form
\[\frac{k}{k_{0}}=\frac{f(\epsilon)}{f(\epsilon_{0})}. \tag{21}\]
Thus we can describe the permeability evolution using the following power law (Hommel et al., 2018)
\[\frac{k}{k_{0}}=\frac{(1-\epsilon_{0})^{2}}{(1-\epsilon)^{2}}\left(\frac{\epsilon} {\epsilon_{0}}\right)^{3}, \tag{22}\]
where \(k\) is the current permeability, \(\epsilon\) is the current porosity, \(k_{0}\) is the initial permeability and \(\epsilon_{0}\) is the initial porosity. The rate at which porosity reduction occurs is given by (Huo et al., 2019; Noiriel et al., 2004):
\[\epsilon=\epsilon_{o}-\frac{V_{s}}{\ell}\int\limits_{t_{0}}^{t}\sum\limits_{j} \xi_{R}^{j}dt, \tag{23}\]
where \(\epsilon_{o}\) denotes the initial porosity at \(t_{0}\), \(V_{s}\) is the molar volume of the solid precipitate in m\({}^{3}\)/mol, \(\xi_{R}^{j}\) is the rate of precipitation in mol\(\cdot\)m\({}^{-2}\cdot\)s\({}^{-1}\), and the index \(j\) runs over all the possible reactions in the system. For the bimolecular reaction we are considering here, eq. (23) simplifies to:
\[\epsilon=\epsilon_{o}-\frac{V_{s}}{\ell}\int\limits_{t_{0}}^{t}\left(\mathcal{ K}\phi_{1}\phi_{2}\right)\mathrm{d}t. \tag{24}\]
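The following minimal Python sketch (the numerical values are illustrative and chosen by us, not the simulation inputs) integrates eq. (24) with an explicit Euler step, assuming for simplicity that the reactant concentrations at the membrane stay constant, and feeds the resulting porosity into the Kozeny-Carman law (22):

```python
eps0, k0 = 0.7, 1.0e-16     # initial porosity [-] and permeability [m^2]
Vs, ell = 27.0e-6, 1.0e-4   # molar volume of the precipitate [m^3/mol], membrane thickness [m]
Kcin = 1.0e-5               # kinetic constant (illustrative value)
phi1 = phi2 = 1.0           # reactant concentrations at the membrane, assumed constant

def kozeny_carman(eps):
    """Permeability from the current porosity, eq. (22)."""
    return k0 * (1.0 - eps0) ** 2 / (1.0 - eps) ** 2 * (eps / eps0) ** 3

dt, t_end = 60.0, 28 * 3600.0   # time step and final time [s]
eps = eps0
for _ in range(int(t_end / dt)):
    # eq. (24): porosity decreases as the solid product precipitates
    eps -= Vs / ell * Kcin * phi1 * phi2 * dt
    eps = max(eps, 1.0e-3)      # crude guard against complete clogging
print(eps, kozeny_carman(eps))
```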
We should again observe the inter-dependencies and feedback mechanisms between fluid flow, transport and reaction. Namely, in eq. (24) we see that the precipitation reaction leads to a change in porosity, which in turn affects the permeability in eq. (22). The change in permeability impacts the flow via the fluid velocity in eq. (4). This, in turn, alters the solute concentration distribution via eq. (13a), which ultimately impacts the rate of precipitation again via eq. (13e). Moreover, from eq. (24) we can observe that the variation of the porosity is proportional to the kinetic reaction constant. This last fact, in turn, simplifies the predictions for the clogging of the membrane based on the reactions in the system. We can expect that if there are two chemical reactions in our system, the first one 10 times slower than the second, then the clogging caused by the products of the first (slower) reaction will take 10 times as long as that caused by the products of the second reaction to reach the same amount of clogging. One of the strengths of our model is that it allows these kinds of qualitative analyses even without actually solving the equations.
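To make this scaling explicit: under the simplifying assumption (ours, made only for the purpose of this estimate) that the reactant concentrations at the membrane remain roughly constant, eq. (24) gives

\[\epsilon_{o}-\epsilon(t)\approx\frac{V_{s}}{\ell}\,\mathcal{K}\,\phi_{1}\phi_{2}\,t,\qquad\text{so}\qquad t_{\rm clog}\approx\frac{\ell\,\Delta\epsilon}{V_{s}\,\mathcal{K}\,\phi_{1}\phi_{2}},\]

i.e. the time needed to reach a prescribed porosity reduction \(\Delta\epsilon\) scales as \(1/\mathcal{K}\), which is precisely the scaling used in the qualitative argument above.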
## 3 Numerical discretisation
The equations presented in section 2 are solved using the open-source finite volume OpenFOAM 8.0 library and the code is available open-source (Icardi, 2022). In order to solve the equation of motion alongside the reaction at the membrane we developed a new solver for OpenFOAM called binaryReactionFoam which is based on two widely used solvers, pimpleFoam and scalarTransportFoam. The former is a transient solver for incompressible flows based on the PIMPLE algorithm while the latter is a concentration transport solver using a user-specified velocity field. The solver also includes the possibility of modelling solid precipitation in the fluid and a multiphase flow model for the solid particles. The most important element in the computational framework, however is represented by the new boundary conditions implemented to model the membrane. The membraneVelocity boundary conditions impose the fluid velocity based on the fluid pressure, the permeate pressure, and the membrane properties. These are updated in time by linking this boundary condition to the one for the scalar concentration, named binaryReaction, which solves for the solid precipitation at the boundary and therefore updates the membrane permeability. The equations and boundary conditions are coupled iteratively through Picard (fixed point) iteration (through the PIMPLE iterations) until convergence, making the whole model fully implicit.
We simulate a two-dimensional rectangular channel with height \(h=0.003\) m and length \(L=0.02\) m, discretised on a mesh composed of 600\(\times\)200 cells. The following discretisation schemes (we direct the reader to the OpenFOAM user guide (Ope, 2019) for a detailed description of each scheme) are used to discretise the equations:
* advective fluxes (divSchemes Gauss vanLeer) are computed at the faces and the variables interpolated with a Total Variation Diminishing scheme;
* gradient terms (gradSchemes Gauss linear) are approximated with central differencing;
* surface normal gradients for diffusive fluxes (snGradSchemes orthogonal) are approximated with central differencing (the grid is in fact orthogonal and does not need any correction to ensure second order accuracy);
* time derivatives (ddtSchemes backward) are approximated with the second-order implicit backward differencing scheme.
We specified a fully developed velocity profile at the inlet:
\[u(0)=6u_{av}\frac{y}{h}\left(1-\frac{y}{h}\right) \tag{25}\]
where \(u_{av}\) is the average velocity along the channel. By specifying the velocity profile at the inlet we only need to specify the pressure at the outlet (the value of which is given in table 1). The pressure of the permeate through the membrane is assumed constant along the length of the membrane and set equal to zero. For longer membranes, this assumption is no longer valid and the permeate flow needs to be modelled explicitly (with 1D or 2D models). This will be the subject of future extensions of our framework.
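As a small consistency check (not part of the solver), the profile (25) reproduces the prescribed average velocity, and, assuming the Reynolds number is based on the channel height and the values of table 1, it matches the value of 300 quoted in the next section:

```python
import numpy as np

h, u_av = 0.003, 0.1        # channel height [m] and average velocity [m/s] (table 1)
rho, mu = 1000.0, 1.0e-3    # density [kg/m^3] and viscosity [Pa s] (table 1)

y = np.linspace(0.0, h, 2001)
u = 6.0 * u_av * (y / h) * (1.0 - y / h)   # eq. (25)

print(u.mean())             # ~0.1: the mean of the profile equals u_av
print(u.max())              # 0.15: the peak is 1.5*u_av at mid-channel
print(rho * u_av * h / mu)  # 300.0: Reynolds number quoted in section 4
```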
The initial value of the permeability we consider in the calculations is \(k=10^{-16}\) m\({}^{2}\). Using the expression for the water permeability obtained from eq. (4) and eq. (10): \(A=\frac{S_{m}k}{\ell\mu}\), and the viscosity of water at room temperature \(\mu=10^{-3}\) Pa s, and the surface of the membrane, \(S_{m}=2\cdot 10^{-6}\) m\({}^{2}\) for the channel configuration and \(\ell=10^{-7}\)m, we obtain a value of \(A=2\cdot 10^{-12}\) m (Pa s)\({}^{-1}\). This value is in line with the values reported for commercial membranes, which are in the range \(10^{-14}\) to \(10^{-10}\) (Pa s)\({}^{-1}\)(Ruiz-Garcia and de la Nuez Pestana, 2019; Lee et al., 1981; Drazevic et al., 2014).
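For completeness, the quoted water permeability coefficient follows directly from the numbers given in this paragraph (note that the membrane thickness used in this estimate is the \(\ell=10^{-7}\) m value quoted here):

```python
S_m = 2.0e-6   # membrane surface [m^2]
k   = 1.0e-16  # membrane permeability [m^2]
ell = 1.0e-7   # membrane thickness used in this estimate [m]
mu  = 1.0e-3   # water viscosity [Pa s]

A = S_m * k / (ell * mu)
print(A)       # 2e-12, the value of the permeability coefficient quoted above
```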
Our model is able to include the scaling of the membrane given by the chemical reaction, which can modify the membrane permeability through the precipitation of a solid phase obtained as a product of the reaction. In this work we considered a range of reaction kinetics, going from very slow to fast reactions, i.e. with values of the kinetic constant spanning several orders of magnitude, from \(10^{-10}\) to \(10^{-1}\) m\({}^{3}\)/mol. We considered such a large range of kinetics since our main goal is not to focus on a specific system (and reaction) but to give a general description that can be applied to different specific situations.
## 4 Results
In this work, we employ a fixed flow profile at the inlet, which when considering the properties in table 1, gives a Reynolds number equal to 300. Therefore, the system operates in a fully developed laminar flow regime. The first property that can be derived from this model is the polarisation of the membrane, which represents the accumulation of the solute at the interface of the membrane on the feed side. This is an undesired effect since it increases the osmotic pressure reducing the extraction of the permeate per unit of energy consumed in the process. We report the variation of the concentration profile in the domain in fig. 3 for the lowest and highest chemical kinetic rate considered. We can observe in fig. 3 that the concentration at the membrane is different from the one in the bulk region in both cases. However, while the case with \(\mathcal{K}=10^{-15}\) m\({}^{3}\)/mol shows a higher concentration with respect to the bulk (see fig. 3a), the case with the highest value of the kinetic rate (\(\mathcal{K}=10^{-1}\) m\({}^{3}\)/mol, see fig. 3b) shows a concentration smaller than the one in the bulk.
The latter observations show that the possible behaviour of the solution near the membrane strongly depends on the reaction kinetics. In the first case (the one represented in fig. 3a corresponding to the lowest kinetic rate considered, \(\mathcal{K}=10^{-15}\) m\({}^{3}\)/mol), we can observe the "standard" effect of the _polarisation_ of the membrane. During the filtration process, there is an accumulation of the solute molecules on the feed side of the membrane, which results in a higher concentration of the solution at the membrane itself. On
\begin{table}
\begin{tabular}{c c c} \hline \hline Symbol & definition/value & units \\ \hline \(k\) & \(10^{-16}\) & m\({}^{2}\) \\ \(\epsilon\) & 0.7 & - \\ \(\phi_{a,0}\) & 35 & g/m\({}^{3}\) \\ \(\phi_{b,0}\) & \(\phi_{a,0}\) & g/m\({}^{3}\) \\ \(u_{in,av}\) & 0.1 & m/s \\ \(D\) & 0.003 & m \\ \(\ell\) & 0.0001 & m \\ \(V_{s}\) & \(27\cdot 10^{-6}\) & m\({}^{3}\)/mol \\ \(u_{av}\) & 0.1 & m/s \\ \({\cal K}\) & \(\{10^{-10},\;10^{-5},\;10^{-2},\;10^{-1}\}\) & m\({}^{3}\)/mol \\ \(\rho\) & 1000 & kg/m\({}^{3}\) \\ \(p_{out}\) & 1000 & kPa \\ \(p_{perm}\) & 0 & kPa \\ \(\mu\) & \(10^{-3}\) & Pa s \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the numerical inputs for the physical quantities used in the simulations. Note that the units of the kinetic constant \({\cal K}\) depend on the fact that we considered a binary reaction, whereas for the permeability \(k\) and porosity \(\epsilon\) we are considering the initial value (i.e., the value at \(t=0\) h).
the opposite side, when the reaction rate is no longer negligible (in contrast to the case of \(\mathcal{K}=10^{-15}\) m\({}^{3}\)/mol), the solutes are consumed by the reaction and do not accumulate at the membrane interface, leading to the concentration profile observed in fig. 3b. In the latter case, the solute is consumed almost instantly at the membrane interface. This results in a transport (convection and diffusion) limited profile of the concentration near the interface.
The two opposite effects just described for the profile of the concentration at the membrane interface give an interesting effect on the evolution of the porosity and permeability profiles across the membrane. As the reaction proceeds, a new solid phase is formed which precipitates on the membrane modifying its structure and therefore its fluid dynamical behaviour. In particular, the solid phase generated during the reaction clogs the pores of the membrane, resulting in a variation of the porosity of the membrane with time. On top of this, since the concentration along the membrane (in the x-direction) decreases, we can expect a decrease in the overall reaction rate (which is proportional to the concentration) and therefore a difference in the permeability and porosity over the membrane. When we instead observe polarisation (i.e., in the case of the lowest reaction rate) the concentration near the membrane increases with the distance along the membrane direction.
Therefore, we can expect that the reaction velocity (which depends on \(\mathcal{K}\) and the concentration), at a fixed time, will increase along the membrane interface for the case with polarisation (low kinetic rate) and decrease along the membrane for high values of the kinetic reaction rate. We will show these effects quantitatively in the next section.
The description of the properties of the membrane (i.e., porosity, permeability, velocity through the membrane) can be given in terms of global quantities, that is to say quantities averaged over all the membrane length, which therefore becomes a function of time only. We will start our analysis by giving an account of these global properties.
In fig. 4 we report the variation of the average of the porosity across the membrane as a function of time. For the lowest kinetic reaction rate considered there is no appreciable variation of the porosity after more than one day of operation. By increasing the kinetic reaction rate we can start to observe some deviations. In particular, for the highest value of the reaction rate the porosity decays to 60% of its original value after only one day of operation. This kind of result can be useful in determining the operation time that we can expect from a membrane given a certain composition of the feed.
The variation of the permeability with the reaction depends on the variation of the porosity, and in fact we can expect a similar behaviour. We report the results for \(k\) in fig. 5, where we can see that there is an order of magnitude difference between the initial value of the permeability at time \(t=0\) and the value after 28 h of operation.
The average velocity through the membrane obtained with the conditions specified is 1.8 \(\mu\)m/s, which decreases with the decrease of the permeability of the membrane down to 0.13 \(\mu\)m/s for the lowest values of \(k\) and \(\epsilon\) shown in figs. 4 and 5. In order to maintain the flow across the membrane in the given conditions of cross-flow in the channel and for the permeability and porosity given, we have to apply a pressure of approximately 1800 kPa, which is needed to overcome an osmotic pressure of 1000 kPa, which reduces to 978 kPa in the
Figure 3: Contour plot of the concentration within the channel given in units of the initial concentration \(\tilde{\phi}_{a}=\phi_{a}\phi_{in,0}\). On the left: results reported for the lowest kinetic constant. On the right, results are reported for the highest kinetic constant. Note that the starting point of the legend is not zero and is different between the pictures to make the results more clear.
Figure 4: Plot of the porosity versus time for all the systems considered; \(\mathcal{K}\) is in m\({}^{3}\)/mol.
Figure 5: Plot of the permeability as a function of time for all the systems considered; \(\mathcal{K}\) is in m\({}^{3}\)/mol.
case of the highest reaction rate. The difference in the osmotic pressure for the case \(\mathcal{K}=10^{-1}\) depends on the fact that for this case the concentration at the membrane is lower than the bulk (and lower than the case with polarisation) because of the very fast reaction (see fig. 3b).
Despite the fact that the osmotic pressure is smaller for the fastest reaction case, this case remains the worst in terms of permeate extraction, because the fast scaling of the membrane quickly reduces the porosity until the flux nearly stops, i.e., we reach a value of \(v=0.13\)\(\mu\)m/s when we consider a fast reaction rate, against a value one order of magnitude higher for the case of the lowest reaction rate, where we do not observe the scaling in the simulated time.
### Local profiles
Local profiles at the membrane are analysed for the same range of parameters. Since the smaller values of the kinetic constant (\(\mathcal{K}=10^{-15},10^{-10}\) m\({}^{3}\)/mol) give the same behaviour in the time scale considered, we show only results for \(\mathcal{K}\geq 10^{-5}\) m\({}^{3}\)/mol. We summarise our findings in fig. 6, where the component \(v\) of the fluid velocity (i.e., the velocity through the membrane) and the porosity are reported.
According to our analysis in the preceding sections, the flux along the membrane depends on different contributions. The first is the frictional pressure drop along the channel, which can cause considerable differences in the transmembrane pressure. Secondly, we have the polarisation effects, which increase the osmotic pressure (as it is proportional to the concentration difference across the two sides of the membrane), and finally, the scaling, which changes the permeability of the membrane itself. While the first contribution can be mitigated by a better design of the membrane modules (Krawczyk and Jonsson, 2014), the contributions of the last two effects are difficult to quantify _a priori_, as can be seen from fig. 6a. While the frictional pressure drop along the channel acts in all the systems in the same way (we are considering the same geometry and the same initial conditions for the flow), that is not true for the polarisation and scaling effects. In particular, the systems with lower reaction rates suffer from polarisation at the membrane, as shown in the previous section, which increases the osmotic pressure. The system with higher reaction kinetics, instead, does not suffer from the polarisation of the membrane (and in fact, the osmotic pressure is lower than the case at lower reaction rates, see the previous section). However, the scaling of the membrane, combined with the pressure drop in the channel, is now dominating, resulting in an overall smaller flux (see blue curves against red and black curves in fig. 6a).
The variation of the porosity profiles along the membrane reflects the qualitative analysis we discussed in the previous section. For the larger kinetic rates, the transport-limited boundary layer on the membrane reduces the reaction rate along the membrane. This, in turn, gives a porosity that increases along the membrane, with a value of \(\epsilon\) at the outlet of the membrane, after 28 hours, 3% larger than the value at the inlet (see the dot-dashed blue curve in fig. 6b). For the lowest reaction rates, instead, we obtain the opposite behaviour: the polarisation increases the overall reaction velocity along the membrane, which results in a reduction of the porosity along the x-direction, even though the low velocity of the reactions results in a very small variation (see the red continuous curve in fig. 6b). Near the outlet of the membrane, the porosity increases, as a result of the reduction of the polarisation at this point of the domain.
## 5 Conclusions
In this work, we presented a comprehensive computational model to describe the solute dynamics and evolution near a membrane for desalination processes. In particular, we included a model to treat the scaling of the membrane as a solid precipitate formed following a (general) chemical reaction. We connected the accumulation of solids at the membrane with porosity and permeability as described by the Darcy theory of porous media. Following this approach, we were able to give a fully explicit model to derive the dynamical evolution of the filtration process by specifying a few initial parameters (e.g., the properties of the solution and the kinetics of the reaction).
The membrane is described as a dynamic boundary condition for the fluid mechanics and solute transport equations, which are coupled together through the osmotic pressure term, and therefore the flow through the membrane. We implemented our model in OpenFOAM®, a widely used software package for CFD calculations, and performed simulations for a selected range of operating conditions. Results show how this model can be used to predict the decay in the flux through the membrane due to the accumulation of the precipitated solid originating from the chemical reaction.
The formulation presented here has two main advantages which make it flexible and powerful in treating polarisation. First, the proposed formulation can address all the interconnections between the different mechanisms (fluid dynamics, solute evolution, chemical reaction, scaling, and fouling) which affect the membrane performance. The second advantage is that the model can be easily extended to include more complex geometries, or models for the osmotic pressure (such as the Pitzer model (Pitzer, 1973; Khaisheh et al., 2019)), fluid flow conditions in the system, as well as more complex reaction paths.
|
2305.01093 | Stable free boundary surfaces with constant extrinsic curvature in
$3$-dimensional space forms | In this paper we use the notion of stability for free boundary surfaces with
constant higher order mean curvature to obtain rigidity results for
$H_2$-surfaces with free boundary of a geodesic ball of a simply connected
$3$-dimensional space form or a slab of $\mathbb{R}^3$. | Leonardo Damasceno, Maria Fernanda Elbert | 2023-05-01T21:28:28Z | http://arxiv.org/abs/2305.01093v1 | # Stable free boundary surfaces with constant extrinsic curvature in 3-dimensional space forms
###### Abstract
In this paper we use the notion of stability for free boundary surfaces with constant higher order mean curvature to obtain rigidity results for \(H_{2}\)-surfaces with free boundary of a geodesic ball of a simply connected 3-dimensional space form or a slab of \(\mathbb{R}^{3}\).
Footnote †: The authors were partially supported by CAPES.
## 1 Introduction
The extrinsic curvature \(H_{2}\) of a surface is the product of its principal curvatures. As a consequence of the Gauss equation, when immersed into a 3-manifold with constant curvature equal to \(c\in\mathbb{R}\), the intrinsic curvature \(K\) and the extrinsic curvature are related via \(H_{2}=K-c\). In particular, when the ambient space is the 3-dimensional Euclidean space \(\mathbb{R}^{3}\) both notions coincide.
The notion of stability of surfaces with constant mean curvature has been studied by mathematicians throughout the last four decades. It is known that minimal hypersurfaces can be seen as critical points of the volume functional, whereas the hypersurfaces with constant mean curvature (CMC) can also be described on a variational setting. They are critical points of the area functional with respect to variations which preserve volume. Stability, then, means that they are a minimum of area for such variations.
Given a region \(\Omega\) of a Riemannian manifold \(M\), a hypersurface \(\Sigma\subseteq\Omega\) whose boundary \(\partial\Sigma\) is contained in \(\partial\Omega\) is said to have free boundary if its boundary intersects \(\partial\Omega\) orthogonally. In a
more general situation, when the contact angle is constant along the intersection, such a submanifold is said to be capillary. Minimal or CMC capillary hypersurfaces supported on \(\partial\Omega\) can also be seen as critical points of the volume functional but, in this case, one considers only variations which keep the boundary on the hypersurface which supports it. A number of results have been proved in the case where the ambient space has dimension equal to \(3\) [13, 14, 15, 17].
When considering higher order mean curvatures, the notion of stability does not come from a variational setting. Despite that, for closed hypersurfaces (fixed bounded variations) the second author and Barbara Nelli proposed a notion of stability for such hypersurfaces by using the linearization of the corresponding PDE (see [7]) and, in [5], the first and the second authors proposed a notion of stability for the free boundary and capillary cases.
The main goal of this paper is to prove the following results:
**Theorem 1.1**.: Let \(\Sigma^{2}\) be a closed disk and \(\varphi:\Sigma\to\Omega\subseteq\mathbb{M}^{3}(c)\) be a \(H_{2}\)-surface with free boundary in \(\partial\Omega\) and \(H_{2}>0\). Then \(\varphi(\Sigma)\) is totally umbilical.
**Theorem 1.2**.: Let \(\varphi:\Sigma^{2}\to B_{R}\subseteq\mathbb{M}^{3}(c)\) be a stable \(H_{2}\)-surface with free boundary in a geodesic ball \(B_{R}\) with radius \(R>0\). If \(c>0\) assume the surface is contained into a hemisphere and if \(c<0\) assume that \(\dfrac{A(\Sigma)}{\ell(\partial\Sigma)}>-\dfrac{\mathrm{cn}_{c}(R)}{c\, \mathrm{sn}_{c}(R)}\), where
\[\mathrm{sn}_{c}(\rho)=\begin{cases}\dfrac{\sin\left(\rho\sqrt{c}\right)}{ \sqrt{c}},&\text{if }c>0\\ \rho,&\text{if }c=0\\ \dfrac{\sinh\left(\rho\sqrt{-c}\right)}{\sqrt{-c}},&\text{if }c<0\end{cases} \tag{1}\]
and \(\mathrm{cn}_{c}(\rho)=\mathrm{sn}_{c}^{\prime}(\rho)\). Then \(\varphi(\Sigma)\) is totally umbilical.
**Theorem 1.3**.: Let \(\varphi:\Sigma\to\mathbb{R}^{3}\) be a compact stable \(H_{2}\)-surface with free boundary in a slab bounded by two parallel planes \(\Pi_{1}\) and \(\Pi_{2}\), such that its genus is equal to \(0\) and \(H_{2}>0\). Then \(\varphi(\Sigma)\) is a surface of revolution around an axis orthogonal to \(\Pi_{1}\).
The first is a generalization of [12, Theorem 1] when \(c=0\) and of [17, Theorem 4.1] when \(c\neq 0\). The second theorem is an extension of [17, Theorem 5.1], whereas the third is an extension of [1, Theorem 3.1]. The paper is organized as follows: Section 2 is dedicated to fixing the notation and the concepts used throughout the rest of the paper, and Sections 3, 4 and 5 are dedicated to the proofs of Theorems 1.1, 1.2 and 1.3, respectively.
## 2 Preliminaries
Let \(\left(M^{3},g\right)\) be an oriented Riemannian manifold and \(\varphi:\Sigma^{2}\to M\) be an oriented surface with unit normal vector field \(\eta\) in the normal bundle \(\Gamma(N\Sigma)\). Its second fundamental form _II_, scalar
second fundamental form \(\mathit{II}_{\eta}\) and Weingarten operator \(A=\left(\mathit{II}_{\eta}\right)^{\flat}\) are defined, respectively, as
\[\mathit{II}(X,Y) = \left(\overline{\nabla}_{X}Y\right)^{\bot}=\left\langle\overline{ \nabla}_{X}Y,\eta\right\rangle\eta=\mathit{II}_{\eta}\left(X,Y\right)\eta\] \[\left\langle A(X),Y\right\rangle = \mathit{II}_{\eta}\left(X,Y\right)=\left\langle-\overline{\nabla }_{X}\eta,Y\right\rangle,\]
where \(X,Y\in\Gamma(T\Sigma)\) and \(\overline{\nabla}\) is the Levi-Civita connection of \(M\). Let \(\kappa_{1}(p)\geq\kappa_{2}(p)\) be the principal curvatures of the surface \(\varphi\) at \(p\). The 1-mean curvature \(H_{1}\) is the arithmetic mean of \(\kappa_{1}\) and \(\kappa_{2}\) and the 2-mean curvature is given by their product \(H_{2}=\kappa_{1}\kappa_{2}\). The surface is said to have constant mean curvature of order \(r\in\{1,2\}\) if \(H_{r}\) is constant over \(\Sigma\); when this happens, \(\Sigma\) is called an \(H_{r}\)-surface.
The first Newton transformation \(P_{1}\) is defined by \(P_{1}=2H_{1}I-A\). Since \(A_{p}\) is self-adjoint for all \(p\in\Sigma\), the Newton transformations are self-adjoint as well and their eigenvectors are the same as those of \(A\). If \(\{e_{1},e_{2}\}\) denotes the eigenvectors of \(A\) then the eigenvalue associated to \(e_{i}\) is equal to \(S_{1}(A_{i})=2H_{1}-\kappa_{i}\). Moreover, we have the following identities:
\[\mathrm{tr}\,P_{1} = 2H_{1} \tag{2}\] \[\mathrm{tr}\,P_{1}A = 2H_{2}\] (3) \[\mathrm{tr}\,P_{1}A^{2} = 2H_{1}H_{2}. \tag{4}\]
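For instance, computing the traces in the common eigenbasis \(\{e_{1},e_{2}\}\) of \(A\) and \(P_{1}\), the identities (3) and (4) follow directly:

\[\mathrm{tr}\,P_{1}A=\sum_{i=1}^{2}\kappa_{i}\left(2H_{1}-\kappa_{i}\right)=\left(\kappa_{1}+\kappa_{2}\right)^{2}-\left(\kappa_{1}^{2}+\kappa_{2}^{2}\right)=2\kappa_{1}\kappa_{2}=2H_{2},\]
\[\mathrm{tr}\,P_{1}A^{2}=\sum_{i=1}^{2}\kappa_{i}^{2}\left(2H_{1}-\kappa_{i}\right)=\left(\kappa_{1}+\kappa_{2}\right)\left(\kappa_{1}^{2}+\kappa_{2}^{2}\right)-\left(\kappa_{1}^{3}+\kappa_{2}^{3}\right)=\kappa_{1}\kappa_{2}\left(\kappa_{1}+\kappa_{2}\right)=2H_{1}H_{2}.\]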
In a general Riemannian manifold \((M,g)\) with Levi-Civita connection \(\overline{\nabla}\), if \(\phi\) is a pointwise symmetric \((2,0)\)-tensor in \(M\), the Cheng-Yau operator of \(f\in C^{\infty}(M)\) is defined by
\[\square f=\mathrm{tr}\left(\phi\left(\mathrm{Hess}\,f\right)^{\flat}\right)= \mathrm{div}\left(\phi\overline{\nabla}f\right)-\left\langle\mathrm{div}\, \phi,\overline{\nabla}f\right\rangle,\]
where \(\mathrm{Hess}\,f\) is the Hessian of \(f\) in \(M\), \(\left(\mathrm{Hess}\,f\right)^{\flat}\) is the metric \((1,1)\)-tensor field on \(M\) equivalent to \(\mathrm{Hess}\,f\) and \(\mathrm{div}\,\phi:=\mathrm{tr}\left(\overline{\nabla}\phi\right)\). The operator \(\phi\) is said to be divergence free if \(\mathrm{div}\,\phi=0\).
When considering an oriented surface \(\varphi:\Sigma^{2}\to M^{3}\) with shape operator \(A\in\Gamma\left(\mathrm{End}\,(T\Sigma)\right)\), the \(L_{1}\)-operator of \(\Sigma\) is defined as the Cheng-Yau operator for the Newton transformation \(P_{1}\), i.e.,
\[L_{1}f=\mathrm{tr}\left(P_{1}\left(\mathrm{Hess}\,f\right)^{\flat}\right), \quad f\in C^{\infty}(\Sigma).\]
Here, we say that \(-L_{1}\) is a second-order elliptic differential operator when \(P_{1}\) is positive definite at each point of \(\Sigma\). If \(H_{2}>0\), then, after a choice of orientation on \(\varphi\), \(P_{1}\) is positive definite [6, Lemma 3.10]. In [16, Theorem 4.1], H. Rosenberg proved that \(P_{1}\) is divergence free when \(M\) has constant sectional curvature (see also [6, Corollary 3.7] for the case where \(r=1\) and \(M\) is Einstein).
Let \(\Omega\subseteq M\) be a closed domain with smooth boundary \(\partial\Omega\) and assume that \(\varphi:\Sigma\to M\) is an oriented surface such that \(\varphi(\Sigma)\subseteq\Omega\) and \(\varphi(\partial\Sigma)\subseteq\partial\Omega\). Let \(\nu\in\Gamma\left(T\Sigma|_{\partial\Sigma}\right)\) be the unit outward conormal vector field on \(\partial\Sigma\) and let \(\overline{\nu}\in\Gamma\left(T\partial\Omega|_{\partial\Sigma}\right)\) and \(\overline{\eta}\in\Gamma\left(TM|_{\partial\Omega}\right)\) be the unit normal vector fields associated to the immersions \(\varphi|_{\partial\Sigma}:\partial\Sigma\to\partial\Omega\) and \(\iota_{\partial\Omega}:\partial\Omega\hookrightarrow M\), respectively, such that \(\{\nu,\eta\}\)
has the same orientation as \(\{\overline{\nu},\overline{\eta}\}\) on each point of \(\varphi(\partial\Sigma)\). If \(\theta\) denotes the angle between \(\nu\) and \(\overline{\nu}\), then
\[\begin{cases}\nu=\cos\theta\,\overline{\nu}+\sin\theta\,\overline{\eta}\\ \eta=-\sin\theta\,\overline{\nu}+\cos\theta\,\overline{\eta}\end{cases},\quad \text{or conversely,}\quad\begin{cases}\overline{\nu}=\cos\theta\,\nu-\sin \theta\,\eta\\ \overline{\eta}=\sin\theta\,\nu+\cos\theta\,\eta\end{cases}. \tag{5}\]
A \(H_{r}\)-surface \(\varphi:\Sigma\to\Omega\subseteq M\) is said to be capillary if the contact angle \(\theta\) between \(\partial\Sigma\) and \(\partial\Omega\) is constant. When \(\theta=\dfrac{\pi}{2}\), \(\varphi\) is called a free boundary surface. When \(\varphi:\Sigma\to\Omega\subseteq M\) is a surface with free boundary in \(\Omega\), (5) implies that \(\nu=\overline{\eta}\) and \(\eta=-\overline{\nu}\).
The following result will be used throughout this article.
**Lemma 2.1** ([1, Lemma 2.2]).: Suppose \(\iota_{\partial\Omega}\) is a totally umbilical immersion into \(M\) and that \(\varphi\) is a capillary immersion into \(M\). Then the unit outwards normal vector field \(\nu\in\Gamma\left(T\Sigma|_{\partial\Sigma}\right)\) is a principal direction of \(\varphi\).
A variation of \(\varphi\) is a smooth function \(\Phi:\Sigma^{2}\times(-\varepsilon,\varepsilon)\to M^{3}\) such that, for each \(t\in(-\varepsilon,\varepsilon)\), \(\varphi_{t}=\Phi|_{\Sigma\times\{t\}}\) is an isometric immersion and \(\varphi_{0}=\varphi\). The pair \((\Sigma,\varphi_{t}^{*}g)\) will be denoted by \(\Sigma_{t}\). The variational field of \(\Phi\) in \(\varphi_{t}\) is defined by
\[\xi_{t}(p)=\left.\Phi_{*}\dfrac{\partial}{\partial t}\right|_{(p,t)}\in\Gamma \left(TM|_{\varphi_{t}(\Sigma)}\right).\]
If \(\eta_{t}\in\Gamma(N\Sigma)\) is the unit normal vector field of \(\varphi_{t}\), the support function of \(\Phi\) at \(t\) is defined by
\[f_{t}=\langle\xi_{t},\eta_{t}\rangle\in C^{\infty}(\Sigma).\]
Since \(\varphi_{t}:\Sigma\to M\) is an oriented surface, one can define its second fundamental form \(\textit{II}_{t}\), its scalar second fundamental form \(\left(\textit{II}_{t}\right)_{\eta_{t}}\) and its Weingarten operator \(A_{t}\). Also, we set \(\overline{R}_{\eta_{t}}\left(X\right):=\overline{R}\left(\eta_{t},X\right)\eta_{t}\), where \(\overline{R}\) is the Riemann curvature tensor of \(M\) defined by
\[\overline{R}(X,Y)Z=\overline{\nabla}_{Y}\overline{\nabla}_{X}Z-\overline{ \nabla}_{X}\overline{\nabla}_{Y}Z+\overline{\nabla}_{[X,Y]}Z,\quad X,Y,Z\in \Gamma(TM).\]
If \(H_{2}(t)\) denotes the 2-mean curvature associated to immersion \(\varphi_{t}\), its variation is given by
\[H_{2}^{\prime}(t)=\left(L_{1}\right)_{t}f_{t}+2H_{1}(t)H_{2}(t)f_{t}+\text{tr }_{\Sigma_{t}}\left(\left(P_{1}\overline{R}_{\eta}\right)_{t}\right)f_{t}+\xi _{t}^{\top}\left(H_{2}(t)\right), \tag{6}\]
where \(\left(L_{1}\right)_{t}\) is the \(L_{1}\)-operator of immersion \(\varphi_{t}\) and \(\left(P_{1}\overline{R}_{\eta}\right)_{t}:=\left(P_{1}\right)_{t}\circ\overline {R}_{\eta_{t}}\). A proof of (6) can be found in [6, Proposition 3.2].
The enclosed volume between \(\Sigma\) and \(\Sigma_{t}\) is defined as \(\mathcal{V}(t)=\int_{\Sigma\times[0,t]}\Phi^{*}d\mu_{M}\), with \(d\mu_{M}\) being the volume form of \((M,g)\). A variation \(\Phi\) is volume-preserving if \(\mathcal{V}(t)=\mathcal{V}(0)\) for all \(t\in(-\varepsilon,\varepsilon)\). It is known that
\[\mathcal{V}^{\prime}(0)=\int_{\Sigma}f\,d\mu_{\Sigma},\]
where \(f=\left\langle\xi,\eta\right\rangle\in C^{\infty}(\Sigma)\) and \(d\mu_{\Sigma}\) is the volume form of \((\Sigma,\varphi^{*}g)\). Thus, a variation \(\Phi\) is volume-preserving if and only if \(\int_{\Sigma}f\,d\mu_{\Sigma}=0\).
A \(H_{2}\)-surface \(\varphi:\Sigma\to M\) is positive definite if \(P_{1}\) is positive definite on each point \(p\in\Sigma\). A variation \(\Phi\) of a surface \(\varphi:\Sigma\to\Omega\subseteq M\) is called admissible if \(\varphi_{t}(\operatorname{int}\Sigma)\subseteq\operatorname{int}\Omega\) and \(\varphi_{t}(\partial\Sigma)\subseteq\partial\Omega\) for any \(t\in(-\varepsilon,\varepsilon)\), where \(\varphi_{t}=\Phi|_{\Sigma\times\{t\}}\). If \(\Phi\) is an admissible variation of \(\varphi\), then \(\xi|_{\partial\Sigma}\in\Gamma\left(T\partial\Omega|_{\partial\Sigma}\right)\). If \(\Sigma\) is a capillary \(H_{2}\)-surface supported in \(\partial\Omega\) with contact angle \(\theta\in(0,\pi)\) and \(\Phi\) is a volume-preserving admissible variation of \(\varphi\), consider the functional, defined in [5],
\[\mathcal{F}_{1,\theta}[\Sigma_{t}]=-\int_{\Sigma}H_{2}(t)\left\langle\xi_{t}, \eta_{t}\right\rangle\,d\mu_{\Sigma_{t}}+\int_{\partial\Sigma}\left\langle\xi_ {t},(P_{1}\nu-|P_{1}\nu|\cos\theta\,\overline{\nu})_{t}\right\rangle\,d\mu_{ \partial\Sigma_{t}}, \tag{7}\]
where \(d\mu_{\Sigma_{t}}\) and \(d\mu_{\partial\Sigma_{t}}\) denote the volume forms of \(\Sigma_{t}\) and \(\partial\Sigma_{t}=\left(\partial\Sigma,\left(\varphi_{t}|_{\partial\Sigma} \right)^{*}g\right)\), respectively. If \(\partial\Omega\) is totally umbilical and \(\Phi\) is an admissible volume-preserving variation of \(\varphi\) then
\[\frac{\partial}{\partial t}\mathcal{F}_{1,\theta}\left[\Sigma_{t }\right]\biggr{|}_{t=0}=-\int_{\Sigma}f\left(L_{1}f+\operatorname{tr}\left(P_ {1}\left(A^{2}+\overline{R}_{\eta}\right)\right)f\right)\,d\mu_{\Sigma}+\\ +\int_{\partial\Sigma}|P_{1}\nu|\,\,f\left(\frac{\partial f}{ \partial\nu}+\left(\csc\theta\left(\mathit{I\!I}_{\partial\Omega}\right)_{ \overline{\eta}}\left(\overline{\nu},\overline{\nu}\right)-\cot\theta\left( \mathit{I\!I}_{\Sigma}\right)_{\eta}\left(\nu,\nu\right)\right)f\right)\,d\mu_ {\partial\Sigma}, \tag{8}\]
where \(f=\left\langle\xi,\eta\right\rangle\in C^{\infty}(\Sigma)\) is the support function of \(\Phi\) at \(t=0\) and \(\mathit{I\!I}_{\Sigma}\) and \(\mathit{I\!I}_{\partial\Omega}\) are the second fundamental forms of \(\varphi:\Sigma\to\Omega\) and \(\iota_{\partial\Omega}:\partial\Omega\hookrightarrow\Omega\), respectively. For a proof see [5, Appendix A]. A positive definite capillary \(H_{2}\)-surface \(\varphi:\Sigma\to\Omega\subseteq M\) supported in \(\partial\Omega\) with contact angle \(\theta\in(0,\pi)\) is \(r\)-stable if \(\left.\frac{\partial}{\partial t}\mathcal{F}_{1,\theta}\left[\Sigma_{t} \right]\right|_{t=0}\geq 0\) for any volume-preserving admissible variation \(\Phi\) of \(\varphi\). If the inequality holds for all admissible variations of \(\varphi\), \(\Sigma\) is said to be strongly \(r\)-stable. The expression (8) is associated to the eigenvalue problem below:
\[\begin{cases}T_{1}f=-L_{1}f-q_{r}f=\lambda f,&\text{ in }\Sigma\\ \frac{\partial f}{\partial\nu}+\alpha_{\theta}f=0,&\text{ on }\partial \Sigma\end{cases}\,, \tag{9}\]
where \(q_{r}=\operatorname{tr}\left(P_{1}\left(A^{2}+\overline{R}_{\eta}\right) \right)\in C^{\infty}(\Sigma)\) and \(\alpha_{\theta}=\csc\theta\left(\mathit{I\!I}_{\partial\Omega}\right)_{ \overline{\eta}}\left(\overline{\nu},\overline{\nu}\right)-\cot\theta\left( \mathit{I\!I}_{\Sigma}\right)_{\eta}\left(\nu,\nu\right)\in C^{\infty}(\partial \Sigma)\). For the properties involving its principal eigenvalue see [5, Proposition 3.4]. The notion of \(1\)-stability can also be considered when \(P_{1}\) is negative definite, see [5, Remark 3.5].
Let \(\mathbb{M}^{3}(c)\) be the simply connected space form of constant sectional curvature \(c\), i.e., \(\mathbb{M}^{3}(c)\) is equal to \(\mathbb{R}^{3}\) if \(c=0\), \(\mathbb{S}^{3}(c)\) if \(c>0\) and \(\mathbb{H}^{3}(c)\) if \(c<0\). In this paper we consider the following models for \(\mathbb{M}^{3}(c)\):
\[\mathbb{R}^{3} = \left\{x=(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}^{4}\ |\,\ x_{4}=0\right\}\] \[\mathbb{S}^{3}(c) = \left\{x=(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}^{4}\ |\,\ x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}= \frac{1}{c}\right\}\]
\[\mathbb{H}^{3}(c) = \left\{x=(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}_{1}^{4}\ |\ \,x_{1}^{2}+x_{2}^{2}+x_{3}^{2}-x_{4}^{2}=\frac{1}{c},x_{4}>0\right\}\]
endowed with the pullback of the Euclidean metric for \(c\geq 0\) or the Minkowski metric for \(c<0\). When \(M=\mathbb{M}^{3}(c)\) we have that \(\overline{R}_{\eta}(X)=cX\) for all \(X\in\Gamma(T\mathbb{M}^{3}(c))\) and \(\operatorname{div}P_{1}=0\) (see [16, Theorem 4.1]). Thus, (8) can be rewritten as
\[\frac{\partial}{\partial t}\mathcal{F}_{1,\theta}\left[\Sigma_{t}\right]\biggr{|} _{t=0}=\int_{\Sigma}\left\langle P_{1}\nabla f,\nabla f\right\rangle-2H_{1} \left(H_{2}+c\right)f^{2}\,d\mu_{\Sigma}+\int_{\partial\Sigma}\left|P_{1}\nu \right|\alpha_{\theta}f^{2}\,d\mu_{\partial\Sigma}. \tag{10}\]
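Indeed, when \(\overline{R}_{\eta}=c\,\mathrm{Id}\), the zeroth-order coefficient appearing in (8) reduces, by (2) and (4), to the one in (10):

\[\operatorname{tr}\left(P_{1}\left(A^{2}+\overline{R}_{\eta}\right)\right)=\operatorname{tr}P_{1}A^{2}+c\,\operatorname{tr}P_{1}=2H_{1}H_{2}+2cH_{1}=2H_{1}\left(H_{2}+c\right).\]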
The equation (10) can be viewed as a quadratic form associated to a bilinear symmetric form on the Hilbert space \(H^{1}(\Sigma)\), the closure of \(C^{\infty}(\Sigma)\) with respect to the Sobolev norm
\[\left\|\cdot\right\|_{H^{1}(\Sigma)}^{2}=\left\|\cdot\right\|_{L^{2}(\Sigma)} ^{2}+\left\|\nabla\cdot\right\|_{L^{2}(\Sigma)}^{2}.\]
The 1-index form of \(\varphi:\Sigma\to\Omega\subseteq\mathbb{M}^{3}(c)\) is
\[\mathcal{I}_{1,\theta}(f_{1},f_{2})=\int_{\Sigma}\left\langle P_{1}\nabla f_{ 1},\nabla f_{2}\right\rangle-2H_{1}\left(H_{2}+c\right)f_{1}f_{2}\,d\mu_{ \Sigma}+\int_{\partial\Sigma}\left|P_{1}\nu\right|\alpha_{\theta}f_{1}f_{2}\, d\mu_{\partial\Sigma}, \tag{11}\]
where \(f_{1},f_{2}\in H^{1}(\Sigma)\). \(\Sigma\) is strongly 1-stable if and only if \(\mathcal{I}_{1,\theta}(f,f)\geq 0\) for all \(f\in H^{1}(\Sigma)\) and 1-stable if \(\mathcal{I}_{1,\theta}(f,f)\geq 0\) for all \(f\in\mathcal{F}=\left\{f\in H^{1}(\Sigma)\,|\,\int_{\Sigma}f\,d\mu_{\Sigma}=0\right\}\). It can be proved that a totally umbilical capillary compact surface supported on a connected totally umbilical surface of \(\mathbb{M}^{3}(c)\) is 1-stable [5, Proposition 4.2].
As in the case \(r=0\), when considering \(\varphi\) a capillary \((r+1)\)-minimal surface, i.e. \(H_{2}=0\), we say that \(\varphi\) is 1-stable if \(\mathcal{I}_{1,\theta}(f,f)\geq 0\) for all \(f\in C^{\infty}_{0}(\Sigma)\). This means the hypothesis on the variation being volume-preserving is dropped.
If \(f\in\mathcal{F}\), the normal vector field \(\xi=f\eta\) on \(\Sigma\) is a Jacobi field if \(f\in\ker\mathcal{I}_{1,\theta}|_{\mathcal{F}\times\mathcal{F}}\), i.e., \(\mathcal{I}_{1,\theta}(f,g)=0\) for every \(g\in\mathcal{F}\). The next lemma, whose proof is in [5, Lemma 4.4], gives a characterization of Jacobi fields on \(\Sigma\).
**Lemma 2.2**.: Let \(\varphi:\Sigma\to\Omega\subseteq\mathbb{M}^{3}(c)\) be a positive definite \(H_{2}\)-surface with free boundary in \(\partial\Omega\) and \(f\in\mathcal{F}\). Then
1. \(\xi=f\eta\) is a Jacobi field on \(\Sigma\) if and only if \(f\in C^{\infty}(\Sigma)\) and \[\begin{cases}T_{1}f=\text{constant}&\text{in }\Sigma\\ \frac{\partial f}{\partial\nu}+\alpha_{\theta}f=0&\text{on }\partial\Sigma \end{cases}.\] (12)
2. If \(\varphi\) is \(r\)-stable and \(\mathcal{I}_{1,\theta}(f,f)=0\) then \(f\) is a Jacobi field on \(\Sigma\).
## 3 Proof of Theorem 1.1
Theorem 1.1 is inspired by its analogous version proved by J. Nitsche in [12] when \(c=0\) and by R. Souam in [17, Theorem 4.1] when \(c\neq 0\), both of them addressing constant mean curvature immersions of a disk into a ball in \(\mathbb{M}^{3}(c)\). Here, we consider immersions of the disk into a compact, convex smooth body \(\Omega\). In order to prove this result, one needs the following theorem proved by R. Bryant.
**Theorem A** [3, Theorem 3]: Let \(\varphi:\Sigma^{2}\to\mathbb{M}^{3}(c)\) be a smooth immersion that satisfies a Weingarten equation of the form \(H_{1}=f(H_{1}^{2}-H_{2})\), for some \(f\in C^{\infty}((-\varepsilon,\infty);\mathbb{R})\) and \(\varepsilon>0\). Then \(\varphi\) is totally umbilical or else, the umbilic locus consists entirely of isolated points of strictly negative index.
For a definition of index, see [10, p. 107]. As a direct consequence of the Poincaré-Hopf Theorem for manifolds with boundary [11, p. 35], we have a boundary version of Hopf's Theorem:
**Theorem B** (Boundary version of Hopf's Theorem) Let \(\Sigma^{2}\) be a compact manifold with boundary \(\partial\Sigma\) for which the umbilic locus \(\mathcal{U}\) is finite. Suppose that one of the principal directions is transversal to \(\partial\Sigma\). Then
\[\sum_{p\in\mathcal{U}}i(p)=\chi(\Sigma),\]
where \(i(p)\) is the index of \(p\in\mathcal{U}\).
Proof of Theorem 1.1.: Since \(\varphi:\Sigma^{2}\to\mathbb{M}^{3}(c)\) is a \(H_{2}\)-surface, its mean curvature satisfies a Weingarten equation \(H_{1}=f(H_{1}^{2}-H_{2})\), where \(f(y)=\sqrt{y+H_{2}}\) and \(y\in(-H_{2},+\infty)\). If \(\varphi\) is not totally umbilic then, by Theorem A, the umbilical points of \(\varphi\) form a finite set \(\mathcal{U}\subseteq\Sigma\) and each umbilical point has negative index. Since \(\nu\) is a principal direction along \(\partial\Sigma\), one can use Theorem B to obtain
\[0>\sum_{p\in\mathcal{U}}i(p)=\chi(\Sigma)=1,\]
which is a contradiction. Thus \(\Sigma\) is totally umbilical.
**Remark 3.1**.: The same argument holds if \(\Sigma\) is a CMC surface since Theorem A still holds in this case (choose \(f\) to be a constant function).
## 4 Proof of Theorem 1.2
In this section we will prove Theorem 1.2. The geodesic ball \(B_{R}\) is convex for all \(R\in(0,R_{c})\), where \(R_{c}=+\infty\) if \(c\leq 0\) and \(R_{c}=\dfrac{\pi}{2\sqrt{c}}\) if \(c>0\). Its boundary \(\partial B_{R}\) is a totally umbilical sphere whose mean curvature with respect to the inward unit normal is equal to \(\dfrac{\mathrm{cn}_{c}(R)}{\mathrm{sn}_{c}(R)}\).
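For later reference, the quotient \(\mathrm{cn}_{c}(R)/\mathrm{sn}_{c}(R)\) can be written explicitly from (1):

\[\frac{\mathrm{cn}_{c}(R)}{\mathrm{sn}_{c}(R)}=\begin{cases}\sqrt{c}\,\cot\left(R\sqrt{c}\right),&\text{if }c>0\\ \dfrac{1}{R},&\text{if }c=0\\ \sqrt{-c}\,\coth\left(R\sqrt{-c}\right),&\text{if }c<0\end{cases}\]

and it is positive for every \(R\in(0,R_{c})\).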
We first state some identities that will be used throughout the proof.
**Lemma 4.1**.: Let \(\varphi:\Sigma^{2}\to\mathbb{M}^{3}(c)\) be a surface. Then
\[L_{1}\varphi = 2H_{2}\eta-2cH_{1}\varphi \tag{13}\] \[L_{1}\eta = -\operatorname{tr}\left(P_{1}A^{2}\right)\eta+2cH_{2}\varphi- \nabla H_{2}, \tag{14}\]
Here \(L_{1}\varphi\) and \(L_{1}\eta\) are calculated coordinate-wise.
For a proof of (13) and (14) see [16, Remark 5.1].
The next Lemma also has key role for the proof of Theorem 1.2.
**Lemma 4.2**.: Suppose that \(\varphi:\Sigma\to\Omega\subseteq M\) is a surface such that \(H_{2}>0\) and let \(u\in C^{\infty}(\Sigma)\backslash\{0\}\) be a function such that \(T_{1}u=0\). Then its nodal set \(\Gamma=u^{-1}(\{0\})\) is a finite graph whose vertices are the critical points of \(u\). In a neighborhood of each critical point \(\Gamma\) is a star of at least two branches.
Proof.: Let \(p\in\Sigma\) and take \(\varphi:U_{p}\subseteq\mathbb{R}^{2}\to\Sigma\) a parametrization of \(\Sigma\) at \(p\) with local coordinates \((u,v)\). Since \(L_{1}\) is a second-order elliptic differential operator only with principal part, it follows from PDE theory in [8, Chapter 3] that there exists a coordinate change
\[\overline{u}=h_{1}(u,v),\quad\overline{v}=h_{2}(u,v)\]
of class \(C^{2}\) in a neighborhood of \(p_{0}=\varphi^{-1}(p)\) whose Jacobian does not vanish at \(p_{0}\) that transforms the pullback of \(L_{1}\) in the Laplacian operator. We may suppose (restricting \(U_{p}\) if necessary) that \((\overline{u},\overline{v})\) is a diffeomorphism in \(U_{p}\). Thus, in the new coordinates \((\overline{u},\overline{v})\), \(L_{1}\) is the Laplacian and [4, Theorem 2.5] implies that its nodal lines in \(U_{p}\) meet at the critical points. Since \(\Sigma\) is compact, we can cover \(\Sigma\) with finitely many such open neighborhoods \(U_{p}\), proving the claim.
Proof of Theorem 1.2.: The proof is an extension of that in [15, Theorem 11]. From [14, Lemma 1.1], \((\text{II}_{\partial B})_{\overline{\eta}}(\overline{\nu},\overline{\nu})=-\dfrac{\operatorname{cn}_{c}(R)}{\operatorname{sn}_{c}(R)}\) and the geodesic curvature of \(\partial\Sigma\) in \(\Sigma\) is equal to \(\dfrac{\operatorname{cn}_{c}(R)}{\operatorname{sn}_{c}(R)}\). Thus, the Gauss-Bonnet Theorem implies that
\[2\pi\chi(\Sigma)=\int_{\Sigma}K\,d\mu_{\Sigma}+\int_{\partial\Sigma}\kappa_{g} \,d\mu_{\partial\Sigma}=\int_{\Sigma}H_{2}+c\,d\mu_{\Sigma}+\int_{\partial \Sigma}\kappa_{g}\,d\mu_{\partial\Sigma}>cA(\Sigma)+\dfrac{\operatorname{cn}_ {c}(R)}{\operatorname{sn}_{c}(R)}\ell(\partial\Sigma).\]
In all three cases the inequality above implies that the genus of \(\Sigma\) is equal to zero.
Now consider the case \(c=0\) and suppose, without loss of generality, that \(B_{R}\) is the unit ball centered at the origin of \(\mathbb{R}^{3}\). Let \(p_{0}\in\Sigma\) be a point where the function \(p\in\Sigma\mapsto|\varphi(p)|\) attains its minimum and define the function
\[f(p)=\left\langle\varphi(p)\wedge\eta(p_{0}),\eta(p)\right\rangle,\quad p\in\Sigma, \tag{15}\]
where \(\wedge\) denotes the cross product in \(\mathbb{R}^{3}\). It is clear that \(f(p_{0})=0\) and for all \(\mathbf{v}\in T\Sigma\),
\[\left\langle\nabla f,\mathbf{v}\right\rangle=\mathbf{v}\left\langle\varphi \wedge\eta(p_{0}),\eta\right\rangle=\left\langle\mathbf{v}\wedge\eta(p_{0}), \eta\right\rangle+\left\langle\varphi\wedge\eta(p_{0}),\overline{\nabla}_{ \mathbf{v}}\eta\right\rangle=\left\langle\eta(p_{0})\wedge\eta-A\left(\varphi \wedge\eta(p_{0})\right)^{\top},\mathbf{v}\right\rangle.\]
Thus \(\nabla f=\eta(p_{0})\wedge\eta-A\left(\varphi\wedge\eta(p_{0})\right)^{\top}\), where \({}^{\top}\) denotes the projection onto \(T\Sigma\). Since \(\left|\varphi\right|\) attains its minimum at \(p_{0}\), we have \(\varphi(p_{0})\parallel\eta(p_{0})\), hence \(\nabla f(p_{0})=0\). Also we have
\[L_{1}f = L_{1}\left\langle\varphi\wedge\eta(p_{0}),\eta\right\rangle=L_{1 }\left\langle\eta\wedge\varphi,\eta(p_{0})\right\rangle\] \[= L_{1}\left\langle\left(\eta^{2}\varphi^{3}-\eta^{3}\varphi^{2}, \eta^{3}\varphi^{1}-\eta^{1}\varphi^{3},\eta^{1}\varphi^{2}-\eta^{2}\varphi^{1 }\right),\eta(p_{0})\right\rangle\] \[= \left\langle\left(L_{1}\left(\eta^{2}\varphi^{3}-\eta^{3}\varphi ^{2}\right),L_{1}\left(\eta^{3}\varphi^{1}-\eta^{1}\varphi^{3}\right),L_{1} \left(\eta^{1}\varphi^{2}-\eta^{2}\varphi^{1}\right)\right),\eta(p_{0}) \right\rangle,\]
where \(\varphi^{i}=\left\langle\varphi,\mathbf{e}_{i}\right\rangle\), \(\eta^{i}=\left\langle\eta,\mathbf{e}_{i}\right\rangle\), \(i\in\{1,2,3\}\), and the vectors \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\) form the canonical basis of \(\mathbb{R}^{3}\). Using (13) and (14), we have for \(i,j\in\{1,2,3\}\),
\[L_{1}\left(\eta^{i}\varphi^{j}\right) = \varphi^{j}L_{1}\eta^{i}+\eta^{i}L_{1}\varphi^{j}+2\left\langle P _{1}\nabla\eta^{i},\nabla\varphi^{j}\right\rangle\] \[= -2H_{1}H_{2}\eta^{i}\varphi^{j}+2H_{2}\eta^{i}\eta^{j}-2\left\langle P _{1}A\mathbf{e}_{i}^{\top},\mathbf{e}_{j}^{\top}\right\rangle\] \[L_{1}\left(\eta^{i}\varphi^{j}-\eta^{j}\varphi^{i}\right) = -2H_{1}H_{2}\left(\eta^{i}\varphi^{j}-\eta^{j}\varphi^{i}\right).\]
Since \(P_{1}A\) is self-adjoint, the last two terms in the expression for \(L_{1}\left(\eta^{i}\varphi^{j}\right)\) are symmetric in \(i\) and \(j\), so they cancel in the difference. Thus, \(L_{1}f+2H_{1}H_{2}f=0\) on \(\Sigma\). Moreover, since \(\varphi=\nu\) on \(\partial\Sigma\), Lemma 2.1 implies that, on \(\partial\Sigma\)
\[\frac{\partial f}{\partial\nu} = \left\langle\nabla f,\nu\right\rangle=\left\langle\eta(p_{0}) \wedge\eta-A(\varphi\wedge\eta(p_{0}))^{\top},\nu\right\rangle\] \[= \left\langle\nu\wedge\eta(p_{0}),\eta\right\rangle-\left\langle \nu\wedge\eta(p_{0}),A\nu\right\rangle\] \[= f-\left|A\nu\right|\left\langle\nu\wedge\eta(p_{0}),\nu\right\rangle =f.\]
Hence the function \(f\) satisfies
\[\begin{cases}L_{1}f+2H_{1}H_{2}f=0&\text{in }\Sigma\\ \frac{\partial f}{\partial\nu}-f=0&\text{on }\partial\Sigma\end{cases}. \tag{16}\]
We claim that \(f\equiv 0\) on \(\Sigma\). Otherwise, Lemma 4.2 implies that the lines of the nodal set \(f^{-1}(\{0\})\) meet at the critical points of \(f\). Using the Gauss-Bonnet theorem for each connected component \(\Sigma_{i}\) of \(\Sigma\backslash f^{-1}(\{0\})\), we have
\[\int_{\Sigma_{i}}K\,d\mu_{\Sigma}=2\pi\chi(\Sigma_{i})-\int_{\partial\Sigma_{ i}}\kappa_{g}\,d\mu_{\partial\Sigma}-\sum_{j}\theta_{ij}, \tag{17}\]
where \(\theta_{ij}\), \(j\in\{1,...,j_{i}\}\) denotes the external angles of \(\Sigma_{i}\). Summing up (17) for all \(i\), we obtain
\[\int_{\Sigma}K\,d\mu_{\Sigma} = \sum_{i}\int_{\Sigma_{i}}K\,d\mu_{\Sigma}=\sum_{i}\left(2\pi \chi(\Sigma_{i})-\int_{\partial\Sigma_{i}}\kappa_{g}\,d\mu_{\partial\Sigma}- \sum_{j}\theta_{ij}\right) \tag{18}\] \[= 2\pi\sum_{i}\chi(\Sigma_{i})-\int_{\partial\Sigma}\kappa_{g}\,d \mu_{\partial\Sigma}-\sum_{l}\theta_{l},\]
where the last term denotes the sum of all external angles over all the connected components \(\Sigma_{i}\). Since \(\partial\Sigma\) is smooth, it follows again from the Gauss-Bonnet theorem and (18) that
\[2\pi\left(2-2g-s\right) = 2\pi\chi(\Sigma)=\int_{\Sigma}K\,d\mu_{\Sigma}+\int_{\partial \Sigma}\kappa_{g}\,d\mu_{\partial\Sigma}\] \[= 2\pi\sum_{i}\chi(\Sigma_{i})-\sum_{l}\theta_{l},\]
where \(s\) is the number of components of \(\partial\Sigma\). Since \(f(p_{0})=0\) and \(\nabla f(p_{0})=0\), Lemma 4.2 implies that there are at least two nodal lines of \(f\) intersecting at \(p_{0}\) and forming a star at \(p_{0}\); so \(\sum_{l}\theta_{l}\geq 2\pi\). On the other hand, on each connected component \(\Gamma_{i}\) of \(\partial\Sigma\), \(i\in\{1,...,s\}\), choosing a positively oriented arclength parametrization \(\gamma\) we have \(\varphi\wedge\eta=-\gamma^{\prime}\). So
\[\int_{\Gamma_{i}}f\,d\mu_{\partial\Sigma}=\int_{\Gamma_{i}}\left\langle \varphi\wedge\eta(p_{0}),\eta\right\rangle\,d\mu_{\partial\Sigma}=-\int_{ \Gamma_{i}}\left\langle\varphi\wedge\eta,\eta(p_{0})\right\rangle\,d\mu_{ \partial\Sigma}=\int_{\Gamma_{i}}\left\langle\gamma^{\prime},\eta(p_{0}) \right\rangle\,d\mu_{\partial\Sigma}=0,\]
and it follows that \(f\) has at least two zeroes on each component \(\Gamma_{i}\). Each point of \(f^{-1}\left(\{0\}\right)\cap\Gamma_{i}\) contributes at least \(\pi\) to the sum of the \(\theta_{l}\) in the last equation. Putting things together, we have
\[\sum_{l}\theta_{l}\geq 2\pi\left(1+s\right) \tag{19}\]
and using (19) in (18) we obtain
\[\sum_{i}\chi(\Sigma_{i})=\frac{1}{2\pi}\left(2\pi\left(2-2g-s\right)+\sum_{l} \theta_{l}\right)\geq 2-2g-s+1+s=3-2g.\]
Since \(\Sigma\) has genus \(g=0\), it follows that \(\Sigma\backslash f^{-1}\left(\{0\}\right)\) has at least three connected components. If \(\Sigma_{1}\) and \(\Sigma_{2}\) are two connected components of the nodal domain of \(f\), define
\[\widetilde{f}=\begin{cases}f&\text{in }\Sigma_{1}\\ \alpha f&\text{in }\Sigma_{2}\\ 0&\text{in }\Sigma\backslash(\Sigma_{1}\cup\Sigma_{2})\end{cases},\]
where \(\alpha\in\mathbb{R}\) is such that \(\widetilde{f}\in\mathcal{F}\). Since \(\partial\Sigma_{i}\cap\partial\Sigma=\partial\Sigma\cap\Sigma_{i}\) and \(\widetilde{f}\equiv 0\) outside \(\Sigma_{i}\), we have
\[\int_{\Sigma_{1}}\left\langle P_{1}\nabla\widetilde{f},\nabla \widetilde{f}\right\rangle-2H_{1}H_{2}\widetilde{f}^{2}\,d\mu_{\Sigma} = \int_{\Sigma_{1}}\left\langle P_{1}\nabla\widetilde{f},\nabla f \right\rangle-2H_{1}H_{2}\widetilde{f}f\,d\mu_{\Sigma}\] \[= -\int_{\Sigma}\widetilde{f}\left(L_{1}f+2H_{1}H_{2}f\right)\,d \mu_{\Sigma}+\int_{\partial\Sigma\cap\Sigma_{1}}|P_{1}\nu|\widetilde{f}\frac{ \partial f}{\partial\nu}\,d\mu_{\partial\Sigma}\] \[= \int_{\partial\Sigma\cap\Sigma_{1}}|P_{1}\nu|\,\widetilde{f}^{2} \,d\mu_{\partial\Sigma}\]
and, similarly,
\[\int_{\Sigma_{2}}\left\langle P_{1}\nabla\widetilde{f},\nabla\widetilde{f} \right\rangle-2H_{1}H_{2}\widetilde{f}^{2}\,d\mu_{\Sigma}=\int_{\partial\Sigma \cap\Sigma_{2}}\left|P_{1}\nu\right|\widetilde{f}^{2}\,d\mu_{\partial\Sigma}.\]
Thus,
\[\mathcal{I}_{1}(\widetilde{f},\widetilde{f})=\sum_{i=1}^{2}\int_{\Sigma_{i}} \left\langle P_{1}\nabla\widetilde{f},\nabla\widetilde{f}\right\rangle-2H_{1 }H_{2}\widetilde{f}^{2}\,d\mu_{\Sigma}-\int_{\partial\Sigma\cap\Sigma_{i}} \left|P_{1}\nu\right|\widetilde{f}^{2}\,d\mu_{\partial\Sigma}=0.\]
Hence, the second item of Lemma 2.2 implies that \(\widetilde{f}\) is a Jacobi field on \(\Sigma\). But since \(\widetilde{f}\equiv 0\) outside of \(\Sigma_{1}\cup\Sigma_{2}\), Aronszajn's unique continuation principle [2] implies that \(\widetilde{f}\equiv 0\), which is a contradiction.
Finally, since \(f\equiv 0\), the Killing field \(p\in\mathbb{R}^{3}\mapsto p\wedge\eta(p_{0})\in\mathbb{R}^{3}\) is tangent to \(\Sigma\). Hence, \(\Sigma\) is a rotation surface around the axis \(\eta(p_{0})\) with fixed point \(p_{0}\) and thus, \(\Sigma\) must be homeomorphic to a disk. Using Theorem 1.1, we conclude that \(\Sigma\) is totally umbilical.
The non-Euclidean cases use similar arguments and, since the spherical and the hyperbolic cases are very similar, we will give a sketch of the proof only when \(c=-1\). Using the same notation used in [17, Theorem 5.1], define \(f:\Sigma\to\mathbb{R}\) by
\[f(p)=\left\langle\varphi(p)\wedge\eta(p_{0})\wedge\mathbf{e}_{4},\eta(p) \right\rangle.\]
The same arguments used in the Euclidean case gives \(\nabla f(p_{0})=0\) and
\[\begin{cases}L_{1}f+2(H_{1}H_{2}-1)f=0&\text{in }\Sigma\\ \frac{\partial f}{\partial\nu}-\frac{\operatorname{cn}_{c}(R)}{ \operatorname{sn}_{c}(R)}f=0&\text{on }\partial\Sigma\end{cases}.\]
It can also be shown that \(f\equiv 0\); to prove this claim, one considers a positively oriented arclength parametrization \(\gamma\) of a connected component \(\Gamma_{i}\) of \(\partial\Sigma\) satisfying \(\varphi\wedge\eta\wedge\nu=-\gamma^{\prime}\). The identity \(f\equiv 0\) implies that \(\Sigma\) is a rotation surface in \(\mathbb{R}^{4}\) around the plane generated by \(\mathbf{e}_{4}\) and \(\eta(p_{0})\) with fixed point \(p_{0}\), proving that \(\Sigma\) is a disk.
## 5 Proof of Theorem 1.3
In this section we will extend [1, Theorem 3.1] to \(1\)-stable \(H_{2}\)-surfaces with free boundary in a slab of \(\mathbb{R}^{3}\), \(H_{2}>0\) and genus \(0\).
Proof of Theorem 1.3.: The proof of this result is an adaptation of the arguments used in [1, Theorem 3.1]. Without loss of generality, one can suppose that \(\Pi_{1}=\{x_{3}=0\}\) and \(\Pi_{2}=\{x_{3}=1\}\). Let \(\Gamma\) be a connected component of \(\partial\Sigma\) such that \(\varphi(\Gamma)\) lies on \(\Pi_{1}\) and consider in this plane the circumscribed circle \(\mathscr{C}\) about \(\varphi(\Gamma)\). We will prove that \(\varphi(\Sigma)\) is a surface of revolution around the vertical axis passing through the center of \(\mathscr{C}\).
Assume, without loss of generality, that the center of \(\mathscr{C}\) is the origin of \(\mathbb{R}^{3}\), and consider the function \(f(p)=\left\langle\varphi(p)\wedge\mathbf{e}_{3},\eta(p)\right\rangle\), \(p\in\Sigma\), where \(\wedge\) is the cross product of \(\mathbb{R}^{3}\). A computation similar to the one used in Theorem 1.2 to obtain (16) shows that
\[\begin{cases}L_{1}f+2H_{1}H_{2}f=0&\text{in }\Sigma\\ \frac{\partial f}{\partial\nu}=0&\text{on }\partial\Sigma\end{cases}.\]
The proof is finished if one can show that \(f\equiv 0\).
Suppose, otherwise, that \(f\not\equiv 0\). Then Lemma 4.2 implies that its nodal set \(f^{-1}(\{0\})\) is a graph whose vertices are the critical points of \(f\). We must show that the nodal domain \(\Sigma\backslash f^{-1}(\{0\})\) has at least \(3\) connected components. If the function \(f\) does not change its sign in a neighborhood of a point \(p_{0}\in f^{-1}(\{0\})\cap\partial\Sigma\) then, as \(L_{1}f=-2H_{1}H_{2}f\), the strong maximum principle [9, Theorem 3.5] and the Hopf Lemma [9, Lemma 3.4] imply that \(\frac{\partial f}{\partial\nu}(p_{0})\neq 0\) unless \(f\equiv 0\) in a neighborhood of \(p_{0}\), and thus \(f\equiv 0\) by Aronszajn's unique continuation principle [2]. In both cases this leads to a contradiction, therefore the nodal domain has at least two connected components. The same arguments used in [1, Theorem 3.1] to prove that the nodal domain has a third connected component are valid here.
Denoting \(\Sigma_{1}\) and \(\Sigma_{2}\) two of these components, define the function
\[\widetilde{f}=\begin{cases}f&\text{in }\Sigma_{1}\\ \alpha f&\text{in }\Sigma_{2}\\ 0&\text{in }\Sigma\backslash(\Sigma_{1}\cup\Sigma_{2})\end{cases},\]
where \(\alpha\in\mathbb{R}\) is such that \(\widetilde{f}\in\mathcal{F}\). Since \(\partial\Sigma_{i}\cap\partial\Sigma=\partial\Sigma\cap\Sigma_{i}\) and \(\widetilde{f}\equiv 0\) outside \(\Sigma_{i}\), we obtain
\[\int_{\Sigma_{1}}\left\langle P_{1}\nabla\widetilde{f},\nabla \widetilde{f}\right\rangle-2H_{1}H_{2}\widetilde{f}^{2}\,d\mu_{\Sigma} = \int_{\Sigma_{1}}\left\langle P_{1}\nabla\widetilde{f},\nabla f \right\rangle-2H_{1}H_{2}\widetilde{f}f\,d\mu_{\Sigma}\] \[= -\int_{\Sigma_{1}}\widetilde{f}\left(L_{1}f+2H_{1}H_{2}f\right) \,d\mu_{\Sigma}+\int_{\partial\Sigma\cap\Sigma_{1}}\left|P_{1}\nu\right| \widetilde{f}\frac{\partial f}{\partial\nu}\,d\mu_{\Sigma}\] \[= 0\]
and, similarly, \(\int_{\Sigma_{2}}\left\langle P_{1}\nabla\widetilde{f},\nabla \widetilde{f}\right\rangle-2H_{1}H_{2}\widetilde{f}^{2}\,d\mu_{\Sigma}=0\). Thus,
\[\mathcal{I}_{1}(\widetilde{f},\widetilde{f})=\sum_{i=1}^{2}\int_{\Sigma_{i}} \left\langle P_{1}\nabla\widetilde{f},\nabla\widetilde{f}\right\rangle-2H_{1 }H_{2}\widetilde{f}^{2}\,d\mu_{\Sigma}=0.\]
Since \(\Sigma\) is \(1\)-stable, Lemma 2.2 implies that \(\widetilde{f}\) is a Jacobi field on \(\Sigma\). However, since \(\widetilde{f}\) vanishes on \(\Sigma\backslash(\Sigma_{1}\cup\Sigma_{2})\), it follows from Aronszajn's unique continuation principle that \(\widetilde{f}=0\), which is a contradiction. Therefore \(f\equiv 0\) and \(\varphi(\Sigma)\) is a surface of revolution around the \(x_{3}\)-axis. |
2303.09741 | Hamiltonicity of $1$-tough $(P_2\cup kP_1)$-free graphs | Given a graph $H$, a graph $G$ is $H$-free if $G$ does not contain $H$ as an
induced subgraph. For a positive real number $t$, a non-complete graph $G$ is
said to be $t$-tough if for every vertex cut $S$ of $G$, the ratio of $|S|$ to
the number of components of $G-S$ is at least $t$. A complete graph is said to
be $t$-tough for any $t>0$. Chv\'{a}tal's toughness conjecture, stating that
there exists a constant $t_0$ such that every $t_0$-tough graph with at least
three vertices is Hamiltonian, is still open in general. Chv\'{a}tal and
Erd\"{o}s \cite{CE} proved that, for any integer $k\ge 1$, every
$\max\{2,k\}$-connected $(k+1)P_1$-free graph on at least three vertices is
Hamiltonian. Along the Chv\'{a}tal-Erd\"{o}s theorem, Shi and Shan \cite{SS}
proved that, for any integer $k\ge 4$, every $4$-tough $2k$-connected $(P_2\cup
kP_1)$-free graph with at least three vertices is Hamiltonian, and furthermore,
they proposed a conjecture that for any integer $k\ge 1$, any $1$-tough
$2k$-connected $(P_2\cup kP_1)$-free graph is Hamiltonian. In this paper, we
confirm the conjecture, and furthermore, we show that if $k\ge 3$, then the
condition `$2k$-connected' may be weakened to be `$2(k-1)$-connected'. As an
immediate consequence, for any integer $k\ge 3$, every $(k-1)$-tough $(P_2\cup
kP_1)$-free graph is Hamiltonian. This improves the result of Hatfield and
Grimm \cite{HG}, stating that every $3$-tough $(P_2\cup 3P_1)$-free graph is
Hamiltonian. | Leyou Xu, Chengli Li, Bo Zhou | 2023-03-17T02:42:32Z | http://arxiv.org/abs/2303.09741v2 | # Hamiltonicity of \(1\)-tough \((P_{2}\cup kP_{1})\)-free graphs
###### Abstract
Given a graph \(H\), a graph \(G\) is \(H\)-free if \(G\) does not contain \(H\) as an induced subgraph. For a positive real number \(t\), a non-complete graph \(G\) is said to be \(t\)-tough if for every vertex cut \(S\) of \(G\), the ratio of \(|S|\) to the number of components of \(G-S\) is at least \(t\). A complete graph is said to be \(t\)-tough for any \(t>0\). Chvatal's toughness conjecture, stating that there exists a constant \(t_{0}\) such that every \(t_{0}\)-tough graph with at least three vertices is Hamiltonian, is still open in general. Chvatal and Erdos [8] proved that, for any integer \(k\geq 1\), every \(\max\{2,k\}\)-connected \((k+1)P_{1}\)-free graph on at least three vertices is Hamiltonian. Along the Chvatal-Erdos theorem, Shi and Shan [18] proved that, for any integer \(k\geq 4\), every \(4\)-tough \(2k\)-connected \((P_{2}\cup kP_{1})\)-free graph with at least three vertices is Hamiltonian, and furthermore, they proposed a conjecture that for any integer \(k\geq 1\), any \(1\)-tough \(2k\)-connected \((P_{2}\cup kP_{1})\)-free graph is Hamiltonian. In this paper, we confirm the conjecture, and furthermore, we show that if \(k\geq 3\), then the condition '\(2k\)-connected' may be weakened to be '\(2(k-1)\)-connected'. As an immediate consequence, for any integer \(k\geq 3\), every \((k-1)\)-tough \((P_{2}\cup kP_{1})\)-free graph is Hamiltonian. This improves the result of Hatfield and Grimm [11], stating that every \(3\)-tough \((P_{2}\cup 3P_{1})\)-free graph is Hamiltonian.
**Keywords:** toughness, Hamiltonian graph, \((P_{2}\cup kP_{1})\)-free graph
Introduction
Let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\). A graph \(G\) is Hamiltonian if there exists a cycle containing each vertex of \(G\). For a given graph \(H\), a graph \(G\) is called \(H\)-free if \(G\) does not contain \(H\) as an induced subgraph.
For vertex disjoint graphs \(H\) and \(F\), \(H\cup F\) denotes the disjoint union of graphs \(H\) and \(F\). A linear forest is a graph consisting of disjoint paths. As usual, \(P_{n}\) denotes the path on \(n\) vertices. For positive integer \(k\) and \(\ell\), \(kP_{\ell}\) denotes the linear forest consisting of \(k\) disjoint copies of the path \(P_{\ell}\).
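For small graphs, the forbidden-subgraph condition can be checked by brute force. The sketch below is only illustrative (the function name is ours, the search is exponential, and networkx is used for the graph representation): it looks for an edge \(uv\) together with \(k\) further vertices that are pairwise non-adjacent and non-adjacent to \(u\) and \(v\), which is exactly an induced copy of \(P_{2}\cup kP_{1}\).

```python
from itertools import combinations
import networkx as nx

def has_induced_P2_union_kP1(G, k):
    """True iff G contains P_2 U kP_1 as an induced subgraph: an edge uv plus
    k vertices that are pairwise non-adjacent and non-adjacent to u and v."""
    for u, v in G.edges:
        # vertices that could serve as the k isolated vertices of the copy
        candidates = [w for w in G.nodes if w not in (u, v)
                      and not G.has_edge(w, u) and not G.has_edge(w, v)]
        for S in combinations(candidates, k):
            if all(not G.has_edge(a, b) for a, b in combinations(S, 2)):
                return True
    return False

G = nx.Graph([(0, 1)])
G.add_nodes_from([2, 3, 4])
print(has_induced_P2_union_kP1(G, 3))                      # True: edge 01 plus {2, 3, 4}
print(has_induced_P2_union_kP1(nx.complete_graph(5), 1))   # False: K_5 is (P_2 U P_1)-free
```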
For a positive integer \(k\), a connected graph \(G\) is said to be \(k\)-connected if the deletion of any set of at most \(k-1\) vertices from \(G\) results in a connected graph.
For a graph \(G\) with \(S\subset V(G)\), denote by \(G[S]\) the subgraph of \(G\) induced by \(S\). Let \(G-S=G[V(G)-S]\). The number of components of \(G\) is denoted by \(c(G)\).
The toughness of a graph \(G\), denoted by \(\tau(G)\), is defined as
\[\tau(G)=\min\left\{\frac{|S|}{c(G-S)}:S\subseteq V(G),c(G-S)\geq 2\right\}\]
if \(G\) is not a complete graph and \(\tau(G)=\infty\) otherwise. For a positive real number \(t\), a graph \(G\) is called \(t\)-tough if \(\tau(G)\geq t\), that is, \(|S|\geq t\cdot c(G-S)\) for each \(S\subseteq V(G)\) with \(c(G-S)\geq 2\). The concept of toughness of a graph was introduced by Chvatal [7]. Clearly, every Hamiltonian graph is \(1\)-tough, but the converse is not true. Chvatal [7] proposed the following conjecture, which is known as Chvatal's toughness conjecture.
**Conjecture 1** (Chvatal).: _[_7_]_ _There exists a constant \(t_{0}\) such that every \(t_{0}\)-tough graph with at least three vertices is Hamiltonian._
Bauer, Broersma and Veldman [2] showed that \(t_{0}\geq\frac{9}{4}\) if it exists. Conjecture 1 has been confirmed for a number of special classes of graphs [1, 3, 4, 5, 6, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. For example, it has been confirmed for graphs with forbidden (small) linear forests, such as \(1\)-tough \(R\)-free graphs with \(R=P_{3}\cup P_{1},P_{2}\cup 2P_{1}\)[13], \(2\)-tough \(2P_{2}\)-free graphs [14, 16, 4], \(3\)-tough \((P_{2}\cup 3P_{1})\)-free graphs [11], \(7\)-tough \((P_{3}\cup 2P_{1})\)-free graphs [10] and \(15\)-tough \((P_{3}\cup P_{2})\)-free graphs [17]. Though great efforts have been made, it remains open.
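To make the toughness condition above concrete, the following brute-force sketch computes \(\tau(G)\) directly from the definition for small graphs (the function name and the use of networkx are our choices; the search over all vertex subsets is exponential and only meant for illustration):

```python
from itertools import combinations
import networkx as nx

def toughness(G):
    """min |S| / c(G - S) over S with c(G - S) >= 2; infinity for complete graphs."""
    n = G.number_of_nodes()
    if G.number_of_edges() == n * (n - 1) // 2:   # complete graphs are t-tough for all t
        return float("inf")
    best = float("inf")
    for r in range(1, n):
        for S in combinations(G.nodes, r):
            H = G.copy()
            H.remove_nodes_from(S)
            c = nx.number_connected_components(H)
            if c >= 2:
                best = min(best, len(S) / c)
    return best

print(toughness(nx.cycle_graph(5)))                   # 1.0: the 5-cycle is exactly 1-tough
print(toughness(nx.complete_bipartite_graph(2, 3)))   # 0.666...: remove the side of size 2
```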
If connectivity is considered, there is a classic result, due to Chvatal and Erdos [8].
**Theorem 1** (Chvatal and Erdos).: _[_8_]_ _For any integer \(k\geq 1\), every \(\max\{2,k\}\)-connected \((k+1)P_{1}\)-free graph on at least three vertices is Hamiltonian._
Note that \(k\)-connected \((k+1)P_{1}\)-free graphs must be \(1\)-tough and that constant connectivity condition cannot guarantee the existence of a Hamiltonian cycle in \((P_{2}\cup kP_{1})\)-free graphs. Supporting Chvatal's toughness conjecture, Shi and Shan [18] and Hatfield and Grimm [11] established the following interesting results.
**Theorem 2** (Shi and Shan).: _[_18_]_ _For any integer \(k\geq 4\), every \(4\)-tough \(2k\)-connected \((P_{2}\cup kP_{1})\)-free graph is Hamiltonian._
**Theorem 3** (Hatfield and Grimm).: _[_11_]_ _Every \(3\)-tough \((P_{2}\cup 3P_{1})\)-free graph is Hamiltonian._
Shi and Shan [18] proposed the following conjecture.
**Conjecture 2** (Shi and Shan).: _[_18_]_ _Let \(k\geq 4\) be an integer. Let \(G\) be a \(1\)-tough \(2k\)-connected \((P_{2}\cup kP_{1})\)-free graph. Then \(G\) is Hamiltonian._
In this paper, we show that Conjecture 2 is true by showing the following result.
**Theorem 4**.: _For any integer \(k\geq 1\), every \(1\)-tough \(2k\)-connected \((P_{2}\cup kP_{1})\)-free graph is Hamiltonian._
Furthermore, we show the following stronger result.
**Theorem 5**.: _Let \(k\) be an integer with \(k\geq 3\). Every \(1\)-tough \((2k-2)\)-connected \((P_{2}\cup kP_{1})\)-free graph is Hamiltonian._
Theorems 4 and 5 echo the Chvatal-Erdos theorem [8] (Theorem 1). Note that a non-complete \((k-1)\)-tough graph must be \((2k-2)\)-connected for \(k\geq 3\). An immediate consequence of Theorem 5 is as follows, from which we also have Theorem 3 due to Hatfield and Grimm [11].
**Corollary 1**.: _For any integer \(k\geq 3\), every \((k-1)\)-tough \((P_{2}\cup kP_{1})\)-free graph is Hamiltonian._
Preliminaries
We introduce some notations.
For \(v\in V(G)\), \(N_{G}(v)\) denotes the neighborhood of \(v\) in \(G\). For \(v\in V(G)\) and a subgraph \(F\) of \(G\), let \(N_{F}(v)=N_{G}(v)\cap V(F)\). For \(S\subseteq V(G)\), \(N_{F}(S)=\bigcup_{v\in S}N_{F}(v)\). If \(H\) is a subgraph of \(G\), then we write \(N_{F}(H)\) for \(N_{F}(V(H))\).
Let \(C\) be an oriented cycle, where the orientation is always clockwise. For \(u\in V(C)\), denote by \(u^{+1}\) the immediate successor of \(u\) and \(u^{-1}\) the immediate predecessor of \(u\) on \(C\). For an integer \(\ell\geq 2\), denote by \(u^{+\ell}\) the immediate successor of \(u^{+(\ell-1)}\) and \(u^{-\ell}\) the immediate predecessor of \(u^{-(\ell-1)}\) on \(C\). For convenience, we write \(u^{+}\) for \(u^{+1}\) and \(u^{-}\) for \(u^{-1}\). For \(S\subseteq V(C)\), let \(S^{+}=\{u^{+}:u\in S\}\). For \(u,v\in V(C)\), \(u\overrightarrow{C}v\) denotes the segment of \(C\) from \(u\) to \(v\) which follows the orientation of \(C\), while \(u\overleftarrow{C}v\) denotes the opposite segment of \(C\) from \(u\) to \(v\). Particularly, if \(u=v\), then \(u\overrightarrow{C}v=u\) and \(u\overleftarrow{C}v=u\).
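This notation can be read as cyclic index arithmetic. A minimal sketch (the list representation and function names are ours) of the successor, predecessor, and forward segment:

```python
def succ(C, u, l=1):
    """u^{+l}: the l-th successor of u on the oriented cycle C, given as a list in clockwise order."""
    return C[(C.index(u) + l) % len(C)]

def pred(C, u, l=1):
    """u^{-l}: the l-th predecessor of u on C."""
    return C[(C.index(u) - l) % len(C)]

def forward_segment(C, u, v):
    """The segment from u to v following the orientation of C (endpoints included); [u] when u = v."""
    i, j = C.index(u), C.index(v)
    return C[i:j + 1] if i <= j else C[i:] + C[:j + 1]

C = ["a", "b", "c", "d", "e"]                       # clockwise order
assert succ(C, "e") == "a" and pred(C, "a") == "e"
assert forward_segment(C, "d", "b") == ["d", "e", "a", "b"]
assert forward_segment(C, "a", "a") == ["a"]
```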
For a graph \(G\) with \(u,v\in V(G)\), a \((u,v)\)-path is a path from \(u\) to \(v\) in \(G\).
## 3 Proof of Theorem 4
Proof of Theorem 4.: Suppose to the contrary that \(G\) is a \(1\)-tough \(2k\)-connected (\(P_{2}\cup kP_{1}\))-free graph but \(G\) is not Hamiltonian. Then \(G\) is not complete. As \(G\) is \(2k\)-connected, there are cycles in \(G\). Let \(C\) be a longest cycle in \(G\). As \(G\) is not Hamiltonian, \(V(G)\setminus V(C)\neq\emptyset\). Observe that \(N_{C}(H)\neq\emptyset\) for any component \(H\) of \(G-V(C)\).
**Claim 1**.: _For any component \(H\) of \(G-V(C)\), \(N_{C}(H)^{+}\) is an independent set, and \(N_{C}(H)\cap N_{C}(H)^{+}=\emptyset\)._
Proof.: Suppose that \(N_{C}(H)^{+}\) is not independent for some component \(H\) of \(G-V(C)\). Let \(N_{C}(H)=\{u_{1},\ldots,u_{t}\}\). Then \(u_{i}^{+}u_{j}^{+}\in E(G)\) for some \(i\) and some \(j\) with \(1\leq i<j\leq t\). Let \(u_{i}^{\prime}\) be a neighbor of \(u_{i}\) in \(H\) and \(u_{j}^{\prime}\) a neighbor of \(u_{j}\) in \(H\). As \(H\) is connected, there is a \((u_{j}^{\prime},u_{i}^{\prime})\)-path \(P\) in \(H\). Then
\[u_{i}\overleftarrow{C}u_{j}^{+}u_{i}^{+}\overrightarrow{C}u_{j}u_{j}^{\prime }Pu_{i}^{\prime}u_{i}\]
is a cycle of \(G\) longer than \(C\), a contradiction. So \(N_{C}(H)^{+}\) is an independent set of \(G\).
As \(N_{C}(H)^{+}\) is an independent set of \(G\), we have \(N_{C}(H)\cap N_{C}(H)^{+}=\emptyset\).
**Claim 2**.: _Every component of \(G-V(C)\) is trivial._
Proof.: Suppose to the contrary that there exists a nontrivial component \(H\) of \(G-V(C)\). Then \(H\) contains an edge \(uv\). By Claim 1, \(N_{C}(H)\cap N_{C}(H)^{+}=\emptyset\), so \(G-N_{C}(H)\) is not connected, and \(N_{C}(H)\) is a vertex cut of \(G\). As \(G\) is \(2k\)-connected, we have \(|N_{C}(H)^{+}|=|N_{C}(H)|\geq 2k\). By Claim 1, \(N_{C}(H)^{+}\) is an independent set of \(G\) and then the graph \(G\left[N_{C}(H)^{+}\cup\{u,v\}\right]\) contains exactly one edge \(uv\) and so it contains \(P_{2}\cup 2kP_{1}\) as an induced subgraph. Thus, \(G\) contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction.
Let \(x\in V(G)\setminus V(C)\) and \(N_{C}(x)=\{x_{1},\ldots,x_{t}\}\). By Claim 2, \(x\) is an isolated vertex of \(G-V(C)\). So, by Claim 1 and the fact that \(G\) is \(2k\)-connected, \(t\geq 2k\). For \(i=1,\ldots,t\), denote by \(S_{i}\) the vertex set of the segment \(x_{i}^{+}\overrightarrow{C}x_{i+1}^{-}\) of \(C\) from \(x_{i}^{+}\) to \(x_{i+1}^{-}\), where \(x_{t+1}=x_{1}\). As \(x_{i}\) and \(x_{i+1}\) are not consecutive vertices on \(C\) by Claim 1, \(|S_{i}|\geq 1\).
**Claim 3**.: _For \(i=1,\ldots,t\), \(|S_{i}|\) is odd and \(N_{C}(x_{i}^{+j})\cap N_{C}(x)^{+}=\emptyset\) if \(1\leq j\leq|S_{i}|\) with \(j\equiv 1\ (\mathrm{mod}\ 2)\)._
Proof.: Firstly, we prove the second part.
Assume that \(i=1\) as the argument applies also to the case \(i=2,\ldots,t\).
Take an arbitrary \(X\subseteq N_{C}(x)^{+}\) with \(|X|=2k\) and \(x_{1}^{+}\in X\). We will show that
\[|N_{C}(x_{1}^{+j})\cap X|\begin{cases}=0&\text{if $j$ is odd},\\ \geq k+2&\text{if $j$ is even}\end{cases} \tag{1}\]
by induction on \(j\) for integers \(j=1,\ldots,|S_{1}|\). If \(j=1\), then by Claim 1, \(N_{C}(x)^{+}\) is an independent set of \(G\), so \(N_{C}(x_{1}^{+1})\cap N_{C}(x)^{+}=\emptyset\), i.e., (1) follows for \(j=1\). Suppose that (1) is not true for \(j=2\). Then \(\left|N_{C}(x_{1}^{+2})\cap X\right|\leq k+1\). So \(\left|\left(X\setminus N_{C}(x_{1}^{+2}))\cup\{x,x_{1}^{+},x_{1}^{+2}\}\right| \geq k+2\), and \(G\left[(X\setminus N_{C}(x_{1}^{+2}))\cup\left\{x,x_{1}^{+},x_{1}^{+2}\right\}\right]\) contains exactly one edge \(x_{1}^{+}x_{1}^{+2}\), so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. Thus, (1) follows for \(j=2\).
Let \(j\) be an integer with \(3\leq j\leq|S_{1}|\). Suppose that (1) holds for \(j-1\).
Suppose that \(j\) is odd. By the inductive hypothesis, \(\left|N_{C}(x_{1}^{+(j-1)})\cap X\right|\geq k+2\). Suppose that \(N_{C}(x_{1}^{+j})\cap X\neq\emptyset\). Then there exists some \(x_{r}^{+}\in N_{C}(x_{1}^{+j})\cap X\). If \(\left|N_{C}(x_{1}^{+j})\cap X\right|\leq k+1\), then \(\left|\left(X\setminus N_{C}(x_{1}^{+j})\right)\cup\left\{x,x_{1}^{+j},x_{r}^{ +}\right\}\right|\geq k+2\) and \(G\left[(X\setminus N_{C}(x_{1}^{+j}))\cup\left\{x,x_{1}^{+j},x_{r}^{+}\right\}\right]\) contains exactly one edge \(x_{1}^{+j}x_{r}^{+}\)
so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. Then \(\left|N_{C}(x_{1}^{+j})\cap X\right|\geq k+2\) and hence
\[\left|N_{C}(x_{1}^{+(j-1)})\cap N_{C}(x_{1}^{+j})\cap X\right| \geq\left|N_{C}(x_{1}^{+(j-1)})\cap X\right|+\left|N_{C}(x_{1}^{+j })\cap X\right|-\left|X\right|\] \[\geq k+2+k+2-2k>2.\]
Assume that \(x_{p}^{+},x_{q}^{+}\in N_{C}(x_{1}^{+(j-1)})\cap N_{C}(x_{1}^{+j})\cap X\) with \(1\leq p<q\leq t\). If \(p\geq 2\), then
\[x_{1}^{+(j-1)}x_{p}^{+}\overrightarrow{C}x_{q}xx_{p}\overleftarrow{C}x_{1}^{ +j}x_{q}^{+}\overrightarrow{C}x_{1}^{+(j-1)}\]
is a cycle of \(G\) longer than \(C\), a contradiction. So \(p=1\), and then
\[x_{1}^{+(j-1)}x_{q}^{+}\overrightarrow{C}x_{1}xx_{q}\overleftarrow{C}x_{1}^ {+j}x_{1}^{+}\overrightarrow{C}x_{1}^{+(j-1)}\]
is a cycle of \(G\) longer than \(C\), also a contradiction. Therefore, \(N_{C}(x_{1}^{+j})\cap X=\emptyset\). This is (1) for odd \(j\).
Now suppose that \(j\) is even. By the inductive hypothesis, \(N_{C}(x_{1}^{+(j-1)})\cap X=\emptyset\). If \(\left|N_{C}(x_{1}^{+j})\cap X\right|\leq k+1\), then \(\left|(X\setminus N_{C}(x_{1}^{+j}))\cup\left\{x,x_{1}^{+j},x_{1}^{+(j-1)}\right\}\right|\geq k+2\), and \(G\left[(X\setminus N_{C}(x_{1}^{+j}))\cup\left\{x,x_{1}^{+j},x_{1}^{+(j-1)}\right\}\right]\) contains exactly one edge \(x_{1}^{+j}x_{1}^{+(j-1)}\), so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. So \(\left|N_{C}(x_{1}^{+j})\cap X\right|\geq k+2\). This is (1) for even \(j\).
If \(1\leq j\leq\left|S_{i}\right|\) with \(j\equiv 1\) (mod 2), then by (1), \(N_{C}(x_{1}^{+j})\cap X=\emptyset\) for any \(X\subseteq N_{C}(x)^{+}\) with \(\left|X\right|=2k\) and \(x_{1}^{+}\in X\), so
\[N_{C}(x_{1}^{+j})\cap N_{C}(x)^{+}=N_{C}(x_{1}^{+j})\bigcap\bigcup_{ \stackrel{{ X\subseteq N_{C}(x)^{+}}}{{\left|X\right|=2k}}}X=\emptyset.\]
This proves the second part.
Secondly, we prove the first part. Suppose that \(\left|S_{1}\right|\) is even. By (1), \(\left|N_{C}(x_{1}^{+\left|S_{1}\right|})\cap X\right|\geq k+2\). If \(\left|N_{C}(x_{2})\cap X\right|\leq k\), then \(G\left[(X\setminus N_{C}(x_{2}))\cup\left\{x,x_{2}\right\}\right]\) contains exactly one edge \(xx_{2}\) and so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. So \(\left|N_{C}(x_{2})\cap X\right|\geq k+1\) and then by a similar argument as above, we have \(\left|N_{C}(x_{2})\cap N_{C}(x_{1}^{+\left|S_{1}\right|})\cap X\right|\geq 2\) and so we may obtain a cycle of \(G\) longer than \(C\), a contradiction. Therefore, \(\left|S_{1}\right|\) is odd, as desired.
Let \(S^{\prime}=\cup_{i=1}^{t}S^{\prime}_{i}\), where \(S^{\prime}_{i}=\left\{x_{i}^{+j}\in S_{i}:j\equiv 1\pmod{2}\right\}\) for \(i=1,\ldots,t\). Suppose that there is an edge \(uv\) in \(G[S^{\prime}]\). By Claim 3, \(N_{C}(u)\cap N_{C}(x)^{+}=\emptyset\) and \(N_{C}(v)\cap N_{C}(x)^{+}=\emptyset\). By Claim 1, \(N_{C}(x)^{+}\) is an independent set. So \(uv\) is the unique edge in \(G\left[N_{C}(x)^{+}\cup\{u,v\}\right]\). Recall that \(|N_{C}(x)^{+}|\geq 2k\). So \(G\left[N_{C}(x)^{+}\cup\{u,v\}\right]\) contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. So \(S^{\prime}\) is an independent set of \(G\). By Claim 3, \(|S_{i}|\) is odd for each \(i=1,\ldots,t\), so
\[|V(C)|=2|S^{\prime}|.\]
**Claim 4**.: _For any \(y\in V(G)\setminus V(C)\), \(N_{C}(y)\cap S^{\prime}=\emptyset\)._
Proof.: The case \(y=x\) is obvious by the definition of \(S^{\prime}\). Suppose that \(y\neq x\).
Firstly, we show that
\[N_{C}(y)\cap N_{C}(x)^{+}=\emptyset.\]
Otherwise, \(|N_{C}(y)\cap N_{C}(x)^{+}|\geq 1\). If \(|N_{C}(y)\cap N_{C}(x)^{+}|\geq 2\), then
\[x_{p}\overleftarrow{C}x_{q}^{+}yx_{p}^{+}\overrightarrow{C}x_{q}xx_{p}\]
is a cycle longer than \(C\) for some \(x_{p}^{+},x_{q}^{+}\in N_{C}(y)\) with \(1\leq p<q\leq t\), which is a contradiction. So \(|N_{C}(y)\cap N_{C}(x)^{+}|=1\), say \(y^{\prime}\in N_{C}(y)\cap N_{C}(x)^{+}\). By Claim 1, \(N_{C}(x)^{+}\) is an independent set of \(G\). Recall that \(|N_{C}(x)^{+}|\geq 2k\). Then \(G\left[N_{C}(x)^{+}\cup\{y\}\right]\) contains exactly one edge \(yy^{\prime}\) and so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, also a contradiction. So \(N_{C}(y)\cap N_{C}(x)^{+}=\emptyset\).
Now we show that \(N_{C}(y)\cap S^{\prime}=\emptyset\). Suppose that this is not true. Then there is a vertex \(z\in N_{C}(y)\cap S^{\prime}\). Since \(N_{C}(y)\cap N_{C}(x)^{+}=\emptyset\) and \(N_{C}(x)^{+}\subseteq S^{\prime}\), we have \(z\notin N_{C}(x)^{+}\). By Claim 3, \(z\) is not adjacent to any vertex in \(N_{C}(x)^{+}\). Then \(G\left[N_{C}(x)^{+}\cup\{y,z\}\right]\) contains exactly one edge \(yz\) and so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction.
Let \(S=V(C)\setminus S^{\prime}\). Then \(|S|=\frac{1}{2}|V(C)|\). By Claim 4, for any \(y\in V(G)\setminus V(C)\), \(y\) is not adjacent to any vertex in \(S^{\prime}\). So \((V(G)\setminus V(C))\cup S^{\prime}\) is an independent set by Claim 2. So \(c(G-S)=|V(G)|-|V(C)|+|S^{\prime}|>\frac{1}{2}|V(C)|=|S|\) and so
\[\tau(G)\leq\frac{|S|}{c(G-S)}<1,\]
contradicting the condition that \(\tau(G)\geq 1\). This completes the proof of Theorem 4.
Proof of Theorem 5
Proof of Theorem 5.: Suppose to the contrary that \(G\) is a \(1\)-tough \((2k-2)\)-connected \((P_{2}\cup kP_{1})\)-free graph that is not Hamiltonian. Then \(G\) is not complete. As \(G\) is \((2k-2)\)-connected, there are cycles in \(G\). Let \(C\) be a longest cycle in \(G\). Then \(V(G)\setminus V(C)\neq\emptyset\). It is evident that \(N_{C}(H)\neq\emptyset\) for any component \(H\) of \(G-V(C)\).
By the same argument as in Claim 1, we have
**Claim 5**.: _For any component \(H\) of \(G-V(C)\), \(N_{C}(H)^{+}\) is an independent set, and \(N_{C}(H)\cap N_{C}(H)^{+}=\emptyset\)._
For any component \(H\) of \(G-V(C)\), \(N_{C}(H)\cap N_{C}(H)^{+}=\emptyset\) by Claim 5, and so \(N_{C}(H)\) is a vertex cut of \(G\). As \(G\) is \((2k-2)\)-connected, \(|N_{C}(H)^{+}|=|N_{C}(H)|\geq 2k-2\). Similarly as in Claim 2, we have
**Claim 6**.: _Every component of \(G-V(C)\) is trivial._
Let \(x\in V(G)\setminus V(C)\) and \(N_{C}(x)=\{x_{1},\ldots,x_{t}\}\). By Claim 6, \(x\) is an isolated vertex of \(G-V(C)\). So \(t=|N_{C}(x)|\geq 2k-2\). For convenience, let \(x_{t+i}=x_{i}\) for \(i=1,\ldots,t\). For \(i=1,\ldots,t\), denote by \(S_{i}\) the vertex set of the segment \(x_{i}^{+}\overrightarrow{C}x_{i+1}^{-}\) of \(C\) from \(x_{i}^{+}\) to \(x_{i+1}^{-}\). By Claim 5, \(x_{i}\) and \(x_{i+1}\) are not consecutive vertices on \(C\), so \(|S_{i}|\geq 1\).
**Claim 7**.: _For \(i=1,\ldots,t\) and \(j=1,\ldots,|S_{i}|\), \(N_{C}(x)^{+}\cap N_{C}(x_{i}^{+j})=\emptyset\) if \(j\) is odd and \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j})\right|\leq k-2\) if \(j\) is even._
Proof.: We prove Claim 7 by induction on \(j\) for \(j=1,\ldots,|S_{i}|\).
By Claim 5, \(N_{C}(x)^{+}\) is an independent set, so \(N_{C}(x)^{+}\cap N_{C}(x_{i}^{+})=\emptyset\), i.e., Claim 7 follows for \(j=1\).
Suppose \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+2})\right|\geq k-1\). Let \(V_{i}=(N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+2}))\cup\{x,x_{i}^{+},x_{i}^{+2}\}\). Then \(G[V_{i}]\) contains exactly one edge \(x_{i}^{+}x_{i}^{+2}\), so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. Thus \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+2})\right|\leq k-2\), i.e., Claim 7 follows for \(j=2\).
Let \(j\) be an integer with \(3\leq j\leq|S_{i}|\) and suppose that Claim 7 holds for \(j-1\).
Suppose that \(j\) is odd. By induction assumption,
\[\left|N_{C}(x)^{+}\setminus N_{C}\left(x_{i}^{+(j-1)}\right)\right|\leq k-2.\]
We want to show that \(N_{C}(x)^{+}\cap N_{C}(x_{i}^{+j})=\emptyset\). Suppose that \(N_{C}(x)^{+}\cap N_{C}(x_{i}^{+j})\neq\emptyset\), say \(z\in N_{C}(x)^{+}\cap N_{C}(x_{i}^{+j})\). If \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j})\right|\geq k-1\), then \(G\left[(N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j}))\cup\{x,x_{i}^{+j},z\}\right]\) contains exactly one edge \(x_{i}^{+j}z\) and so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. Thus
\[\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j})\right|\leq k-2,\]
which implies that
\[\left|N_{C}(x_{i}^{+j})\cap N_{C}\left(x_{i}^{+(j-1)}\right)\cap N _{C}(x)^{+}\right|\] \[\geq\left|N_{C}(x_{i}^{+j})\cap N_{C}(x)^{+}\right|+\left|N_{C} \left(x_{i}^{+(j-1)}\right)\cap N_{C}(x)^{+}\right|-\left|N_{C}(x)^{+}\right|\] \[\geq t-(k-2)+t-(k-2)-t\geq 2.\]
So we may assume that \(x_{p}^{+},x_{q}^{+}\in N_{C}(x_{i}^{+j})\cap N_{C}\left(x_{i}^{+(j-1)}\right) \cap N_{C}(x)^{+}\) with \(i+1\leq p<q\leq t+i\). Then
\[x_{i}^{+(j-1)}x_{p}^{+}\overrightarrow{C}x_{q}xx_{p}\overleftarrow{C}x_{i}^{ +j}x_{q}^{+}\overrightarrow{C}x_{i}^{+(j-1)}\]
is a cycle of \(G\) longer than \(C\), a contradiction. Therefore, \(N_{C}(x)^{+}\cap N_{C}(x_{i}^{+j})=\emptyset\), proving Claim 7 for odd \(j\).
Suppose that \(j\) is even. By induction assumption, \(N_{C}(x)^{+}\cap N_{C}\left(x_{i}^{+(j-1)}\right)=\emptyset\). If \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j})\right|\geq k-1\), then \(G\left[(N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j}))\cup\{x,x_{i}^{+(j-1)},x_{i}^{ +j}\}\right]\) contains exactly one edge \(x_{i}^{+(j-1)}x_{i}^{+j}\), so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. So \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+j})\right|\leq k-2\), proving Claim 7 for even \(j\).
For \(i=1,\ldots,t\), let \(S_{i}^{\prime}=\left\{x_{i}^{+j}\in S_{i}:j\equiv 1\ (\bmod\ 2)\right\}\) and \(S^{\prime}:=\cup_{i=1}^{t}S_{i}^{\prime}\).
**Claim 8**.: \(S^{\prime}\) _is an independent set._
Proof.: Suppose that \(S^{\prime}\) is not independent. Then there is an edge \(uv\) in \(G[S^{\prime}]\). By Claim 7, \(N_{C}(x)^{+}\cap N_{C}(u)=\emptyset\) and \(N_{C}(x)^{+}\cap N_{C}(v)=\emptyset\). By Claim 5, \(N_{C}(x)^{+}\) is an independent set. It thus follows that \(uv\) is the unique edge in \(G\left[N_{C}(x)^{+}\cup\{u,v\}\right]\). As \(k\geq 3\), \(\left|N_{C}(x)^{+}\right|\geq 2k-2\geq k+1\). So \(G\left[N_{C}(x)^{+}\cup\{u,v\}\right]\) contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction.
**Claim 9**.: _For any \(z\in V(C)\setminus(N_{C}(x)\cup S^{\prime})\), \(\left|S^{\prime}\setminus N_{C}(z)\right|\leq k-2\)._
Proof.: Suppose that \(|S^{\prime}\setminus N_{C}(z)|\geq k-1\) for some \(z\in V(C)\setminus(N_{C}(x)\cup S^{\prime})\). As \(z\notin N_{C}(x)\) and \(z^{-}\in S^{\prime}\), we have by Claim 8 that \((S^{\prime}\setminus N_{C}(z))\cup\{z^{-}\}\) is an independent set. Then \(G\left[(S^{\prime}\setminus N_{C}(z))\cup\{x,z,z^{-}\}\right]\) contains exactly one edge \(zz^{-}\) and so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction. Therefore \(|S^{\prime}\setminus N_{C}(z)|\leq k-2\).
**Claim 10**.: _For any \(i=1,\ldots,t\), \(|N_{C}(x)^{+}\setminus N_{C}(x_{i})|\leq k-1\)._
Proof.: If \(|N_{C}(x)^{+}\setminus N_{C}(x_{i})|\geq k\) for some \(i=1,\ldots,t\), then by Claim 5, \(G\left[(N_{C}(x)^{+}\setminus N_{C}(x_{i}))\cup\{x_{i},x_{i}^{+}\}\right]\) contains exactly one edge \(x_{i}x_{i}^{+}\) and so it contains \(P_{2}\cup kP_{1}\) as an induced subgraph, a contradiction.
**Claim 11**.: _If \(|S_{i}|\) is even for some \(i=1,\ldots,t\), then_
* \(t=2k-2\)_,_
* \(x_{i+1}^{+},\ldots,x_{i+k-2}^{+}\notin N_{C}\left(x_{i}^{+|S_{i}|}\right)\)_,_
* \(x_{i+k}^{+},\ldots,x_{i+t}^{+}\notin N_{C}(x_{i+1})\)_._
Proof.: Let \(x_{p}^{+}\in N_{C}\left(x_{i}^{+|S_{i}|}\right)\) such that \(i+1\leq p\leq t+i\) and \(p\) is as small as possible. Let \(x_{q}^{+}\in N_{C}(x_{i+1})\) such that \(i+1\leq q\leq t+i\) and \(q\) is as large as possible. If \(p<q\), then
\[x_{i}^{+|S_{i}|}x_{p}^{+}\overrightarrow{C}x_{q}xx_{p}\overleftarrow{C}x_{i+ 1}x_{q}^{+}\overrightarrow{C}x_{i}^{+|S_{i}|}\]
is a cycle of \(G\) longer than \(C\), a contradiction. So \(p\geq q\).
As \(|S_{i}|\) is even, we have by Claim 7 that \(\left|N_{C}(x)^{+}\setminus N_{C}(x_{i}^{+|S_{i}|})\right|\leq k-2\). By Claim 10, \(|N_{C}(x)^{+}\setminus N_{C}(x_{i+1})|\leq k-1\). Hence
\[\left|N_{C}(x_{i}^{+|S_{i}|})\cap N_{C}(x_{i+1})\cap N_{C}(x)^{+}\right|\] \[\geq\left|N_{C}(x_{i}^{+|S_{i}|})\cap N_{C}(x)^{+}\right|+\left| N_{C}(x_{i+1})\cap N_{C}(x)^{+}\right|-\left|N_{C}(x)^{+}\right|\] \[\geq t-(k-2)+t-(k-1)-t\] \[\geq 2k-2-(2k-3)=1.\]
Suppose that \(\left|N_{C}(x_{i}^{+|S_{i}|})\cap N_{C}(x_{i+1})\cap N_{C}(x)^{+}\right|\geq 2\). Then there exist \(x_{\ell}^{+},x_{r}^{+}\in N_{C}(x_{i}^{+|S_{i}|})\cap N_{C}(x_{i+1})\) with \(i+1\leq\ell<r\leq t+i\). By the definition of \(p\) and
\(q\), we have \(p\leq\ell<r\leq q\), contradicting the fact that \(p\geq q\). Therefore, we have \(\left|N_{C}(x_{i}^{+|S_{i}|})\cap N_{C}(x_{i+1})\cap N_{C}(x)^{+}\right|=1\), implying that \(t=2k-2\), \(\left|N_{C}(x)^{+}\cap N_{C}(x_{i}^{+|S_{i}|})\right|=k\) and \(|N_{C}(x)^{+}\cap N_{C}(x_{i+1})|=k-1\). Therefore, by the definition of \(p\) and \(q\) again and the fact that \(p\geq q\), we have \(p=q=i+k-1\), \(x_{i+1}^{+},\ldots,x_{i+k-2}^{+}\notin N_{C}\left(x_{i}^{+|S_{i}|}\right)\) and \(x_{i+k}^{+},\ldots,x_{i+t}^{+}\notin N_{C}(x_{i+1})\).
**Claim 12**.: _If \(|S_{i}|\) is even for some \(i=1,\ldots,t\), then \(|S_{j}|\) is even for \(j=i+k-1,\ldots,i+2k-2\)._
Proof.: Suppose that \(|S_{j}|\) is odd for some \(j=i+k-1,\ldots,i+2k-2\). Then \(x_{j}^{+|S_{j}|}\in S^{\prime}\). By Claim 9, \(\left|S^{\prime}\setminus N_{C}\left(x_{i}^{+|S_{i}|}\right)\right|\leq k-2\). As \(x_{i+1}^{+},\ldots,x_{i+k-2}^{+}\notin N_{C}\left(x_{i}^{+|S_{i}|}\right)\) by Claim 11, we have \(S^{\prime}\setminus\{x_{i+1}^{+},\ldots,x_{i+k-2}^{+}\}\subseteq N_{C}\left(x _{i}^{+|S_{i}|}\right)\) and hence \(x_{i}^{+|S_{i}|}x_{j}^{+|S_{j}|}\in E(G)\). So
\[x_{i}^{+|S_{i}|}x_{j}^{+|S_{j}|}\overleftarrow{C}x_{i+1}xx_{j+1}\overrightarrow {C}x_{i}^{+|S_{i}|}\]
is a cycle of \(G\) longer than \(C\), a contradiction. Therefore, \(|S_{j}|\) is even for all \(j=i+k-1,\ldots,i+2k-2\).
**Claim 13**.: _For \(i=1,\ldots,t\), \(|S_{i}|\) is odd._
Proof.: Suppose to the contrary that \(|S_{i}|\) is even for some \(i=1,\ldots,t\). Then by Claims 11 and 12, \(t=2k-2\) and \(|S_{j}|\) is even for \(j\) with \(i+k-1\leq j\leq i+2k-2\). As \(|S_{i+k-1}|\) is even, we have by Claim 12 that \(|S_{j}|\) is even for \(i+k-1+k-1\leq j\leq i+k-1+2k-2\) and so \(|S_{j}|\) is even for each \(j=1,\ldots,2k-2\).
By Claim 10, \(|N_{C}(x)^{+}\setminus N_{C}(x_{i+1})|\leq k-1\), i.e., \(|N_{C}(x)^{+}\cap N_{C}(x_{i+1})|\geq t-(k-1)=k-1\geq 2\). Let \(x_{p}^{+}\in N_{C}(x)^{+}\cap N_{C}(x_{i+1})\) with \(p\neq i+1\). By Claim 11, \(i+2\leq p\leq i+k-1\). Then by Claims 9 and 11 again, we have \(S^{\prime}\setminus\{x_{p+1}^{+},\ldots,x_{p+k-2}^{+}\}\subseteq N_{C}(x_{p}^{+ |S_{p}|})\). As \(i+2\leq p\leq i+k-1\), we have \(p+k-2\leq i+2k-3\), \(p+1\geq i+3\), and hence \(\{p+1,\ldots,p+k-2\}\subseteq\{i+3,\ldots,i+2k-3\}\). So \(x_{p}^{+|S_{p}|}x_{i+1}^{+}\in E(G)\), implying that
\[x_{i+1}x_{p}^{+}\overrightarrow{C}x_{p}^{+|S_{p}|}x_{i+1}^{+}\overrightarrow{ C}x_{p}xx_{p+1}\overrightarrow{C}x_{i+1}\]
is a cycle of \(G\) longer than \(C\), a contradiction. So \(|S_{i}|\) is odd.
Recall that \(S^{\prime}=\cup_{i=1}^{t}S^{\prime}_{i}\). By Claim 13, \(|S_{i}|\) is odd for each \(i=1,\ldots,t\), so
\[|V(C)|=2|S^{\prime}|.\]
By the argument in Claim 4, we have
**Claim 14**.: _For any \(y\in V(G)\setminus V(C)\), \(N_{C}(y)\cap S^{\prime}=\emptyset\)._
By Claims 6 and 8, \(V(G)\setminus V(C)\) and \(S^{\prime}\) are independent sets. So \((V(G)\setminus V(C))\cup S^{\prime}\) is an independent set by Claim 14.
Let \(S=V(C)\setminus S^{\prime}\). Then \(|S|=\frac{1}{2}|V(C)|\). Therefore, \(c(G-S)=|V(G)|-|V(C)|+|S^{\prime}|>\frac{1}{2}|V(C)|\) and so
\[\tau(G)\leq\frac{|S|}{c(G-S)}<1,\]
contradicting \(\tau(G)\geq 1\). This completes the proof of Theorem 5.
**Acknowledgement.** This work was supported by the National Natural Science Foundation of China (No. 12071158).
|
2310.09679 | What Do Deep Saliency Models Learn about Visual Attention? | In recent years, deep saliency models have made significant progress in
predicting human visual attention. However, the mechanisms behind their success
remain largely unexplained due to the opaque nature of deep neural networks. In
this paper, we present a novel analytic framework that sheds light on the
implicit features learned by saliency models and provides principled
interpretation and quantification of their contributions to saliency
prediction. Our approach decomposes these implicit features into interpretable
bases that are explicitly aligned with semantic attributes and reformulates
saliency prediction as a weighted combination of probability maps connecting
the bases and saliency. By applying our framework, we conduct extensive
analyses from various perspectives, including the positive and negative weights
of semantics, the impact of training data and architectural designs, the
progressive influences of fine-tuning, and common failure patterns of
state-of-the-art deep saliency models. Additionally, we demonstrate the
effectiveness of our framework by exploring visual attention characteristics in
various application scenarios, such as the atypical attention of people with
autism spectrum disorder, attention to emotion-eliciting stimuli, and attention
evolution over time. Our code is publicly available at
\url{https://github.com/szzexpoi/saliency_analysis}. | Shi Chen, Ming Jiang, Qi Zhao | 2023-10-14T23:15:57Z | http://arxiv.org/abs/2310.09679v1 | # What Do Deep Saliency Models Learn about Visual Attention?
###### Abstract
In recent years, deep saliency models have made significant progress in predicting human visual attention. However, the mechanisms behind their success remain largely unexplained due to the opaque nature of deep neural networks. In this paper, we present a novel analytic framework that sheds light on the implicit features learned by saliency models and provides principled interpretation and quantification of their contributions to saliency prediction. Our approach decomposes these implicit features into interpretable bases that are explicitly aligned with semantic attributes and reformulates saliency prediction as a weighted combination of probability maps connecting the bases and saliency. By applying our framework, we conduct extensive analyses from various perspectives, including the positive and negative weights of semantics, the impact of training data and architectural designs, the progressive influences of fine-tuning, and common failure patterns of state-of-the-art deep saliency models. Additionally, we demonstrate the effectiveness of our framework by exploring visual attention characteristics in various application scenarios, such as the atypical attention of people with autism spectrum disorder, attention to emotion-eliciting stimuli, and attention evolution over time. Our code is publicly available at [https://github.com/szzexpoi/saliency_analysis](https://github.com/szzexpoi/saliency_analysis).
## 1 Introduction
Attention deployment is a complex and fundamental process that enables humans to selectively attend to important sensory data in the visual environment. Scientists have been fascinated by the question of what drives human visual attention for decades. Understanding the mechanisms of visual attention not only sheds light on the human visual system but also helps computational methods to localize critical sensory inputs more efficiently.
One approach to predicting human attention is through saliency models, which have received considerable research attention. A saliency model predicts the most visually important regions in an image that are likely to capture attention. Earlier models follow a feature integration approach [1; 2; 3; 4] and extract low-level features (_e.g.,_ colors, intensity, orientations) or higher-level features (_e.g.,_ objects, semantics) from the input image to infer saliency [5; 6; 7]. While these models show initial success, their performance is limited by the difficulty of engineering relevant visual features. In contrast, recent saliency models [8; 9; 10; 11] follow a data-driven approach, leveraging large datasets [12] and deep neural networks [13; 14; 15] to learn discriminative features. These models achieve human-level performance on several saliency benchmarks [12; 16; 17], thanks to their accurate detection of important objects and high-level semantics [18; 19]. However, due to the lack of transparency, it is still unclear what semantic features these models have captured to predict visual saliency.
To understand how deep neural networks predict visual saliency, in this paper, we develop a principled analytic framework and address several key questions through comprehensive analyses:
* How do deep saliency models differentiate salient and non-salient semantics?
* How do data and model designs affect semantic weights in saliency prediction?
* How does fine-tuning saliency models affect semantic weights?
* Can deep saliency models capture characteristics of human attention?
* What is missing to close the gap between saliency models and human attention?
Our method connects implicit features learned by deep saliency models to interpretable semantic attributes and quantifies their impact with a probabilistic method. It factorizes the features into trainable bases, and reformulates the saliency inference as a weighted combination of probability maps, with each map indicating the presence of a basis. By measuring the alignment between the bases and fine-grained semantic attributes (_e.g.,_ concepts in Visual Genome dataset [20]), it quantifies the relationships between diverse semantics and saliency. This unique capability enables us to identify the impact of various key factors on saliency prediction, including training datasets, model designs, and fine-tuning. It can also identify common failure patterns with state-of-the-art saliency models, such as SALICON [9], DINet [11], and TranSalNet [10]. Beyond general saliency prediction, the framework also shows promise in analyzing fine-grained attention preferences in specific application contexts, such as attention in people with autism spectrum disorder, attention to emotional-eliciting stimuli, and attention evolution over time. In sum, our method offers an interpretable interface that enables researchers to better understand the relationships between visual semantics and saliency prediction, as well as a tool for analyzing the performance of deep saliency models in various applications.
## 2 Related Works
Our work is most related to previous studies on visual saliency prediction, which make progress in both data collection and computational modeling.
**Saliency datasets.** With the overarching goal of understanding human visual perception, considerable efforts have been placed on constructing saliency datasets with diverse stimuli. The pioneering study [16] proposes an eye-tracking dataset with naturalistic images, which later becomes a popular online benchmark [21]. Several subsequent datasets characterize visual saliency into finer categories based on visual scenes [22], visual semantics [17], sentiments [23; 24], or temporal dynamics [25], to study the impact of different experimental factors on attention. To overcome the difficulties of tracking eye movements, Jiang _et al._[12] leverage crowd-sourcing techniques and use mouse-tracking as an approximation for eye gaze, which results in currently the largest saliency dataset. Recent works also consider broader ranges of visual stimuli, including videos [26; 27], graphical designs [28; 29], web pages [30; 31], crowds [32], driving scenes [33] and immersive environments [34; 35]. In this study, we focus on visual saliency for naturalistic images, which serves as the foundation of saliency prediction studies.
**Saliency models.** Prior works have developed saliency prediction models to quantitatively study human attention. Inspired by seminal works on saliency modeling [5; 6; 36], early saliency models [1; 2; 37; 4] typically adopt a bottom-up approach, integrating handcrafted features (_e.g.,_ colors, intensity, and orientations). On the other hand, recent approaches take a different route and leverage deep neural networks [13; 14; 15; 38] to automatically learn features and predict saliency. Vig _et al._[39] is one of the first attempts to utilize convolutional neural networks (CNNs) for saliency prediction. Huang _et al._[9] consider features learned from multi-scale inputs to model the coarse-to-fine semantics. Kruthiventi _et al._[40] leverage convolutional layers with diverse kernel sizes to capture multi-scale features and incorporate positional biases with location-dependent convolution. Kummerer _et al._[41] demonstrate the usefulness of features of visual objects in saliency prediction. Cornia _et al._[8] develop a recurrent neural network to iteratively refine features for saliency prediction. Yang _et al._[11] improve saliency prediction with dilated convolution to capture information from broader regions. Lou _et al._[10] study the usefulness of self-attention [15] for saliency prediction. Instead of building new models, several works [42; 43; 18; 44; 45; 46] study the behaviors of models. By analyzing predictions on different categories of stimuli, they identify key factors behind the successes and failures of existing saliency models (_e.g.,_ accurate detection of semantic objects, incorporation of low- and high-level features, contrast between diverse visual cues, detection of Odd-One-Out target, and etc.), and propose directions for improvements. A recent work [19] also analyzes the features
learned by saliency models by aligning activation maps with segmentation for a selection of objects (_e.g.,_ body parts, food, and vehicles in [17]).
Our research contributes to the field of attention research by introducing a rigorous methodology for analyzing deep saliency models. In contrast to previous studies [18; 19; 42; 43], our study has three key distinctions: First, while previous analyses of deep saliency models were restricted to a predefined set of salient objects (_e.g.,_ object segmentation used in [19]), our method automatically identifies both salient and non-salient semantics from a vocabulary of objects, parts, actions, and attributes. Second, different from previous qualitative analyses, our method quantifies the weights of these semantics in saliency prediction. It allows us to investigate the impacts of various factors (_e.g.,_ the contributions of positive/negative semantics, the characteristics of datasets and model designs, and the process of fine-tuning) on saliency prediction, offering insights into the development of future deep saliency models. Third, our approach goes beyond analyzing the general visual saliency and demonstrates its strength in characterizing human visual attention under specific conditions such as the attention of people with autism spectrum disorder, the saliency of emotion-eliciting images, and time-course attention evolution.
## 3 Methodology
Human visual attention is influenced by a spectrum of visual cues, from low-level contrasts to high-level semantic attributes [5]. However, current deep learning-based saliency models [8; 9; 11] remain opaque in terms of the semantic attributes they have learned and how these attributes contribute to saliency prediction. To address this gap, we propose a method that decomposes neural network features into discriminative bases aligned with a wide range of salient or non-salient semantic attributes, and quantifies their weights in saliency prediction.
As illustrated in Figure 1, our method decomposes visual features by projecting them onto a collection of trainable bases, and uses the probabilistic distribution of bases to infer visual saliency. The overall idea is to identify both salient and non-salient semantics and quantify their impact on saliency prediction. To achieve this, we start with a deep saliency model and compare the features learned at the penultimate layer with different bases. This is done using a dot product between features \(V\in\mathbb{R}^{M\times C}\) (\(M\) and \(C\) are the spatial resolution and dimension of features) and bases \(B\in\mathbb{R}^{N\times C}\) (\(N=1000\) is the number of bases defined based on the number of units in the final layers of deep saliency models [9; 10]), which corresponds to their cosine similarity:
\[\alpha=\sigma(V\otimes B^{T}) \tag{1}\]
Figure 1: Illustration of our method. It factorizes implicit features into trainable bases, and interprets the meanings of bases by aligning them with diverse semantics. Each basis can be interpreted as a weighted combination of semantics (_e.g.,_ face, female, and happy). By reformulating the saliency inference with a probabilistic method, the relationships between semantics and saliency can be quantified by integrating the model weights \(W_{sal}\) (_i.e.,_ the weight of each basis) and the semantic alignment \(O\) (_i.e.,_ the composition of semantics for each basis).
where \(\otimes\) denotes the dot product. \(\sigma\) is the Sigmoid activation for normalization. \(\alpha_{i,j}\in[0,1]\) represents the probability of \(j^{th}\) basis \(b_{j}\) detected at the \(i^{th}\) region \(P(b_{j}=1|V_{i})\). Inspired by [47] for decomposing model weights, we factorize the features as a weighted combination of matched bases:
\[V_{i}^{f}=\sum_{j=1}^{N}\alpha_{i,j}B_{j} \tag{2}\]
where \(V^{f}\in\mathbb{R}^{M\times C}\) are factorized features used to predict the saliency map \(S=W^{f}V^{f}\) (\(S\in\mathbb{R}^{M}\), \(W^{f}\) are weights of the last layer).
Upon building the connections between visual features and discriminative bases, we then re-route the final saliency prediction by (1) freezing all model weights including the bases, and (2) adjusting the last layer for saliency inference based on the probabilistic distribution \(\alpha\). We train a new layer (with weights \(W^{sal}\)) for predicting the saliency map:
\[S=\sum_{j=1}^{N}W_{j}^{sal}\alpha_{:,j} \tag{3}\]
Intuitively, the method formulates the problem of saliency prediction as learning the linear correlation between the detected bases and visual saliency, which can be denoted as learning \(P(S|b_{1},b_{2},...,b_{N})\). With the intrinsic interpretability of the design, _i.e.,_\(\alpha\) as the probabilistic distribution of bases and \(W^{sal}\) encoding the positive/negative importance of bases, we are able to investigate the weights of different bases to visual saliency.
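A minimal NumPy sketch of Equations 1-3 may help fix the shapes involved; the array sizes and random placeholder values are ours and do not correspond to a trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

M, C, N = 49, 2048, 1000            # spatial locations, feature channels, number of bases
V = np.random.randn(M, C)           # penultimate-layer features (placeholder)
B = np.random.randn(N, C)           # trainable bases (placeholder)

alpha = sigmoid(V @ B.T)            # Eq. 1: alpha[i, j] = P(b_j = 1 | V_i), shape (M, N)
V_f = alpha @ B                     # Eq. 2: factorized features, shape (M, C)

W_sal = np.random.randn(N)          # weights of the new last layer trained after freezing
S = alpha @ W_sal                   # Eq. 3: re-routed saliency values over the M locations
```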
The final step is to understand the semantic meanings of each basis. Unlike previous studies [18, 19, 42, 43] that focus on predefined salient objects, we take into account a comprehensive range of semantics without assumptions on their saliency. Specifically, our method leverages the factorization paradigm to measure the alignment between each basis and the semantics. Given an image and the regions of interest for different semantics (_e.g.,_ bounding box annotations in Visual Genome [20]), we (1) compute the probabilistic map for each \(j^{th}\) basis \(\alpha_{:,j}\in\mathbb{R}^{M}\), and (2) measure its alignment \(O_{j,p}\) with the regions of interest \(R_{p}\) for each semantic \(p\). Following [48], we binarize the probabilistic map with a threshold \(t_{j}\) and measure the alignment with Intersection over Union (IoU):
\[O_{j,p}=\frac{|\mathbb{I}[\alpha_{:,j}>t_{j}]\;\cap\;R_{p}|}{|\mathbb{I}[ \alpha_{:,j}>t_{j}]\;\cup\;R_{p}|} \tag{4}\]
We use an adaptive threshold \(t_{j}\) for each individual basis, which is defined to cover the top \(20\%\) of regions of probabilistic maps. Through iterating the measuring process for all images within the dataset, we are able to link bases learned from saliency prediction to a variety of visual semantics. We consider the top-5 semantics matched with each basis, and incorporate their average alignment scores \(\hat{O}_{j,p}\) for determining the weight \(I\) to saliency:
\[I_{p}=\frac{\sum\limits_{j=1}^{N}W_{j}^{sal}\hat{O}_{j,p}}{Z} \tag{5}\]
where \(Z\) is the normalization factor that normalizes the contribution of the semantics to the range of [-1, 1] (for semantics with positive/negative contributions, Z denotes the maximal/minimum contribution among all semantics). This approach enables us to capture a broader range of contributing semantics, while avoiding an overemphasis on dominant salient/non-salient semantics (_e.g.,_ face and cloudiness).
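The alignment and weighting steps of Equations 4-5 can be sketched as follows. The function names and toy shapes are ours, the "top 20% of regions" rule is implemented as an 80th-percentile threshold, and negative contributions are divided by the absolute value of the minimum so that the output stays in \([-1,1]\):

```python
import numpy as np

def alignment_score(alpha_j, region_mask, top_fraction=0.2):
    """Eq. 4: IoU between the binarized probability map of basis j (flat over the
    M locations) and the region-of-interest mask R_p of one semantic."""
    t_j = np.quantile(alpha_j, 1.0 - top_fraction)   # adaptive per-basis threshold
    detected = alpha_j > t_j
    region = region_mask.astype(bool)
    union = np.logical_or(detected, region).sum()
    return np.logical_and(detected, region).sum() / union if union > 0 else 0.0

def semantic_weights(W_sal, O_avg, top_k=5):
    """Eq. 5: combine per-basis weights W_sal (N,) with averaged alignments O_avg (N, P),
    keeping the top-5 semantics per basis and normalizing to [-1, 1]."""
    O_hat = np.zeros_like(O_avg)
    for j in range(O_avg.shape[0]):
        top = np.argsort(O_avg[j])[-top_k:]          # top-5 semantics matched with basis j
        O_hat[j, top] = O_avg[j, top]
    I = O_hat.T @ W_sal                              # raw contribution of each semantic
    pos = I[I > 0].max() if (I > 0).any() else 1.0
    neg = abs(I[I < 0].min()) if (I < 0).any() else 1.0
    return np.where(I > 0, I / pos, I / neg)
```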
Overall, our method establishes the foundation for bridging implicit features learned by deep saliency models with interpretable semantics. It goes beyond existing studies that analyze models' predictive behaviors on a selection of object categories, as it considers a comprehensive range of semantics without assuming their relevance to saliency. It provides insights into how well the models capture both salient and non-salient semantics, and how they quantitatively contribute to saliency prediction.
## 4 Experiment
### Implementation
**Semantic annotations.** To correlate implicit features with interpretable visual semantics, we leverage the Visual Genome [20] dataset. It has (1) multiple objects in the same scene to derive relative importance; and (2) a broad coverage of semantics in the naturalistic context, including objects, parts, attributes, and actions. We use the bounding box annotations of semantics (_i.e.,_\(R\) in Equation 4) to measure the alignment between bases and semantics.
**Model configuration.** We experiment with three state-of-the-art saliency prediction models, including SALICON [9], DINet [11] and TranSalNet [10]. All models are optimized with a combination of saliency evaluation metrics (_i.e.,_ Normalized Scanpath Saliency (NSS) [49], Correlation Coefficient (CC) [50], and KL-Divergence (KLD) [51]) as proposed in [8], and use ResNet-50 [13] as the backbone. Model training follows a two-step paradigm: (1) The model is optimized to factorize features with trainable bases, where the weighted combination of bases (_i.e.,_\(V^{f}\) in Equation 2) is used for predicting the saliency map. Note that we do not use pretrained and fixed deep saliency models, but optimize the corresponding model architecture with the proposed factorization modules. (2) We freeze the model weights learned in the previous step and reroute the saliency inference to derive the saliency map from the probabilistic distribution \(\alpha\) (see Equation 3) so that the interpretation is on features learned by the same saliency model. Only the last layer \(W^{sal}\) is fine-tuned to learn the correlation between the distribution of bases and visual saliency.
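As a rough sketch of the optimization objective, the three metrics can be written as follows; these are standard formulations, the maps are assumed non-negative, and the weighting in the last line is a placeholder rather than the exact combination of [8]:

```python
import numpy as np

EPS = 1e-8

def nss(pred, fixations):
    """Normalized Scanpath Saliency: mean of the standardized prediction at fixated pixels."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    return p[fixations > 0].mean()

def cc(pred, target):
    """Pearson correlation coefficient between predicted and ground-truth saliency maps."""
    p, t = pred - pred.mean(), target - target.mean()
    return (p * t).sum() / (np.sqrt((p ** 2).sum() * (t ** 2).sum()) + EPS)

def kld(pred, target):
    """KL divergence from the predicted to the ground-truth saliency distribution."""
    p = pred / (pred.sum() + EPS)
    t = target / (target.sum() + EPS)
    return (t * np.log(t / (p + EPS) + EPS)).sum()

def saliency_loss(pred, target, fixations):
    # lower is better: minimize KLD while maximizing CC and NSS (placeholder weights)
    return kld(pred, target) - cc(pred, target) - nss(pred, fixations)
```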
### How Do Deep Saliency Models Differentiate Salient and Non-Salient Semantics?
Deep saliency models are powerful tools for predicting visual saliency, and their ability to incorporate semantic information is crucial for closing the gap between computational modeling and human behaviors [17]. To gain insight into the semantics that deep saliency models learn and their contributions to saliency prediction, we apply our framework to the state-of-the-art DINet [11] trained on the SALICON [12] dataset (see Section 4.3 for results on other models and datasets), and explicitly measure the weights of diverse semantics during the inference process (_i.e.,_\(I_{p}\) in Equation 5).
Figure 2 shows that the DINet model effectively captures a variety of semantics that are closely related to visual saliency. These include social cues such as faces, noses, and beards, actions like having a meeting, snowboarding, and jumping, clothing such as goggles, and salient object categories like animals, vehicles, and text. These findings resonate with previous research [18; 19; 43] that saliency models learn to recognize salient cues. More importantly, they showcase the versatility of our approach in automatically identifying key contributing factors of saliency models without any preconceived assumptions [43; 19], enabling attention analyses across a diverse array of scenarios (see Section 4.5).
Figure 2: Important semantics learned by the deep saliency model and their weights. We visualize the top-60 semantics with significantly positive or negative weights.
Another unique advantage of our approach is to simultaneously derive semantics that both positively and negatively contribute to saliency, while previous studies commonly focus on the positive side. Our results reveal a clear separation between the semantics that contribute positively and negatively to saliency. Specifically, the model considers social and action cues to have a positive contribution to saliency, while semantics related to backgrounds such as sky (_e.g.,_ hazy and overcast), ground (_e.g.,_ pavement and carpet), and scene (_e.g.,_ bathroom and park) have negative weights. The observation demonstrates that deep saliency models' success is not only due to the accurate detection of salient objects [18; 19; 43], but also strongly related to the ability to distinguish salient and non-salient semantics.
### How Do Data and Model Designs Affect Saliency Prediction?
Training data and model designs play crucial roles in determining how well deep saliency models perform [18]. To understand these roles better, we conduct a comparative analysis of the effects of different training datasets and model architectures on saliency prediction.
Figure 3 compares the results of three DINet models trained on different datasets: SALICON [12], OSIE [17], and MIT [16]. It reveals that the shifts in semantic weights are tightly coupled with the characteristics of the training data. For instance, the model trained on OSIE pays more attention to social cues and non-salient semantics, because the OSIE dataset collects attention on semantic-rich stimuli with diverse social cues. In contrast, the model trained on MIT assigns a less positive weight to social cues and a more negative weight to the vehicle category, which is likely due to the dataset's weaker emphasis on social semantics and the co-occurrence of vehicles and more salient objects. Therefore, differences in the semantic weights across models trained on different datasets reflect the variations in the semantics presented in the respective datasets.
Figure 4 compares the semantic weights of three different saliency models trained on the same SALICON [12] dataset: _i.e.,_ DINet [11], SALICON [9], and TranSalNet [10]. While all models place the highest weights on actions and the lowest weights on ground, scene, and sky, the differences in model designs are reflected in their semantic weights. For instance, compared to the other models, TranSalNet considers clothing and text to be strongly salient but sky to be less non-salient. The results
Figure 4: Semantic weights for different saliency models.
Figure 3: Semantic weights for DINet trained on different datasets.
shed light on the relationships between the designs and behaviors of saliency models, and show the usefulness of our framework in helping researchers tailor their models for specific applications.
Despite the differences in model behaviors across datasets and models, all models excel at automatically identifying salient semantics, _e.g.,_ action and social cues, and differentiating foreground from background semantics. The observation is consistent with our findings in Section 4.2, and indicates that the ability to correlate semantics with saliency and quantify their positive/negative contributions is a systematic advantage of deep saliency models.
### How Does Fine-tuning Saliency Models Affect Semantic Weights?
The process of fine-tuning is crucial for training deep saliency models, as highlighted in previous studies [18, 19]. To better understand the evolution of feature weights during fine-tuning, we conduct experiments on models trained in three scenarios of fine-tuning. These scenarios include models with fixed ImageNet [52] features (W/o fine-tuning), models that are fine-tuned for a single epoch (Single-epoch fine-tuning), and fine-tuned models with the best validation performance (Complete fine-tuning). We experiment with three different models, namely SALICON [9], DINet [11], and TranSalNet [10], and report the average results obtained from our experiments.
As shown on the left side of Figure 5, models with fixed ImageNet features already have the ability to identify salient semantics, such as action and social cues, highlighted with strong positive weights. This is consistent with previous findings [19], which validates the effectiveness of our approach.
When comparing the results of models with single-epoch fine-tuning (shown in the middle of Figure 5) to those without fine-tuning (shown on the left), we notice a significant shift in the negative weights. Specifically, while models with fixed ImageNet features identify scene-related semantics to contribute most negatively to saliency, after being fine-tuned for a single epoch, the models now focus on sky-related semantics and have a weaker emphasis on scene-related semantics. This can be attributed to the fact that ImageNet features are learned from iconic images, which are inherently insensitive to the sky background. However, since the sky is a strong indicator of low saliency values, it is necessary to learn this semantic to achieve accurate saliency prediction (note that the largest performance gain also occurs during the first epoch).
Finally, when examining the fully-tuned models (shown on the right side of Figure 5), we find that semantic weights continue to evolve during the later epochs of fine-tuning. Unlike the first epoch, which imposes larger changes on a few negative semantics, the subsequent fine-tuning mostly plays a role in refining the weights of a broader range of semantics, enabling the models to become more sensitive to salient cues (_e.g.,_ action and text) and to continuously adjust the weights of negative semantics (_e.g.,_ ground and other).
These observations offer a comprehensive view of how deep saliency models progressively adapt ImageNet features through fine-tuning. They show that fine-tuning first concentrates on semantics with negative weights to saliency, which are not well captured in the pretrained features, and then gradually adjusts the relative weights of diverse semantics. Understanding the evolution of model behaviors during training can provide insights into optimizing the learning recipes for saliency models and enabling them to progressively encode the knowledge of diverse semantics.
Figure 5: Evolution of semantic weights throughout the fine-tuning process.
### Can Deep Saliency Models Capture Characteristics of Human Attention?
Human attention is influenced by several factors, such as the visual preferences of viewers, characteristics of visual stimuli, and temporal dynamics. We investigate the ability of DINet to capture the impact of these factors by training it on different attention data for each factor. We visualize results for the most discriminative semantics, and include the complete analysis in supplementary materials.
Firstly, we study the impact of visual preferences on attention with two subject groups, _i.e.,_ people with autism and those without, using a dataset with attention data of subjects from the two groups [53]. We train a DINet model on each set to compare their corresponding semantic weights. As shown in Figure 6(a), both models assign significant weights to social cues, but the model for the autism group has a considerably weaker emphasis on action cues. This can be linked to the deficits in joint attention for people with autism [54; 55]. Additionally, the model for the autism group also assigns a strong positive weight to the "other" category. The specific objects highlighted in the category are related to gadgets, _e.g.,_ digital, illuminated, and metal (see supplementary materials for details), which is consistent with the findings about the special interests of people with autism [53].
Next, we explore the impact of stimuli with diverse characteristics, using images exhibiting positive, negative, and neutral sentiments from the EMOd dataset [23]. Figure 6(b) shows that models trained on stimuli eliciting strong emotions (positive and negative) exhibit a higher emphasis on social and action cues (_e.g.,_ face and nose, see our supplementary materials for details) than the model for neutral sentiment. Their behaviors align with previous findings that human attention generally prioritizes emotion-eliciting content over non-emotion-eliciting content (_e.g.,_ clothing) [56; 57]. We also identify that models for positive/negative sentiments place more focus on human-related objects (_e.g.,_ social cues and clothing) than on objects less related to humans (_e.g.,_ animals), which is a driving factor for characterizing the emotion prioritization effect on attention deployment.
Finally, we investigate the effects of temporal dynamics by conducting experiments on the CodeCharts1K [25] dataset, which provides attention annotations collected at 0.5, 3, and 5 seconds. As shown in Figure 6(c), the gradually increasing weights for semantics such as vehicle, animal, and color indicate a shift of focus from dominantly salient cues (_e.g.,_ action and social cues) to more diverse semantics. This is because viewers usually engage their attention with the most salient cues at the beginning of visual exposure (_i.e.,_ within 3 seconds) before broadening their focus.
Overall, our findings suggest that deep saliency models can encode the fine-grained characteristics of diverse attention. They also validate the usefulness of our approach in revealing the discriminative patterns of attention and shed light on how visual attention is influenced by various factors.
### What Is Missing to Close the Gap Between Saliency Models and Human Attention?
Previous studies [18; 43] have identified a collection of common mistakes for saliency models by probing the predicted saliency maps. In this paper, we aim to complement these studies by analyzing the failure patterns within the intermediate inference process using the proposed factorization framework.
Figure 6: Semantic weights for characterizing the attention on three different settings. From left to right are results for the attention of people with and without autism, attention to stimuli eliciting different emotions, and the attention of different time periods. Figures share the same y-axis. Note that the results in the three figures are derived from different datasets selected based on the context of analyses.
To conduct our analysis, we first select the common success and failure examples where three tested models (_i.e.,_ SALICON [9], DINet [11], and TranSalNet [10]) consistently have high/low NSS scores [49]. Then we perform a qualitative analysis by visualizing the spatial probabilistic distribution (\(\alpha\) in Equation 3) of the bases for semantics with positive (red) and negative (blue) weights (see our supplementary materials for details), which is used to derive the final saliency maps.
In the successful examples (see the left panel of Figure 7), we find that accurate saliency prediction correlates with the differentiation of diverse semantics. Specifically, stimuli for these examples typically have salient and non-salient regions belonging to different semantics. Therefore, with the ability to distinguish positive and negative semantics (_i.e.,_ with discriminative distributions of the corresponding bases), models can readily determine their saliency distribution.
However, in the failure examples (see the right panel of Figure 7), models commonly struggle to determine the saliency within objects (_e.g.,_ bicycle in the \(1^{st}\) failure example) or among objects with similar semantics (_e.g.,_ elephants in the \(2^{nd}\) failure example). Investigation of the probabilistic distribution of bases shows that models typically have a uniform-like distribution of bases on object parts or among objects of the same category, and thus are inherently incapable of determining the importance of the bases needed to construct an accurate saliency map. We also note that existing models have difficulty with scenes without salient objects, as illustrated in the \(3^{rd}\) failure example with a relatively empty scene.
As shown in Figure 7 (second column of the right panel), the ground truth human attention for the failure patterns is scattered. We further look into the inter-subject variability of these cases and find that it is high, suggesting that human viewers may not agree on where to look and hence that the ground truth maps are less reliable. In this case, one assumption of saliency modeling (_i.e.,_ a certain commonality in human attention patterns) may not hold, and the validity of using the ground truth maps for training and evaluation (_i.e.,_ the standard leave-one-subject-out approach), as well as the expected behavior of the targeted models, remain interesting open questions.
Overall, while high-level semantics learned in existing deep saliency models are powerful, we hypothesize that leveraging more structured representations to encode the contextual relationships between semantics, and integrating mid- and low-level cues will be helpful for accommodating challenging scenarios in saliency prediction.
### Quantitative Evaluation of the Interpretable Model
The fundamental objective of our study is to develop a principled framework for understanding the underlying mechanism behind deep saliency models without altering their inherent behaviors. For this, we only introduce minimal architectural modifications, limited to the last two layers of the saliency models, thereby ensuring that their performance aligns seamlessly with the original models across all datasets. To complement our aforementioned analyses and further substantiate the efficacy
Figure 7: Successful (left) and failure (right) examples of deep saliency models. We visualize the predictions from DINet, as those for the other two models are similar. For the distribution of bases, from red to blue, the probabilities of positive bases decrease while those for negative bases increase.
of our methodology, we quantitatively evaluate the saliency prediction performance of our method (using a DINet backbone trained on the SALICON training split as an example) on three commonly used saliency datasets, including OSIE [17], MIT [16], and SALICON [12]. Comparative results reported in Table 1 demonstrate the competitive nature of our approach with respect to state-of-the-art methods, and validate its effectiveness in achieving a balance between interpretability and model performance.
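For reference, the two metrics reported in Table 1 can be computed as in the minimal sketch below; it assumes the predicted saliency map, a binary fixation map, and a ground-truth fixation density map are NumPy arrays of the same shape, and the array names and shapes are illustrative rather than taken from any released code.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency: mean of the standardized saliency
    values at the binary fixation locations."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    return s[fixation_map > 0].mean()

def cc(saliency_map, gt_density):
    """Correlation Coefficient: Pearson correlation between the predicted
    saliency map and the ground-truth fixation density map."""
    a = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    b = (gt_density - gt_density.mean()) / (gt_density.std() + 1e-12)
    return (a * b).mean()

# Illustrative usage with random maps (shapes are assumptions).
pred = np.random.rand(480, 640)
fix = (np.random.rand(480, 640) > 0.999).astype(float)  # binary fixation map
dens = np.random.rand(480, 640)                          # fixation density map
print(nss(pred, fix), cc(pred, dens))
```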
## 5 Conclusion, Limitations, and Broader Impacts
As deep saliency models excel in performance, it is important to understand the factors contributing to their successes. Our study introduces a novel analytic framework to explore how deep saliency models prioritize visual cues to predict saliency, which is crucial for interpreting model behavior and gaining insights into visual elements most influential in performance. We discover that the models' success can be attributed to their accurate feature detection and their ability to differentiate semantics with positive and negative weights. These semantic weights are influenced by various factors, such as the training data and model designs. Furthermore, fine-tuning the model is advantageous, particularly in allocating suitable weights to non-salient semantics for optimal performance. Our framework also serves as a valuable tool for characterizing human visual attention in diverse scenarios. Additionally, our study identifies common failure patterns in saliency models by examining inference processes, discusses challenges from both human and model attention, and suggests modeling with holistic incorporation of structures and lower-level information.
Despite advancing attention research from different perspectives, our work still has room for improvement. Specifically, the current study focuses on visual attention deployment in real-world scenarios and employs commonly used natural scenes as visual stimuli. We are aware that certain applications (_e.g.,_ graphical design and software development) may also involve artificial stimuli, such as advertisements, diagrams, and webpages, and extending the proposed framework to these broader domains would be a straightforward and interesting direction.
We anticipate several positive impacts stemming from this research. Firstly, by advancing the understanding of deep saliency models, valuable insights can be applied to optimize interfaces for human-computer interactions. This, in turn, will enhance the efficiency and reliability of the next generation of computer-aided systems, leading to improved user experiences and increased productivity. Secondly, the accurate capture of visual importance can have significant implications for individuals with visual impairments. Saliency models can effectively assist visually impaired individuals in navigating and interacting with both people and environments. This could empower them to engage more fully in various aspects of daily life, fostering independence and inclusivity. In sum, this comprehensive exploration of computational saliency modeling holds great potential for broader societal benefits.
## Acknowledgements
This work is supported by NSF Grants 2143197 and 2227450.
| Model | OSIE [17] NSS | OSIE [17] CC | MIT [16] NSS | MIT [16] CC | SALICON [12] NSS | SALICON [12] CC |
| --- | --- | --- | --- | --- | --- | --- |
| SALICON [9] | 2.75 | 0.63 | 2.56 | 0.70 | 1.89 | 0.86 |
| SAM [8] | 2.70 | 0.65 | 2.47 | 0.69 | 1.84 | 0.86 |
| DINet [11] | 2.88 | 0.63 | 2.54 | 2.70 | 1.92 | 0.87 |
| Ours | 2.91 | 0.64 | 2.53 | 0.70 | 1.89 | 0.86 |
Table 1: Comparative results of saliency prediction on three popular datasets. Models are evaluated with two common metrics, including Normalized Scanpath Saliency (NSS) [49] and Correlation Coefficient (CC) [50]. |
2302.11227 | Phenomenological relationship between eccentric and quasi-circular
orbital binary black hole waveform | Eccentricity, an important parameter of gravitational waves, has been paid
more and more attention because it can reflect the dynamics of compact object
mergers. Obtaining an accurate and fast gravitational waveform template is of
great significance for the estimation of gravitational wave parameters. This
paper aims to do an extended study of the phenomenological fitting model
proposed by Setyawati and Ohme for adding eccentricity to quasi-circular
orbital waveforms. It can be applied to higher eccentricity up to e = 0.4. But
the higher the eccentricity, the less the accuracy. For e in [0, 0.1], it gives
an overlap of more than 99.99%. For e in [0.1, 0.2], it gives an overlap of
more than 99.9%. For e in [0.2, 0.3], it gives an overlap of more than 99%. For
e in [0.3, 0.4], it gives an overlap of more than 90%. The reason for these
phenomena is that the larger the eccentricity, the larger the deviation of the
eccentricity estimator from the cosine function due to the large change in the
morphology of the eccentric waveform, and the worse the fitting effect of the
model. It can be applied to higher-order modes and gives the same overlap
behavior. After adding a shift parameter, it can be applied to spin-aligned or
spin-antialigned waveforms. After obtaining spin-precessing effect, it can be
applied to the spin-precessing case. In summary, non-spining, spin-aligned,
spin-antialigned or spin-precessing waveforms with eccentricity can be
constructed from quasi-circular non-spining waveforms by the phenomenological
model, which is not only helpful for us to quickly construct phenomenological
gravitational wave templates, but also reveals a phenomenological and universal
relationship between eccentric waveform and quasi-circular orbital waveform. | Hao Wang, Yuan-Chuan Zou, Yu Liu | 2023-02-22T09:18:41Z | http://arxiv.org/abs/2302.11227v1 | Phenomenological relationship between eccentric and quasi-circular orbital binary black hole waveform
###### Abstract
Eccentricity, an important parameter of gravitational waves, has received more and more attention because it can reflect the dynamics of compact object mergers. Obtaining an accurate and fast gravitational waveform template is of great significance for the estimation of gravitational wave parameters. This paper aims to do an extended study of the phenomenological fitting model proposed by Setyawati and Ohme for adding eccentricity to quasi-circular orbital waveforms. We extend the range of the waveform for mass ratio to \([1,7]\), eccentricity to \([0,0.4]\), and time range from a fixed time range \([-1500M,-29M]\) to any case. We also study higher-order modes, spin-aligned and spin-precessing waveforms. We discover that, after expanding some fitting parameters, the model can be applied to mass ratios \(q\in[1,7]\) or higher, and can be applied to almost the entire time range of numerical relativity, even to \(t=12000M\) prior to merger. It can be applied to higher eccentricity up to \(e=0.4\), but the higher the eccentricity, the lower the accuracy. For \(e\in[0,0.1]\), it gives an overlap of more than \(99.99\%\). For \(e\in[0.1,0.2]\), it gives an overlap of more than \(99.9\%\). For \(e\in[0.2,0.3]\), it gives an overlap of more than \(99\%\). For \(e\in[0.3,0.4]\), it gives an overlap of more than \(90\%\). The reason for these trends is that the larger the eccentricity, the larger the deviation of the eccentricity estimator from the _cosine_ function due to the large change in the morphology of the eccentric waveform, and the worse the fitting effect of the model. The model can be applied to higher-order modes and gives the same overlap behavior. After adding a shift parameter, it can be applied to spin-aligned or spin-antialigned waveforms. After obtaining the spin-precessing effect, it can be applied to the spin-precessing case. In summary, non-spinning, spin-aligned, spin-antialigned or spin-precessing waveforms with eccentricity can be constructed from quasi-circular non-spinning waveforms by the phenomenological model, which not only helps us quickly construct phenomenological gravitational wave templates, but also reveals a _phenomenological and universal_ relationship between eccentric and quasi-circular orbital waveforms.
## I Introduction
Since the first detection of the binary black hole merger event GW150914 in 2015, gravitational wave astronomy has ushered in a new era [2]. So far, 93 gravitational wave events have been detected by the ground-based gravitational wave detectors LIGO [3], Virgo [4] and KAGRA [5], including binary black holes (BBHs), 2 black hole-neutron star (BHNS) binaries, and 2 binary neutron stars (NSNS) [6].
Current extraction of gravitational wave signals uses circular orbital waveform templates, because the orbits of isolated binaries are generally expected to circularize under gravitational wave radiation, so that their eccentricity is negligible by the time they enter the gravitational wave detection band at about 10 Hz [7; 8; 9; 10]. However, there are many channels through which BBHs can acquire eccentricity before merger. In dense stellar environments such as globular clusters [11; 12; 13; 14; 15; 16; 17] and galactic nuclei [18; 19], BBHs can acquire eccentricity through binary-single [20] and binary-binary interactions [21], and through gravitational capture [18]. In a triple system, for example binaries in the neighborhood of a supermassive black hole, the eccentricity of the inner binary oscillates due to the Kozai-Lidov mechanism [22; 23; 24] and can remain measurable when the source enters the detection band. Among BBH mergers in globular clusters whose gravitational waves enter the LIGO sensitive band, about \(10\%\) still retain an eccentricity above \(0.1\) according to Refs. [15; 16]. GW190521 [25] is currently considered the only detected BBH merger with both high mass and high eccentricity, \(e=0.69^{+0.17}_{-0.22}\), as inferred from \(611\) numerical relativity simulations [26]. With the improvement of detector sensitivity, more and more eccentric BBH mergers will be detected by the next-generation ground-based gravitational-wave detectors Einstein Telescope (ET) [27] and Cosmic Explorer (CE) [28].
Using circular orbital waveforms for parameter estimation of eccentric signals may introduce errors or reduce the signal-to-noise ratio [29; 30]. There are some numerical relativity simulations of eccentric BBH mergers [31; 32; 33; 34]. Parameter estimation of gravitational waves requires millions of waveform templates. In general, a complete numerical relativity (NR) simulation yields the most accurate gravitational waveform, but each NR simulation takes weeks to months and is computationally expensive. To date, several studies have developed analytical gravitational waveforms based on the post-Newtonian (PN) approximation [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49] or the effective-one-body (EOB) formalism [50; 51; 52; 53; 54]. Islam _et al._[55] developed a waveform calibrated to complete NR. Some state-of-the-art surrogate models of the full inspiral-merger-ringdown
(IMR) eccentric gravitational waveforms have been developed based on hybrids of PN and NR [56; 57; 58; 59] or on EOBNR [60; 61; 62; 53; 54; 63; 64; 65; 66; 67; 68; 69; 52]. There are also some special cases such as eccentric extreme-mass-ratio inspirals (EMRIs) [64; 65; 66; 67; 68; 69] and gravitational wave bursts with high eccentricity [70]. Replacing the part of the waveform close to merger with the circular orbit waveform of NR is not very accurate. Moreover, complete NR BBH simulations with eccentricity are rare and not publicly available, which makes it necessary to develop a phenomenological model that can generate fast and accurate numerical relativity eccentric gravitational waveforms.
The main purpose of this paper is to carry out an extended study of the phenomenological model proposed by Setyawati and Ohme [1], which converts a circular orbital waveform into an eccentric orbital waveform. The model can quickly and simply produce a complete numerical relativity eccentric orbital waveform from the corresponding circular orbital waveform. In addition to the waveforms from the Simulating eXtreme Spacetimes (SXS) catalog, we add waveforms from the Rochester Institute of Technology (RIT) catalog, extending the range of mass ratio to \(q\in[1,7]\), the eccentricity range to \(e\in[0,1]\), and the time range of the waveform from the fixed interval \(t\in[-1500M,-29M]\) to any time range, even to \(t=12000M\) before merger. We extend the model not only to the dominant mode but also to higher-order modes, including the 3-3, 2-1, 4-4, 5-5, 3-2 and 4-3 modes. We also apply the model to the spin-aligned case, for which the model needs to be adjusted in some ways. Finally, applying this model to the most complex spin-precessing situation leads us to a more complete model that converts a waveform without spin or eccentricity into one with eccentricity and spin precession. This implies a _phenomenological and universal_ relationship between the eccentric waveform and the circular orbit waveform, which can help us understand the relationship between eccentricity, spin and precession in BBH mergers.
This article is organized as follows. In Sec. II we introduce the numerical relativity waveform data we use and some basic notions about gravitational waves (Sec. II.1). We give a detailed description of the eccentricity estimators in Sec. II.2. Next, we introduce a method to measure eccentricity from an eccentric waveform in Sec. II.3. Finally, we describe the fitting process for the dominant mode in Sec. II.4, and the extended study of higher-order modes, spin-aligned waveforms and spin-precessing waveforms in Secs. II.5, II.6 and II.7, respectively. In Sec. III, we give the fitting results and overlaps for every case and analyze them. In Sec. IV, we give conclusions and outlook. Throughout this article we use geometric units with \(G=c=1\). The component masses of the BBH are \(m_{1}\) and \(m_{2}\). The total mass is \(M\), which we set to unity to simplify expressions. The mass ratio \(q\) is defined as \(q=m_{1}/m_{2}\) with \(m_{1}>m_{2}\). We denote the black holes' dimensionless spin vectors by \(\vec{\chi}_{i}=\vec{S}_{i}/m_{i}^{2}\) for \(i=1,2\).
## II Method
By combining new waveforms, the range of parameters we study, such as mass ratio, eccentricity, time and spin, has been expanded, which allows us to investigate the model and its applicable range comprehensively. Here we want to present enough detail to convey the essence of the model, so our presentation differs in some ways from that of Ref. [1].
### Numerical relativity data
There are many numerical relativity collaborations that have performed large numbers of numerical simulations of BBH mergers, but few publicly available simulations have eccentricity or high mass ratios. The data we use come from two collaborations. The first is the Simulating eXtreme Spacetimes (SXS) Collaboration, which evolves the initial data using a multi-domain spectral method [71; 72; 73; 74; 75] with a first-order version of the generalized harmonic formulation [76; 77; 78; 75] of Einstein's equations with constraint damping, implemented in the Spectral Einstein Code (SpEC) [79]. The SXS catalog has published 23 sets of non-spinning eccentric waveforms with mass ratios \(q\in[1,3]\) and eccentricity range \(e\in[0,0.2]\). The other set of waveforms we use comes from the Rochester Institute of Technology (RIT) [80]. The simulations in the RIT catalog were evolved using the LazEv code [81] implementation of the moving puncture approach [82] and the BSSNOK formalism of evolution systems [83; 84; 81]. The LazEv code uses the Cactus [85] /Carpet [86] /EinsteinToolkit [87] infrastructure. The 4th release of the RIT catalog published 824 eccentric BBH NR simulations, including spinning, spin-aligned and spin-precessing cases, with eccentricities from 0 to 1 [34].
Waveforms are obtained by computing the Newman-Penrose scalar \(\Psi_{4}\) at a finite radius and extrapolating to null infinity. \(\Psi_{4}\) can be expanded by the spin-weighted \(s=-2\) spherical harmonic function \({}_{-2}Y_{\ell,m}(\theta,\phi)\) as
\[r\Psi_{4}=\sum_{\ell,m}r\Psi_{4(\ell m)}\,{}_{-2}Y_{\ell,m}(\theta,\phi), \tag{1}\]
where \(r\) is the extraction radius. Gravitational wave strain \(h\) and its relationship with \(\Psi_{4}\) can be expressed as
\[rh=r\left(h_{+}-ih_{\times}\right)=\sum_{\ell,m}rh_{\ell m}\,{}_{-2}Y_{\ell,m}(\theta,\phi) \tag{2}\]
and, as \(r\) goes to infinity,
\[\Psi_{4}(t)=\frac{\partial^{2}}{\partial t^{2}}h(t), \tag{3}\]
where \(h_{+}\) and \(h_{\times}\) represent the two polarizations of gravitational waves, respectively. We can decompose \(\Psi_{4}\) and \(h\) into a combination of amplitude and phase:
\[\Psi_{4(lm)}=A_{lm}(t)\exp\left[-i\varphi_{lm}(t)\right] \tag{4}\]
\[h_{lm}=\mathcal{A}_{lm}(t)\exp\left[-i\Phi_{lm}(t)\right] \tag{5}\]
and amplitude and phase of \(h_{lm}\) can be obtained by
\[\mathcal{A}_{lm}=|h_{lm}| \tag{6}\]
\[\omega_{lm}=\frac{d\Phi_{lm}}{dt}. \tag{7}\]
Define effective spin in the z direction:
\[\chi_{\text{eff}}=\frac{m_{1}\chi_{1,z}+m_{2}\chi_{2,z}}{m_{1}+m_{2}}, \tag{8}\]
where \(\chi_{1,z}\) and \(\chi_{2,z}\) are the dimensionless spins of the two black holes in the \(z\) direction. We take the gravitational wave strain \(rh\) rather than \(r\Psi_{4}\) from the SXS and RIT catalogs, but the model is in fact applicable to both according to the analysis in Sec. II.2. In processing the waveforms, we cut off the first \(300\) of the SXS waveforms and the first \(100\) of the RIT waveforms due to junk radiation. Other waveform processing is similar to Ref. [1]. The waveforms we use and their parameters in the RIT and SXS catalogs are listed in TABLE I and TABLE II of Appendix A. Because the SXS catalog does not give the initial eccentricity, we use the method of Sec. II.3 to measure the eccentricity of each waveform at the first \(300\). For the RIT waveforms, we directly use the reference eccentricity given in the catalog. The distributions of the waveform parameters, the mass ratio \(q\), effective spin \(\chi_{eff}\) and initial eccentricity \(e_{0}\), are shown in FIG. 1. We only show the eccentricity range \(e\in[0,0.45]\) here because the model depends on the length of the waveform: a sufficiently long waveform is required, and many shorter waveforms are excluded.
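For clarity, a minimal sketch of how Eqs. (5)-(7) can be evaluated numerically for a single mode is given below; it assumes the strain \(h_{22}(t)\) is available as a complex array on a uniform time grid, and the array names and the toy signal are illustrative rather than taken from any catalog tool.

```python
import numpy as np

def amplitude_phase_frequency(t, h22):
    """Decompose a complex mode h_22(t) into amplitude, phase and frequency
    following Eqs. (5)-(7); here h22 = A * exp(-i * Phi)."""
    A = np.abs(h22)                      # Eq. (6): amplitude
    Phi = -np.unwrap(np.angle(h22))      # phase with the sign convention of Eq. (5)
    omega = np.gradient(Phi, t)          # Eq. (7): omega = dPhi/dt
    return A, Phi, omega

# Illustrative usage with a toy chirp-like signal (not NR data).
t = np.linspace(-2000.0, -300.0, 4000)
h22 = (1e-3 + 1e-7 * (t - t[0])) * np.exp(-1j * 0.05 * (t - t[0]))
A, Phi, omega = amplitude_phase_frequency(t, h22)
```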
### Eccentricity Estimator
It is necessary to introduce the eccentricity estimator in detail. According to Ref. [88], an eccentricity estimator is derived from the Newtonian expressions for the separation, orbital phase or orbital frequency and can be used to estimate the eccentricity. In Ref. [88], an eccentricity estimator is defined as an oscillatory function
\[e_{X}(t)=e\cos\left(\Omega t+\phi\right), \tag{9}\]
where \(X\) represents the quantity used to measure the eccentricity, and \(e\), \(\Omega\) and \(\phi\) are the eccentricity, frequency and phase, respectively. \(X\) can be the separation, frequency, amplitude, phase, or time derivative of the frequency, taken from the orbital dynamics or from the waveform. Based on the separation
\[d(t)=d_{0}\left[1+e\cos\left(\Omega t+\phi_{0}\right)\right]+O\left(e^{2} \right), \tag{10}\]
where \(d_{0}\) and \(\phi_{0}\) are the average separation and the initial phase of the Newtonian orbit. We obtain the separation eccentricity estimator:
\[e_{d}(t)=(1)\left(\frac{d(t)-\bar{d}(t)}{\bar{d}(t)}\right), \tag{11}\]
where \(\bar{d}(t)\) represents the secular average of \(d(t)\), and \(1\) is its coefficient. Based on the orbital phase:
\[\Phi(t)=\Phi_{0}+\Omega_{0}t+2e\sin\left(\Omega t\right)+O\left(e^{2}\right), \tag{12}\]
where \(\Phi_{0}\) and \(\Omega_{0}\) are the phase offset and the average frequency. We obtain the orbital phase eccentricity estimator:
\[e_{\Phi}(t)=(1)\left(\Phi(t)-\bar{\Phi}(t)+\Phi_{1}\right), \tag{13}\]
where \(\bar{\Phi}(t)\) is the secular average of \(\Phi(t)\) and \(\Phi_{1}\) represents a phase offset. Taking the first time derivative of the phase, we obtain the orbital frequency eccentricity estimator:
\[e_{\Omega}(t)=\left(\frac{1}{2}\right)\left(\frac{\Omega(t)-\bar{\Omega}(t)}{ \bar{\Omega}(t)}\right), \tag{14}\]
where \(\bar{\Omega}(t)\) is the secular average of \(\Omega(t)\). We can obtain the eccentricity estimator not only from orbital dynamical quantities but also from the waveform. According to Ref. [89], we can also define the associated eccentricity estimators based on the frequency, amplitude and phase of the Weyl scalar \(\Psi_{4}\). In Ref. [89], according to Eq. (4), the 2-2 mode of the Weyl scalar \(\Psi_{4}\) and its frequency can be expressed as
\[\Psi_{4(22)}=A_{22}(t)\exp\left[i\varphi_{22}(t)\right] \tag{15}\]
\[\varpi_{22}=\mathrm{d}\varphi_{22}/\mathrm{d}t, \tag{16}\]
where
\[A_{22}(t) = K_{1}\left(1+\frac{39}{8}e\cos\Omega t\right)+\mathcal{O}\left(e^{ 2}\right)\] \[\varphi_{22}(t) = -2\Omega t-\frac{21}{4}e\sin\Omega t+\mathcal{O}\left(e^{2}\right) \tag{17}\] \[\varpi_{22}(t) = -2\Omega\left(1+\frac{21}{8}e\cos\Omega t\right)+\mathcal{O}\left( e^{2}\right),\]
Figure 1: Distributions of the waveform parameters, mass ratio \(q\), effective spin \(\chi_{eff}\) and initial eccentricity \(e_{0}\), for the data from the SXS and RIT catalogs.
which are first-order approximations in the eccentricity, where \(K_{1}\) is a constant. The associated eccentricity estimators are:
\[e_{A_{22}}(t) =\left(\frac{8}{39}\right)\left(\frac{A_{22}(t)-\bar{A}_{22}(t)}{A_{22}(t)}\right) \tag{18}\] \[e_{\varphi_{22}}(t) =\left(\frac{4}{21}\right)\left[\varphi_{22}(t)-\bar{\varphi}_{22}(t)+\varphi_{220}\right]\] \[e_{\varpi_{22}}(t) =\left(\frac{8}{21}\right)\left(\frac{\varpi_{22}(t)-\bar{\varpi}_{22}(t)}{\varpi_{22}(t)}\right),\]
where \(\varphi_{220}\) represents a phase offset of \(\varphi_{22}\). According to Eq. (5), the 2-2 mode of the gravitational wave strain \(h\) and its frequency can be expressed as
\[h_{22}=\mathcal{A}_{22}(t)e^{i\Phi_{22}(t)} \tag{19}\]
\[\omega_{22}=\mathrm{d}\Phi_{22}/\mathrm{d}t, \tag{20}\]
where
\[\mathcal{A}_{22}(t) =K_{2}\left(1+\frac{3}{2}e\cos\Omega t\right)+\mathcal{O}\left(e^ {2}\right) \tag{21}\] \[\Phi_{22}(t) =-2\Omega t-3e\sin\Omega t+\mathcal{O}\left(e^{2}\right)\] \[\omega_{22}(t) =-2\Omega\left(1+\frac{3}{2}e\cos\Omega t\right)+\mathcal{O} \left(e^{2}\right),\]
which are first-order approximations in the eccentricity, where \(K_{2}\) is a constant. The associated eccentricity estimators are:
\[e_{\mathcal{A}_{22}}(t) =\left(\frac{2}{3}\right)\left(\frac{\mathcal{A}_{22}(t)-\bar{\mathcal{A}}_{22}(t)}{\mathcal{A}_{22}(t)}\right) \tag{22}\] \[e_{\Phi_{22}}(t) =\left(\frac{1}{3}\right)\left[\Phi_{22}(t)-\bar{\Phi}_{22}(t)+\Phi_{220}\right]\] \[e_{\omega_{22}}(t) =\left(\frac{2}{3}\right)\left(\frac{\omega_{22}(t)-\bar{\omega}_{22}(t)}{\omega_{22}(t)}\right).\]
Although the forms differ slightly (Ref. [88] places the average in the denominator while Ref. [89] does not), both express the difference between a quantity of the eccentric waveform and its average value, so the meaning is the same. In fact, whether based on the orbital dynamics or on the gravitational waveform, and whether on the 2-2 mode or on higher-order modes, we can obtain an associated eccentricity estimator, and all of them can be summarized in the form:
\[e_{X_{1}}(t)=(k_{1})\left(\frac{X_{1}(t)-\bar{X}_{1}(t)}{\bar{X}_{1}(t)}\right) \tag{23}\]
\[e_{X_{2}}(t)=(k_{2})\left[X_{2}(t)-\bar{X}_{2}(t)+X_{20}\right], \tag{24}\]
where \(X_{1}\) represents a quantity related to the orbital distance, frequency or amplitude, \(X_{2}\) represents a quantity related to the phase, \(X_{20}\) represents a phase offset, and \(k_{1}\), \(k_{2}\) are constants. We will then find that \(\bar{X}\) is the quantity \(X_{c}\) coming from the corresponding circular orbit waveform. We emphasize that there is a difference between the waveform and the orbital dynamics, reflected in the different constant coefficients of the derived eccentricity estimators. However, the difference in the constant coefficient \(k_{1}\) only affects the magnitude of Eq. (23), not the overall behavior of the eccentricity estimator, so the phenomenological fitting model introduced later is still applicable to all of them.
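As a minimal illustration of Eqs. (23) and (24), assuming the eccentric quantity and its circular-orbit counterpart (used as the secular average \(\bar{X}\)) have already been interpolated onto a common time grid, the estimators can be evaluated as follows; the function names and the default choice \(k_{1}=k_{2}=\frac{1}{2}\) are illustrative.

```python
import numpy as np

def estimator_ratio(X_e, X_c, k1=0.5):
    """Eq. (23): e_X(t) = k1 * (X_e - Xbar) / Xbar, with the secular average
    Xbar taken from the corresponding circular-orbit waveform X_c."""
    return k1 * (np.asarray(X_e) - np.asarray(X_c)) / np.asarray(X_c)

def estimator_phase(X_e, X_c, X_offset=0.0, k2=0.5):
    """Eq. (24): e_X(t) = k2 * (X_e - Xbar + X_offset) for phase-like quantities."""
    return k2 * (np.asarray(X_e) - np.asarray(X_c) + X_offset)
```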
### Measuring eccentricity of waveform
There are many definitions of eccentricity and many ways to measure it [88; 89; 90], each with its own scope of application. In general, the eccentricity is estimated either from the waveform or from the orbital dynamics. The eccentricity estimators of Eq. (11), Eq. (13) and Eq. (14) are based on orbital dynamical quantities, while the eccentricity estimators of Eq. (19) and Eq. (23) are based on quantities of the waveform. In Ref. [34], the RIT catalog estimates the eccentricity from the dynamical coordinate distance \(d\):
\[e_{d}=d^{2}\ddot{d}/M. \tag{25}\]
We can obtain the eccentricity at each moment by interpolating the amplitude of the eccentricity estimator. However, the larger the eccentricity, the more the eccentricity estimator deviates from the behavior of a _cosine_ or _sine_ function, so this approach is not suitable for measuring high eccentricity. To study high eccentricity, we introduce a method capable of measuring it, following Ref. [91]:
\[e_{\Omega}(t)=\frac{\Omega_{p}^{1/2}-\Omega_{a}^{1/2}}{\Omega_{p}^{1/2}+ \Omega_{a}^{1/2}}, \tag{26}\]
where \(\Omega\) is the orbital frequency, and \(\Omega_{a}\) and \(\Omega_{p}\) are the orbital frequencies at apastron and periastron, respectively. Although it is written in terms of the orbital frequency \(\Omega\), the same expression can be applied to the frequency \(\omega\) of the waveform. Our approach is similar to Ref. [58]. First, we use the Python function _find_peaks_ to find the apastron and periastron passages of the waveform. We then measure the eccentricity of the waveform at the apastron and periastron. Next, we use the Python function _cubic_spline_ interpolation to obtain a continuous evolution of the eccentricity. FIG. 2 shows the evolution of the frequency, apastron, periastron and eccentricity of the waveform SXS:BBH:1360. According to the error propagation formula:
\[\delta e_{\omega}=\frac{\delta\omega}{\left(\omega_{a}^{1/2}+\omega_{p}^{1/2} \right)^{2}}\left[\frac{\omega_{a}^{1/2}}{\omega_{p}^{1/2}}+\frac{\omega_{p}^{1/ 2}}{\omega_{a}^{1/2}}\right]. \tag{27}\]
The conservative estimate of the frequency error in Ref. [58] is about \(\delta\omega_{a}=\delta\omega_{p}=\delta\omega=0.0001\), caused by the different resolutions of the numerical simulations. The statistical error it introduces into the eccentricity measurement is \(\delta e_{\omega}\approx 0.001\), which is not the main error. The main error is caused by the interpolation, and it is not easy to quantify. When the eccentricity is about 0.4, the interpolation error can be as high as 0.1; the fewer points available for interpolation, the larger the error. For the research in this paper, if a waveform has more than 6 cycles and the eccentricity is lower than 0.3, the measurement error of the eccentricity is not very large, but as the eccentricity increases the measurement error grows. In FIG. 3, we show the frequencies of waveforms that have 5 cycles, 4 cycles, 3 cycles, 2 cycles, 1 cycle, and 0 cycles. When the eccentricity is small or the separation of the BBH is large, we get more cycles, as for SXS:BBH:1360 in FIG. 2; otherwise, we only get a few cycles. If the eccentricity of the waveform is very large, close to 0.55, the BBH merges directly without any cycle, as in the _0 cycle_ case of FIG. 3. We measure the eccentricity by interpolating with _cubic_splines_, which requires at least four points. At the same time, fitting the eccentricity estimator in Sec. II.4 also requires enough waveform cycles to carry sufficient information about the mass ratio and initial eccentricity, so that a sufficiently accurate waveform can be obtained; this requires the waveforms we use to span at least \([-1500,-300]\) to be reliable. Therefore, for cases such as _0 cycle_, _1 cycle_, _2 cycles_ and _3 cycles_ in FIG. 3, we cannot obtain the eccentricity of the waveforms, and for cases such as _4 cycles_ and _5 cycles_, the error of the obtained eccentricity is large. For these reasons, the waveforms we can use are limited, as is the range of their initial eccentricity, so we only list a limited set of waveforms in Sec. II.1.
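A minimal sketch of this measurement procedure is given below, assuming the waveform frequency \(\omega(t)\) of the 2-2 mode is available on a time grid; it uses _find_peaks_ and _CubicSpline_ from SciPy as described above, and omits the smoothing and error handling of the actual analysis.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def eccentricity_evolution(t, omega):
    """Estimate e(t) from Eq. (26) using the periastron (local maxima) and
    apastron (local minima) of the waveform frequency omega(t)."""
    ip, _ = find_peaks(omega)    # periastron passages: maxima of omega
    ia, _ = find_peaks(-omega)   # apastron passages: minima of omega
    # Cubic splines through the extrema give continuous omega_p(t), omega_a(t);
    # as noted in the text, at least four extrema of each kind are needed.
    om_p = CubicSpline(t[ip], omega[ip])
    om_a = CubicSpline(t[ia], omega[ia])
    t0, t1 = max(t[ip][0], t[ia][0]), min(t[ip][-1], t[ia][-1])
    tt = t[(t >= t0) & (t <= t1)]
    wp, wa = om_p(tt), om_a(tt)
    e = (np.sqrt(wp) - np.sqrt(wa)) / (np.sqrt(wp) + np.sqrt(wa))  # Eq. (26)
    return tt, e
```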
### Fitting process
Due to the circularization caused by gravitational radiation, an eccentric waveform has the same properties as the circular orbit waveform near merger [9], so we only study the waveform before merger. We use the same notation as Ref. [1] for the eccentricity estimator, equivalent to taking \(k_{1}=\frac{1}{2}\) in Eq. (23):
\[e_{X}(t)=\frac{X_{\mathrm{e}}(t)-X_{\mathrm{c}}(t)}{2X_{\mathrm{c}}(t)}, \tag{28}\]
where \(X\) represents the amplitude \(\mathcal{A}\) or the frequency \(\omega\), and the subscripts \(e\) and \(c\) denote the eccentric and the circular orbital waveform, respectively. Due to the monotonic correspondence between frequency, amplitude and time for a circular orbital waveform, the following assumption is implicit in the fitting model of Ref. [1]:
\[\begin{split} e_{X}\left(t\right)&=e_{X}\left(t(X_{ c})\right)=\frac{X_{\mathrm{e}}\left(t(X_{c})\right)-X_{\mathrm{c}}}{2X_{ \mathrm{c}}}\\ &=\frac{X_{\mathrm{e}}\left(X_{c}\right)-X_{\mathrm{c}}}{2X_{ \mathrm{c}}}=Ae^{BX_{c}^{\kappa}}\sin\left(fX_{c}^{\kappa}+\varphi\right), \end{split} \tag{29}\]
where \(A\), \(B\), \(f\), \(\varphi\) and \(\kappa\) are fitting parameters. We write the functional relationship between the quantities explicitly, because it is not obvious that there is a functional relationship between the eccentric quantities \(X_{e}\) and the circular quantities \(X_{c}\). Instead of taking \(\kappa\) as a fixed value, \(-59/24\) for the amplitude or \(-83/24\) for the frequency as in Ref. [1], we regard \(\kappa\) as a free parameter, which allows us to generalize the fitting to arbitrary time ranges, arbitrary mass ratios and higher-order modes. It is somewhat coincidental that Ref. [1] obtains good results with \(\kappa\) fixed, because they only use the 2-2 mode with time range \(t\in[-1500,-29]\) and mass ratios \(q\in[1,3]\). According to the analysis in Sec. II.2, the constant \(k_{1}=\frac{1}{2}\) in the formula can be replaced by any other constant without affecting the fit of Eq. (29) to the waveform; we keep it for convenience. Unlike other fitting models, which fit the waveform locally, this model fits the waveform globally and reflects its global nature. The fitting uses the Python function _op.curve_fit_, which uses non-linear least squares to obtain the best fit.
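A minimal sketch of the global fit of Eq. (29) and the inversion of Eq. (30) is given below, assuming the estimator \(e_{X}(t)\) and the circular-orbit quantity \(X_{c}(t)\) are already available on a common grid; the initial guess _p0_ is illustrative and would need to be tuned in practice.

```python
import numpy as np
from scipy import optimize as op

def model(Xc, A, B, f, phi, kappa):
    """Eq. (29): e_X = A * exp(B * Xc**kappa) * sin(f * Xc**kappa + phi)."""
    return A * np.exp(B * Xc**kappa) * np.sin(f * Xc**kappa + phi)

def fit_and_reconstruct(Xc, eX, p0):
    """Fit the five parameters by non-linear least squares and rebuild the
    eccentric quantity through Eq. (30): X_e = 2 * Xc * e_X(Xc) + Xc."""
    popt, pcov = op.curve_fit(model, Xc, eX, p0=p0, maxfev=20000)
    Xe = 2.0 * Xc * model(Xc, *popt) + Xc
    return popt, Xe
```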
Figure 3: Frequency of waveforms which have 5 cycles, 4 cycles, 3 cycles, 2 cycles, 1 cycle, 0 cycle. For 0 cycle, 1 cycle, 2 cycles and 3 cycles, we cannot obtain the eccentricity of the waveforms, and for 4 cycles and 5 cycles, the error of the obtained eccentricity is large.
Figure 2: Time evolution of the waveform frequency \(\omega\), apastron frequency \(\omega_{a}\), periastron frequency \(\omega_{p}\), and eccentricity \(e\) for the numerical simulation SXS:BBH:1360.
We can fit quantities such as the amplitude \(\mathcal{A}\) and frequency \(\omega\) of the waveform well with Eq. (29), and then look for the relationship between the initial eccentricity \(e\), the mass ratio \(q\) and the fitting parameters \(A\), \(B\), \(f\), \(\varphi\) and \(\kappa\). When there are few waveforms, it is difficult to discover the relationship between them. After adding the waveforms from the RIT catalog, there are more data and the relationships between the parameters become more obvious. Once the relationship between the fitting parameters and the waveform parameters is obtained, we can cover the entire parameter range by interpolation or polynomial fitting. By inverting Eq. (29), we obtain the amplitude or frequency of the desired eccentric waveform as follows:
\[\begin{split} X_{e}(t)&=X_{e}(t(X_{c}))=X_{e}(X_{c })\\ &=2X_{c}e_{X}\left(t(X_{c})\right)+X_{c}\\ &=2X_{c}Ae^{BX_{e}^{\kappa}}\sin\left(fX_{c}^{\kappa}+\varphi \right)+X_{c}.\end{split} \tag{30}\]
In order to improve the fitting effect, we can also set \(\kappa\) to different \(\kappa_{1}\) and \(\kappa_{2}\):
\[e_{X}\left(t\right)=Ae^{BX_{e}^{\kappa_{1}}}\sin\left(fX_{c}^{\kappa_{2}}+ \varphi\right). \tag{31}\]
But the results of the fitting show that it does not work, because
(i) The larger parameter space makes it difficult to find relationships between fitting parameters and waveform parameters. The error is larger when we obtain eccentric waveforms through the Eq.(30).
(ii) The existence of the free parameter \(\kappa_{1}\) makes \(e^{BX_{c}^{\kappa_{1}}}\approx 1\) unstable, thus destroying the magnitude relationship \(A\approx Ae^{BX_{c}^{\kappa}}\) and thereby the proportional relationship between \(A\) and the eccentricity \(e\), introducing a larger error. It is very important that the eccentricity \(e\) be proportional to \(A\), because it makes the fitting more accurate, so we must maintain this relationship.
Unlike Ref. [1], we do not aim to construct a complete inspiral-merger-ringdown waveform, but instead study the waveform up to 300 before merger. Near merger the eccentricity is very small and negligible; the part close to merger cannot be fitted well, and forcing the fit there only introduces errors. We give the fitting results for several different cases as follows:
(i) The same mass ratio \(q=1\), the same time range \(t\in[-2000,-300]\), and different eccentricity (see FIG. 4).
(ii) Different mass ratios \(q\in[1,7]\), the same time range \(t\in[-2000,-300]\), and similar initial eccentricity (see FIG. 4 (b) and FIG. 5)
(iii) The same mass ratio \(q=1\), different time range, and different eccentricity for numerical simulation RIT:eBBH:1422 (see FIG. 6).
From fitting results of each case, we can draw some conclusions:
(i) From FIG. 4, we find that the larger the eccentricity, the worse the fitting effect. When the eccentricity is very small, e.g. \(e=0.0522\), we can almost achieve a complete fit, but when \(e=0.3592\) we cannot. For cases with larger eccentricity, it is not easy to obtain a waveform covering the time range \(t\in[-2000,-300]\), because it requires the BBH to have a larger initial separation.
(ii) From FIG. 4 (b) and FIG. 5, we find the fitting model can be applied to mass ratios \(q\in[1,7]\). We find that a poor fitting result is obtained in FIG. 5 (e), because there are some problems with the numerical simulation RIT:eBBH:1357 itself, not with the fitting model.
(iii) From FIG. 6, we find that the fitting model can be applied to a very long time range. Even for \(t\in[-12000,-300]\), the morphology of the waveform can be roughly captured.
### Extend to higher-order modes
In fact, this model can be applied not only to the 2-2 mode but also to higher-order modes. Exactly as for the 2-2 mode, we must maintain the one-to-one correspondence between the eccentric and circular waveforms; for example, when we study the eccentric nonspinning 2-1 mode, we have to use the associated circular nonspinning 2-1 mode. Here, we only list the 3-3, 2-1, 4-4, 5-5, 3-2 and 4-3 modes of some waveforms given in the SXS catalog, because other modes are not given in the catalog and there are not many higher-order modes of waveforms in the RIT catalog. In FIG. 7, we can see that there is a one-to-one correspondence between the eccentric nonspinning higher-order modes and the circular ones, which makes the same fitting model applicable to them. Here, we only take the numerical simulation SXS:BBH:1368 with mass ratio \(q=1\), time range \(t\in[-2000,-300]\) and initial eccentricity \(e_{0}=0.0929\) as an example to give the fitting results for the higher-order modes. In FIG. 8, we present the fitting results for each mode. In fact, the behavior of all the higher-order modes is exactly the same as that of the 2-2 mode, which means that the model can also be applied to higher-order modes with other mass ratios, other eccentricities, and other time ranges.
### Extend to spin-aligned
Spin is an important parameter of gravitational waves and must be included. From the numerical simulations RIT:eBBH:1282, RIT:eBBH:1740, RIT:eBBH:1763 and RIT:eBBH:1899, we can see that the spin, like the eccentricity, has the effect of speeding up the BBH merger. We find that we can establish a relationship between eccentric spin-aligned waveforms and circular orbit waveforms from two perspectives.
(i) As in the previous case, we can establish a relationship of the form of Eq. (29) between eccentric spin-aligned waveforms and circular orbit spin-aligned waveforms. We must keep the spins of the eccentric and circular waveforms consistent. For example, if the initial dimensionless spins of the BBH are \(\chi_{z1}=-0.5\), \(\chi_{z2}=-0.5\) for one of the waveforms, they must be the same for the other. Here, we take as the eccentric waveform RIT:eBBH:1899, with \(q=1\), dimensionless spins \(\chi_{z1}=-0.5\), \(\chi_{z2}=-0.5\), time range \(t\in[-3000,-300]\) and initial eccentricity \(e=0.1110\), and as the circular orbit waveform SXS:BBH:0325, which has the same mass ratio, time range and dimensionless spins. In FIG. 9 (a), we present the fitting result.
waveform, and the straight line \(e_{\mathcal{A}}=g\) is the approximate axis of symmetry of the waveform. In FIG. 9 (b), we show the fitting result and the position of the parameter \(g\). We select the waveforms RIT:eBBH:1282, RIT:eBBH:1740, RIT:eBBH:1763 and RIT:eBBH:1899 to study the influence of the spin on the fitting. We find that the spin translates the entire waveform by \(g\) towards the negative half-axis of the \(e_{\mathcal{A}}\)-axis; the larger the spin, the greater the translation (see FIG. 9 (c)). We find a strictly proportional relationship between \(g\) and the absolute value of the effective spin \(\chi_{eff}\) of the BBH (see FIG. 10). It can be expressed as
\[g=a\left|\chi_{\text{eff}}\right|, \tag{33}\]
where \(a=-0.04355\), obtained by a linear fit. We emphasize that the waveforms in FIG. 9 (c) do not have the same initial eccentricity at time \(t=-3000\), because both the eccentricity and the spin affect the evolution of the waveform. If we want to obtain a counterpart of RIT:eBBH:1899 with the same eccentricity from RIT:eBBH:1282, we can use the method introduced in Sec. III.1.
### Extend to spin-precession
When the spin directions of the black holes are not aligned with the orbital angular momentum, the orbit precesses. The larger the effective spin, the stronger the precession effect. Spin-precessing waveforms with eccentricity are extremely rare. In principle, we could generate the eccentric waveform from the two perspectives of spinning and non-spinning as in Sec. II.6, but since the corresponding circular orbit waveform with similar spin is lacking, here we only study the non-spinning case. We take the numerical simulation RIT:eBBH:1631 as the object of study because of its large effective spin, long time range \(t\in[-12000,0]\), and low initial eccentricity \(e_{0}=0.19\). In FIG. 11 (a), we present the waveform RIT:eBBH:1631, which has been shifted by \(|g|=0.03\) towards the positive semi-axis of the \(e_{\mathcal{A}}\)-axis; this value is obtained through Eq. (33). This operation removes the spin effect from the waveform, leaving only the eccentricity and precession effects. It is not easy to obtain the precession effect of the waveform, but when the eccentricity is relatively low we can interpolate the peaks of the waveform and take the midpoint of the upper and lower peaks, because the symmetry axis of the nonspinning waveform is approximately located at the midpoint of its peaks. We model the precession effect accurately by a polynomial fit, though any other analytical method is also possible. We can express the precession effect analytically as follows:
\[f_{p}=\sum_{i=0}^{n}a_{i}\mathcal{A}_{c}^{\ i}, \tag{34}\]
where \(a_{i}\) are the polynomial fit coefficients. In FIG. 11 (a), for the sake of accuracy, we fit the precession effect with a 10th-order polynomial. If we express the spin effect as \(f_{s}=g\), then we can obtain a nonspinning and nonprecessing amplitude eccentricity estimator by
\[e_{Anons,nonp}=e_{\mathcal{A}s,p}-f_{s}-f_{p}, \tag{35}\]
where the subscripts \(nons\) and \(nonp\) stand for nonspinning and nonprecessing, respectively, and \(s\) and \(p\) stand for spin and precession. If we want to obtain a precessing and spinning amplitude eccentricity estimator, we simply invert Eq. (35). In FIG. 11 (b), we subtract the precession effect and obtain an \(e_{\mathcal{A}}\) without spin or precession. However, we emphasize that the waveform in FIG. 11 (b) is not the same as RIT:eBBH:1282, because the spin and eccentricity have influenced the evolution of the waveform and left their imprints in it.
In summary, incorporating Eq. (29), Eq. (33) and Eq. (34), and denoting the eccentricity effect as \(f_{e}\), we can obtain an eccentric spin-precessing \(e_{\mathcal{A}}\) by gradually adding these effects to a nonspinning, nonprecessing circular orbit waveform, which can be described as follows:
\[\begin{split} e_{\mathcal{A}e,s,p}&=f_{e}+f_{s}+f_{p}\\ &=Ae^{B\mathcal{A}_{c}^{\kappa}}\sin\left(f\mathcal{A}_{c}^{\kappa}+\varphi\right)+a\left|\chi_{eff}\right|\\ &+\sum_{i=0}^{n}a_{i}\mathcal{A}_{c}^{\,i}.\end{split} \tag{36}\]
Then, we obtain the corresponding amplitude by the Eq.(30)
\[\begin{split}\mathcal{A}_{e,s,p}&=2\mathcal{A}_{c}\left(f_{e}+f_{s}+f_{p}\right)+\mathcal{A}_{c}\\ &=2\mathcal{A}_{c}\Big[Ae^{B\mathcal{A}_{c}^{\kappa}}\sin\left(f\mathcal{A}_{c}^{\kappa}+\varphi\right)\\ &+a\left|\chi_{eff}\right|+\sum_{i=0}^{n}a_{i}\mathcal{A}_{c}^{\,i}\Big]+\mathcal{A}_{c}.\end{split} \tag{37}\]
We can also get the frequency \(\omega_{e,s,p}\) by the same process. Both of them are fundamental components of gravitational waves.
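A minimal sketch of how Eqs. (36) and (37) assemble the eccentricity, spin and precession contributions on top of a circular-orbit amplitude \(\mathcal{A}_{c}(t)\) is given below; the fitted parameters \((A,B,f,\varphi,\kappa)\), the slope \(a\) of Eq. (33) and the polynomial coefficients \(a_{i}\) of Eq. (34) are assumed to be known, and the function name is illustrative.

```python
import numpy as np

def eccentric_spinning_precessing_amplitude(Ac, fit_pars, a_slope, chi_eff, poly_coeffs):
    """Assemble e_A = f_e + f_s + f_p and A_{e,s,p} = 2*Ac*e_A + Ac
    following Eqs. (36)-(37)."""
    A, B, f, phi, kappa = fit_pars
    f_e = A * np.exp(B * Ac**kappa) * np.sin(f * Ac**kappa + phi)  # eccentricity, Eq. (29)
    f_s = a_slope * abs(chi_eff)                                   # spin shift, Eq. (33)
    # precession, Eq. (34): poly_coeffs = [a_0, a_1, ..., a_n] in ascending order
    f_p = np.polyval(list(poly_coeffs)[::-1], Ac)
    eA = f_e + f_s + f_p
    return 2.0 * Ac * eA + Ac                                      # Eq. (37)
```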
Figure 7: Amplitudes \(\mathcal{A}\) of the higher-order modes (3-3, 2-1, 4-4, 5-5, 3-2 and 4-3) of the waveform SXS:BBH:1368, labeled \(e\), and of its corresponding circular orbit waveform SXS:BBH:1165, labeled \(c\).
## III Results
### 2-2 mode
#### iii.1.1 Fitting results
Some waveforms with large errors need to be discarded. The \(q=6\) waveforms are not shown because of problems in the numerical simulations themselves: they deviate far from the behavior shown by the parameters of the other waveforms. We only show the behavior here for the amplitude and the time range \(t\in[-2000,-300]\), but the fitting parameters for the frequency or for other time ranges share the same behavior. The fitting results for the parameters \(A_{\mathcal{A}}\), \(B_{\mathcal{A}}\), \(f_{\mathcal{A}}\), \(\kappa_{\mathcal{A}}\) at different mass ratios \(q\) are shown in FIG. 12, where the subscript \(\mathcal{A}\) indicates that the fit is to the amplitude. We take the values of \(A_{\mathcal{A}}\) and \(f_{\mathcal{A}}\) as positive, and \(B_{\mathcal{A}}\) and \(\kappa_{\mathcal{A}}\) as positive and negative, respectively. In fact, the values of \(A_{\mathcal{A}}\) and \(f_{\mathcal{A}}\) can be either positive or negative, depending on the parity of the _sine_ function, while \(B_{\mathcal{A}}\) and \(\kappa_{\mathcal{A}}\) must be positive and negative, respectively. The signs of \(A_{\mathcal{A}}\) and \(f_{\mathcal{A}}\) do not affect their relationship with the eccentricity \(e\). The parameter \(\varphi_{\mathcal{A}}\) has no effect on the morphological properties of the amplitude and frequency of the waveform; it only translates them along the time coordinate, and therefore does not affect the eccentricity \(e\) or the mass ratio \(q\) of the waveform. So \(\varphi_{\mathcal{A}}\) is a free parameter, and we also find that it shows a certain periodicity.
As seen in FIG. 12, the parameters \(A_{\mathcal{A}}\), \(B_{\mathcal{A}}\), \(f_{\mathcal{A}}\), and \(\kappa_{\mathcal{A}}\) are related to the mass ratio \(q\), as can be judged from the stratification of the curves.
(i) There is a strictly proportional relationship between \(A_{\mathcal{A}}\) and the eccentricity \(e\): the larger the eccentricity, the larger \(A_{\mathcal{A}}\), which follows from the relationship between the amplitude eccentricity estimator \(e_{\mathcal{A}}\) and the eccentricity \(e\). In FIG. 12 (a), we see that \(A_{\mathcal{A}}\) is not only
Figure 10: The red 'x's show the relationship between \(g\) from FIG. 9 (c) and the effective spin \(|\chi_{\rm eff}|\), and the blue line represents a linear fit to them.
Figure 9: From left to right: Figure (a) is the fitting result of the amplitude eccentricity estimator of waveform RIT:eBBH:1899, computed with respect to a circular orbit waveform with the same spin. Figure (b) is the fitting result of the amplitude eccentricity estimator of waveform RIT:eBBH:1899, computed with respect to a circular orbit waveform without spin; the line \(g\) shows the effect of the spin. Figure (c) shows the amplitude eccentricity estimators of waveforms RIT:eBBH:1282, RIT:eBBH:1740, RIT:eBBH:1763 and RIT:eBBH:1899, computed with respect to a circular orbit waveform without spin; the correspondingly colored lines \(g\) show the effect of the spin on them.
related to eccentricity, but also to mass ratio \(q\).
(ii) The dependences of \(B_{\mathcal{A}}\) and \(f_{\mathcal{A}}\) on the eccentricity are very similar, but their magnitudes are not the same. They are distinctly stratified at different mass ratios (see FIG. 12 (b) and (c)), and there is a slightly monotonic relationship between them and the eccentricity. We cannot ignore this monotonic relationship, because the accuracy of the waveform is extremely sensitive to these two parameters.
(iii) The parameter \(\kappa_{\mathcal{A}}\) is also related to both the mass ratio and the eccentricity. When we obtain the fitting results for the higher-order modes, we will find that the role of \(\kappa_{\mathcal{A}}\), \(f_{\mathcal{A}}\) and \(B_{\mathcal{A}}\) is to adjust Eq. (28), making it suitable for different mass ratios and higher-order modes.
(iv) The larger the eccentricity, the more spread out the points are, which indicates larger errors due to interpolation and fitting, as analyzed in Sec. II.3.
These parameters depend strongly on the eccentricity \(e\) and the mass ratio \(q\). When the amount of data is relatively small, as in Ref. [1], it is difficult to judge this; when the amount of data is large, we can see it clearly. Since the parameters have a complex relationship not only with the mass ratio but also with the eccentricity, it is very difficult to cover the parameter space of \(q\) and \(e\) at the same time; we can only fix the mass ratio \(q\) and then cover the eccentricity \(e\) space. As shown in FIG. 12, since there are many data points and they are very scattered, it is difficult to interpolate them. We therefore obtain the relationships between the parameters \(A_{\mathcal{A}}\), \(B_{\mathcal{A}}\), \(f_{\mathcal{A}}\), \(\kappa_{\mathcal{A}}\) and \(e\) by polynomial fitting. We use a linear fit for \(A_{\mathcal{A}}\); for \(B_{\mathcal{A}}\), \(f_{\mathcal{A}}\) and \(\kappa_{\mathcal{A}}\), the most accurate results are obtained with a second-order polynomial fit, although other choices are possible. It is worth emphasizing that the polynomial fitting does not mean that there must be a continuous monotonic relationship between these four parameters and the eccentricity; it is only an approximation. To get a more accurate relationship between them, we would need more eccentric waveform data.
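A minimal sketch of this parameter modelling at fixed mass ratio is given below: a linear fit for \(A_{\mathcal{A}}\) and second-order polynomial fits for \(B_{\mathcal{A}}\), \(f_{\mathcal{A}}\) and \(\kappa_{\mathcal{A}}\). The per-waveform arrays of fitted values are assumed to be available, and the function name is illustrative.

```python
import numpy as np

def parameter_models(e, A_vals, B_vals, f_vals, kappa_vals):
    """At fixed mass ratio q, model each Eq.-(29) parameter as a function of
    the initial eccentricity e: linear for A, quadratic for B, f and kappa."""
    pA = np.polyfit(e, A_vals, 1)
    pB = np.polyfit(e, B_vals, 2)
    pf = np.polyfit(e, f_vals, 2)
    pk = np.polyfit(e, kappa_vals, 2)
    # Return callables that predict the parameters for a new eccentricity.
    return (lambda x: np.polyval(pA, x),
            lambda x: np.polyval(pB, x),
            lambda x: np.polyval(pf, x),
            lambda x: np.polyval(pk, x))
```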
#### iii.1.2 Mismatch
Since our purpose is to reproduce the complete numerical relativity waveforms, they are naturally our comparison target. We use the leave-one-out method, setting one waveform of the data set as the test data and the other waveforms as training data to generate the fitting parameters \(A_{\mathcal{A}}\), \(B_{\mathcal{A}}\), \(f_{\mathcal{A}}\) and \(\kappa_{\mathcal{A}}\). Next we obtain the corresponding fitting parameters from the eccentricity of the left-out test waveform. Once we have the fitting parameters, we can obtain the corresponding amplitude \(\mathcal{A}\) and frequency \(\omega\) through Eq. (29). After this, we integrate the frequency to get the phase by
\[\Phi=\int_{t_{1}}^{t_{2}}\omega dt, \tag{38}\]
where \(t_{1}\) and \(t_{2}\) are the lower and upper limits of integration. We can then reconstruct the test waveform with Eq. (19). To evaluate the similarity between the test waveform and the newly reconstructed waveform, we need to
Figure 11: Figure (a) is the amplitude eccentricity estimator of waveform RIT:eBBH:1631, which has been shifted by \(|g|=0.03\) towards the positive semi-axis of the \(e_{\mathcal{A}}\)-axis; \(f_{p}\) represents the precession effect. Figure (b) is the non-spinning, non-precessing amplitude eccentricity estimator of the waveform after subtracting the precession effect.
calculate the overlap as in Ref. [31]:
\[\mathcal{O}=\max_{t_{0},\Phi_{0},\varphi_{\mathcal{A}},\varphi_{\omega}}\frac{ \left\langle h_{1},h_{2}\right\rangle}{\sqrt{\left\langle h_{1},h_{1}\right\rangle \left\langle h_{2},h_{2}\right\rangle}}, \tag{39}\]
where \(\left\langle h_{1},h_{2}\right\rangle\) is the inner product of waveform \(h_{1}\) and \(h_{2}\) defined as
\[\left\langle h_{1},h_{2}\right\rangle=\left|\int_{t_{\min}}^{t_{\max}}h_{1}(t )h_{2}^{*}(t)dt\right|, \tag{40}\]
where \(h_{2}^{*}(t)\) is the complex conjugate of \(h_{2}(t)\), \(t_{0}\) and \(\Phi_{0}\) are a given time and phase, and \(\varphi_{\mathcal{A}}\) and \(\varphi_{\omega}\) are free parameters inherited from the construction of the waveform. We calculate the overlap in the time domain because the waveform we construct is a time-domain waveform and each fitting parameter related to eccentricity is equivalently related to time. At the same time, we set the Power Spectral Density (PSD) to unity instead of using the noise PSD of LIGO or other gravitational wave detectors, in order to reflect the fitting quality over the entire time domain; we are not concerned here with the detection of gravitational waves, but only with the waveform itself. For convenience, we also define the mismatch, or unfaithfulness, as
\[\mathcal{M}=1-\mathcal{O}. \tag{41}\]
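A minimal time-domain sketch of Eqs. (38)-(41) is given below, assuming two complex strains sampled on a common uniform time grid with a flat (unit) PSD; the modulus of the inner product takes care of the phase maximization, the time-shift maximization is done by a brute-force scan, and the function names and shift range are illustrative.

```python
import numpy as np

def phase_from_frequency(t, omega):
    """Phase of Eq. (38): cumulative trapezoidal integral of the frequency."""
    dt = np.diff(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1]) * dt)))

def overlap(h1, h2, dt, max_shift=200):
    """Eqs. (39)-(40) with a flat (unit) PSD: normalized time-domain inner product,
    maximized over a relative time shift; the modulus handles the phase offset."""
    assert max_shift < len(h1)                       # keep the overlapping segment non-empty
    def inner(a, b):
        return np.abs(np.sum(a * np.conj(b)) * dt)
    best = 0.0
    for s in range(-max_shift, max_shift + 1):       # brute-force scan over time shifts
        a = h1[s:] if s >= 0 else h1[:s]
        b = h2[:len(h2) - s] if s >= 0 else h2[-s:]
        best = max(best, inner(a, b) / np.sqrt(inner(a, a) * inner(b, b)))
    return best

def mismatch(h1, h2, dt, **kw):
    return 1.0 - overlap(h1, h2, dt, **kw)           # Eq. (41)
```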
We do not calculate \(\mathcal{M}\) over a single time range, but over a sequence of ranges whose lower limit is shifted in steps of \(250\), from \(t\in[-3000,-300]\) to \(t\in[-1500,-300]\); we expect similar behavior for any other continuous time range. Here we need not consider the total mass \(M\), because all units cancel in Eq.(39). When calculating \(\mathcal{M}\), we divide the eccentricities into four ranges and use different colors to represent them: blue denotes \(e\in[0,0.1]\), green \(e\in[0.1,0.2]\), red \(e\in[0.2,0.3]\), and black \(e\in[0.3,0.4]\). The purpose of this is to study the effect of eccentricity on \(\mathcal{M}\). In FIG. 13, we show the \(\mathcal{M}\) between the waveform obtained by the leave-one-out method and the test waveform for different eccentricities \(e\in[0,0.4]\), different mass ratios \(q\in[1,3]\) and different time ranges. For mass ratios \(q\in[4,7]\), since there are few waveforms, we can only use the fit of the test waveform itself to calculate \(\mathcal{M}\), which is still meaningful for assessing the fitting quality of the model.
#### iii.2.3 Analysis
\(\mathcal{M}\) reflects the similarity of the waveforms. From FIG. 13, we can see that
(i) Different time ranges yield different mismatches. When the time range is very long, such as \(t\in[-3000,-300]\) or longer, or very short, such as \(t\in[-1500,-300]\) or shorter, the fit is not as good as for the ranges \(t\in[-2000,-300]\) and \(t\in[-2500,-300]\). The reason is that when the waveform is very long the model cannot fully capture all of its overall properties, while when the waveform is short the model gives a large error because the waveform provides too little information.
(ii) For waveforms with mass ratio \(q\in[1,3]\), we obtain a relatively small \(\mathcal{M}\), but for mass ratios \(q\in[4,7]\) we cannot obtain a small \(\mathcal{M}\) even when fitting the waveform itself. This is not because the model is unsuitable for \(q\in[4,7]\), but because of errors in the waveforms themselves in the RIT catalog, which are particularly obvious in the amplitude eccentricity estimator \(e_{\mathcal{A}}\). For this reason we do not show \(e_{\mathcal{A}}\) in FIG. 5 but present the frequency eccentricity estimator \(e_{\omega}\) instead. Some of the \(e_{\mathcal{A}}\) curves from the RIT catalog show strong local ups and downs, which are clearly caused by errors in the waveforms themselves.
(iii) We stratify the eccentricities to show that different eccentricities yield different \(\mathcal{M}\). Excluding the mass-ratio range \(q\in[4,7]\), which has large errors, we obtain \(\mathcal{M}\) less than \(10^{-4}\) for \(e\in[0,0.1]\), less than \(10^{-3}\) for \(e\in[0.1,0.2]\), less than \(10^{-2}\) for \(e\in[0.2,0.3]\), and less than \(10^{-1}\) for \(e\in[0.3,0.4]\). This implies that the model fits the waveform less well as the eccentricity increases, consistent with the conclusion drawn in Sec. II.4.
#### iii.2.4 Morphology of eccentric waveform
There is a clear difference in morphology between the eccentric waveform and the circular-orbit waveform. The separation of the BBHs is smallest at periastron and largest at apastron, so the amplitude and frequency of the waveform are relatively large at periastron and relatively small at apastron; this holds for the dominant mode in FIG. 2 as well as for the higher-order modes in FIG. 7. However, not only are the eccentric waveform and the circular-orbit waveform morphologically different, but
Figure 13: \(\mathcal{M}\) between the waveform obtained by the leave-one-out method and the test waveform for different eccentricities \(e\in[0,0.4]\), different mass ratios \(q\in[1,3]\) and different time ranges. For mass ratios \(q\in[4,7]\), since there are few waveforms, we can only use the fit of the test waveform to calculate \(\mathcal{M}\). In the figure, blue denotes \(e\in[0,0.1]\), green \(e\in[0.1,0.2]\), red \(e\in[0.2,0.3]\) and black \(e\in[0.3,0.4]\).
also the low-eccentricity and high-eccentricity waveforms differ considerably in morphology. It is difficult to see this morphological difference by comparing eccentric waveforms alone, but the circular-orbit waveform and the eccentricity estimator provide a new perspective. The eccentricity estimator is defined as a _cosine_ function by Eq.(9); when the eccentricity estimator deviates from _cosine_ behavior, measuring the eccentricity through it introduces errors [88]. In FIG. 14, we show the amplitudes of the waveforms with mass ratio \(q=1\), time range \(t\in[-2000,-300]\) and initial eccentricities of \(e_{0}=0\) (circular), \(e_{0}=0.0522\) (1355), \(e_{0}=0.2014\) (1362) and \(e_{0}=0.3653\) (1286), respectively. \(T_{p1}\) and \(T_{a1}\) represent the times of the periastron passage and apastron passage of the first and second half cycle of waveform 1355 relative to the circular-orbit waveform; \(T_{p2}\), \(T_{a2}\) and \(T_{p3}\), \(T_{a3}\) are the corresponding quantities for 1362 and 1286. From FIG. 14 (a), we find that the greater the eccentricity of the waveform, the stronger the oscillation in its amplitude. As the eccentricity increases, the periastron passage and apastron passage of the waveform gradually show different behaviors: the former becomes sharper and the latter becomes smoother. We find the ratios \(T_{a3}/T_{p3}>T_{a2}/T_{p2}>T_{a1}/T_{p1}\), which means that the apastron passage becomes progressively longer relative to the periastron passage. This behavior in the amplitude is passed on to the associated amplitude eccentricity estimator. To show this effect, in FIG. 14 (b) we take the absolute value of the eccentricity estimator, so that periastron and apastron are at the same level, and then connect all the points to draw a trend line, where \(a1\), \(p1\), etc. are the corresponding periastron and apastron points in FIG. 14 (a). The ups and downs of these trend lines indicate how far the eccentricity estimator deviates from the _cosine_ function. The trend line of waveform 1355 is roughly a straight line, implying that it does not deviate from _cosine_ behavior, whereas the trend line of waveform 1286 is an obvious broken line, implying that it does. Due to the monotonic relationship between the amplitude of the circular orbit and time, the behavior of the amplitude eccentricity estimator with respect to time is passed on to its behavior with respect to the amplitude of the circular orbit (see FIG. 14 (c)). We can derive the behavior of Eq.(28) by a magnitude analysis: from FIG. 12, we get \(B\sim 10^{-3}\), \(X_{c}{}^{e}\sim 10^{3}\) and \(e^{BX_{c}^{e}}\sim 1\), so the overall behavior of Eq.(29) is a _sine_ or _cosine_ function similar to FIG. 4 (a). The trend lines in FIG. 14 (c) show that 1355 does not deviate from sinusoidal behavior, while 1286 does. All in all, the degree of deviation of the eccentricity estimator from the sinusoidal function determines the range of applicability of the phenomenological fitting model. The greater the eccentricity, the greater the deviation of the eccentricity estimator from sinusoidal behavior, which is why the model cannot be used at high eccentricity. As we can see in FIG. 13, different eccentricities give different fitting qualities, so when higher accuracy is desired, the eccentricity should be restricted to the appropriate sub-intervals of \([0,0.4]\).
### Other situations
#### iii.2.1 Higher-order modes
Compared with the 2-2 mode, the higher-order modes differ substantially in amplitude and frequency, which leads to different magnitudes of the parameters, but their behavior with eccentricity and mass ratio is the same as for the 2-2 mode. The corresponding fits can be obtained with the method of Sec. III.1, so we do not go into details here.
The mismatch \(\mathcal{M}\) can be calculated through Eq.(39). As an example, we take the fits of the higher-order modes 3-3, 2-1, 4-4, 5-5, 3-2 and 4-3 with mass ratio \(q=2\), eccentricity \(e\in[0,0.1]\) and time range \(t\in[-2000,-300]\) to show the mismatch \(\mathcal{M}\) we obtain (see FIG. 15). The situation is similar for other parameters. Here we only use the fit itself to calculate \(\mathcal{M}\) because there is too little data for the higher-order modes of the eccentric waveforms. When we obtain enough
Figure 14: Amplitudes of the waveforms with mass ratio \(q=1\), time range \(t\in[-2000,-300]\) and initial eccentricities of \(e_{0}=0\) (circular), \(e_{0}=0.0522\) (1355), \(e_{0}=0.2014\) (1362) and \(e_{0}=0.3653\) (1286), respectively. \(T_{p1}\) and \(T_{a1}\) represent the times of the periastron passage and apastron passage of the first and second half cycle of waveform 1355 relative to the circular orbit waveform. \(T_{p2}\), \(T_{a2}\) and \(T_{p3}\), \(T_{a3}\) are the corresponding quantities for 1362 and 1286. From the figure, we can see that the deviation from _cosine_ behavior is passed from Figure (a) to Figure (b) and then to Figure (c).
higher-order-mode data for eccentric waveforms, we can apply the same procedure as in Sec. III.1. From FIG. 15, we can see that the mismatch \(\mathcal{M}\) of the higher-order modes and of the 2-2 mode shares roughly the same behavior with eccentricity. Therefore, this model is able to fit the higher-order modes very well.
#### iii.2.2 Spin-aligned
As stated in Sec. II.6, the spin-aligned waveform can be obtained in two ways: from the circular-orbit waveform with spin, or from the circular-orbit waveform without spin. We can fit the waveforms by Eq.(29) or Eq.(32). Since the number of eccentric spinning waveforms is small, and it is not easy to find an eccentric waveform whose component spins match those of a circular-orbit waveform in both magnitude and direction, we can only present the fitting results for two cases here. We show the mismatch \(\mathcal{M}\) for different time ranges with mass ratio \(q=1\) in FIG. 16, in which RIT:eBBH:1740, RIT:eBBH:1763 and RIT:eBBH:1899 are based on circular-orbit waveforms without spin, and the other two, RIT:eBBH:1763 and RIT:eBBH:1899, are based on circular-orbit waveforms with spin. For both cases, the mismatch \(\mathcal{M}\) we obtain is less than about \(10^{-4}\), indicating that the model can fit them well.
#### iii.2.3 Spin-precession
The eccentric spin-precessing BBH merger is one of the most complicated cases in numerical relativity, and there are few studies of it. The waveforms are interspersed with multiple complex effects and are difficult to analyze. In Sec. II.7, we proposed a method for separating the eccentric spin-precessing waveform into different effects as in Eq.(36). Here we select the part of the waveform RIT:eBBH:1361 in the time range \(t\in[-3000,-300]\) and take the amplitude as an example. First, we measure the eccentricity of the waveform given by FIG. 11 (b) at time \(t=-3000\) and obtain \(e=0.090\). Then we obtain the parameter values \(A_{\mathcal{A}}=0.037593098\), \(B_{\mathcal{A}}=0.000556893\), \(f_{\mathcal{A}}=0.034178913\), \(\kappa_{\mathcal{A}}=-3.277506968\) corresponding to the eccentricity \(e=0.090\) from FIG. 12. Next we obtain an eccentricity estimator with mass ratio \(q=1\), no spin and eccentricity \(e=0.090\) via Eq.(29). The results are shown in FIG. 17 (a), where the blue solid line is labeled “removed”, because we removed the spin and precession effects from it, and the dark orange dashed line is what we “got”. Then we add the spin effect \(-g=0.3\) and the precession effect \(f_{p}\) to the “got” waveform to obtain an eccentricity estimator with spin precession through Eq.(36), which precisely recovers the characteristics of the eccentricity estimator of the original waveform 1631 (see FIG. 17 (b)). Finally, we obtain the amplitude of a waveform with eccentricity and spin precession by Eq.(37), which has a good overlap with the amplitude of waveform 1631 (see FIG. 17 (c)). For the frequency \(\omega\), the manipulations are the same. Combining the two, we obtain a mismatch \(\mathcal{M}=0.00056\), which means that we accurately reproduce the original waveform 1631.
Not all eccentric spin-precessing waveforms have a precession effect as simple as that of RIT:eBBH:1361. Waveform RIT:eBBH:1701 has an effective spin of zero because the spins of the BBHs are anti-aligned, leading to \(g=0\). If we assume that the precession effect and the eccentricity effect are independent of each other, then the parameter \(f\) depends only on the eccentricity. We can therefore force-fit its eccentricity estimator to try to extract its precession effect (see FIG. 18). As in the previous case, subtracting the fit yields the precession effect.
## IV Conclusion and Outlook
The eccentricity and spin of gravitational waves reflect the dynamics of BBH mergers. However, there are very few public numerical relativity simulations with eccentricity. Setyawati and Ohme [1] proposed a novel method to convert quasi-circular orbit waveforms into eccentric ones, but their method is very limited because
Figure 16: Mismatch \(\mathcal{M}\) for different time ranges with mass ratio \(q=1\). RIT:eBBH:1740, RIT:eBBH:1763 and RIT:eBBH:1899 are based on circular orbit waveforms without spin, and the other two, RIT:eBBH:1763 and RIT:eBBH:1899, are based on circular orbit waveforms with spin.
of the small range of parameters. To the eccentric waveforms of the SXS catalog we add eccentric waveforms from the RIT catalog, including waveforms with mass ratio \(q\in[1,7]\), eccentricity \(e\in[0,0.4]\), spin alignment, and spin precession, greatly expanding the parameter space. We find that after promoting the fixed constant parameter to the variable parameter \(\kappa\) as in Eq.(29), the applicability of the model becomes wider: it can be applied to mass ratios \(q\in[1,7]\), eccentricities up to \(e=0.4\), time ranges up to \(t\in[-12000,-300]\), higher-order modes and spin alignment. We use the leave-one-out method to verify this model. For \(e\in[0,0.1]\), it gives an overlap of more than \(99.99\%\); for \(e\in[0.1,0.2]\), more than \(99.9\%\); for \(e\in[0.2,0.3]\), more than \(99\%\); and for \(e\in[0.3,0.4]\), more than \(90\%\). The reason for this trend is that the larger the eccentricity, the larger the deviation of the eccentricity estimator from the _cosine_ function due to the large change in the morphology of the eccentric waveform, and hence the worse the fit of the model. After adding a shift parameter as in Eq.(32), the model can convert a waveform without spin and eccentricity into a waveform with spin and eccentricity; that is, the spin effect can be added to a nonspinning waveform. For some waveforms with relatively simple and obvious precession effects, we can separate out the precession effect. Finally, the spin-precessing waveform with eccentricity can be regarded as the superposition of various effects, including eccentricity, spin and precession, which is very novel and simple. We can also obtain models of complex precession phenomena in this way. We believe that this _phenomenological and universal_ relationship can not only help us to generate fast and accurate gravitational waveforms with eccentricity and spin precession, but also offer a new perspective for understanding the eccentricity, spin and precession effects of BBH dynamics on the waveform.
Due to the small amount of waveform data used in this work and the lack of generality in the spin-aligned and spin-precessing cases, we only study some special cases of them. We hope that more and more numerical relativity simulations with eccentricity will become available in the future, which will make this phenomenological model richer and more accurate.
Once a large coverage of the parameters, including mass ratio, eccentricity, and spins, is incorporated into the phenomenological relationship, we will be able to construct a large number of waveforms easily. These could be used as waveform templates in searches for GWs by LIGO/Virgo [92] or by upcoming next-generation gravitational wave detectors such as the Einstein Telescope [93]. As the waveforms are scalable to any mass, the model can also be applied to mergers of galactic-center binary black holes, which happen in the early universe and could be detectable by space-borne gravitational wave missions such as the Laser Interferometer Space Antenna (LISA) [94] and Tianqin [95].
###### Acknowledgements.
The authors are very grateful to the SXS collaboration and the RIT collaboration for the numerical simulations of eccentric BBH mergers, and thank Yun-Gui Gong, Xiao-Lin Liu, Ying-Yan Li, Chao Zhang, Qingwen Wu, and Shi-Yan Tian for their helpful discussions. This
Figure 17: In figure (a), the blue solid line represents the part of the waveform RIT:eBBH:1361 in the time range \(t\in[-3000,-300]\), labeled “removed”, because the spin and precession effects have been removed from it. The dark orange dashed line represents the new waveform we “got” from the parameters corresponding to the eccentricity \(e=0.090\) through FIG. 12. In figure (b), the blue solid line represents the “got” amplitude eccentricity estimator after adding the spin effect \(-g=0.3\) and the precession effect \(f_{p}\). The dark orange dashed line represents the amplitude eccentricity estimator of waveform 1631. In figure (c), the dark orange dashed line is the “got” amplitude of a waveform with eccentricity and spin precession obtained from Eq.(37). The blue solid line represents the amplitude of waveform 1631.
Figure 18: The blue solid line represents the amplitude eccentricity estimator of waveform 1701, and the dark orange dashed line represents a forced fit to it.
work is in part supported by the National Key R&D Program of China (2022SKA0130103, 2021YFC2203100), and by the National Natural Science Foundation of China (Grant Nos. 12041306 and U1931203). We also acknowledge the science research grants from the China Manned Space Project with No. CMS-CSST-2021-B11. The computation is partly completed in the HPC Platform of Huazhong University of Science and Technology.
|
2301.08226 | Preparing quantum many-body scar states on quantum computers | Quantum many-body scar states are highly excited eigenstates of many-body
systems that exhibit atypical entanglement and correlation properties relative
to typical eigenstates at the same energy density. Scar states also give rise
to infinitely long-lived coherent dynamics when the system is prepared in a
special initial state having finite overlap with them. Many models with exact
scar states have been constructed, but the fate of scarred eigenstates and
dynamics when these models are perturbed is difficult to study with classical
computational techniques. In this work, we propose state preparation protocols
that enable the use of quantum computers to study this question. We present
protocols both for individual scar states in a particular model, as well as
superpositions of them that give rise to coherent dynamics. For superpositions
of scar states, we present both a system-size-linear depth unitary and a
finite-depth nonunitary state preparation protocol, the latter of which uses
measurement and postselection to reduce the circuit depth. For individual
scarred eigenstates, we formulate an exact state preparation approach based on
matrix product states that yields quasipolynomial-depth circuits, as well as a
variational approach with a polynomial-depth ansatz circuit. We also provide
proof of principle state-preparation demonstrations on superconducting quantum
hardware. | Erik J. Gustafson, Andy C. Y. Li, Abid Khan, Joonho Kim, Doga Murat Kurkcuoglu, M. Sohaib Alam, Peter P. Orth, Armin Rahmani, Thomas Iadecola | 2023-01-19T18:36:46Z | http://arxiv.org/abs/2301.08226v3 | # Preparing quantum many-body scar states on quantum computers
###### Abstract
Quantum many-body scar states are highly excited eigenstates of many-body systems that exhibit atypical entanglement and correlation properties relative to typical eigenstates at the same energy density. Scar states also give rise to infinitely long-lived coherent dynamics when the system is prepared in a special initial state having finite overlap with them. Many models with exact scar states have been constructed, but the fate of scarred eigenstates and dynamics when these models are perturbed is difficult to study with classical computational techniques. In this work, we propose state preparation protocols that enable the use of quantum computers to study this question. We present protocols both for individual scar states in a particular model, as well as superpositions of them that give rise to coherent dynamics. For superpositions of scar states, we present both a system-size-linear depth unitary and a finite-depth nonunitary state preparation protocol, the latter of which uses measurement and postselection to reduce the circuit depth. For individual scarred eigenstates, we formulate an exact state preparation approach based on matrix product states that yields quasipolynomial-depth circuits, as well as a variational approach with a polynomial-depth ansatz circuit. We also provide proof of principle state-preparation demonstrations on superconducting quantum hardware.
Footnote †: preprint: FERMILAB-PUB-22-904-SQMS
## I Introduction
Recent decades have seen remarkable advances in our understanding of how quantum statistical mechanics can emerge from isolated strongly-interacting quantum mechanical systems. One of the most fundamental of these advances is the so-called eigenstate thermalization hypothesis (ETH) [1; 2; 3; 4], which posits that individual quantum mechanical eigenstates at finite energy density become locally equivalent in the thermodynamic limit to equilibrium Gibbs ensembles at a corresponding temperature. Such eigenstates also govern the approach to this local equilibrium under unitary dynamics [5]. In parallel with these developments, there have been enormous strides towards realizing quantum technologies based on coherent quantum systems that are approximately isolated on experimentally relevant time scales. This gives rise to the possibility of testing the ETH and the related phenomenon of quantum information scrambling experimentally using analog quantum simulators [6; 7; 8; 9; 10] and digital quantum computers [11].
Along with this progress in understanding the ETH and quantum thermalization has come the realization that there are quantum systems that do not thermalize under certain conditions. Two notable examples include integrable [12; 13] and many-body localized systems [14; 15; 16], where an extensive number of conserved quantities preclude the possibility of reaching a conventional locally thermalized state. An alternative means of avoiding thermalization is provided by quantum many-body scars (QMBS), a phenomenon whereby nonintegrable quantum systems exhibit a set of rare finite-energy-density eigenstates that do not satisfy the ETH [17; 18; 19]. Such eigenstates have been found in a variety of contexts, including the Affleck-Kennedy-Lieb-Tasaki (AKLT) model [20; 21], ensembles of Rydberg atoms [22; 23; 24; 25], and various other interacting spin [26; 27; 28; 29; 30; 31; 32; 33; 34], bosonic [35], and fermionic [36; 37; 38] models. These eigenstates can also give rise to coherent periodic dynamics from certain initial states, which has allowed QMBS to be observed in quantum simulation experiments [22; 25; 39; 40].
Substantial effort has been devoted to formulating mathematical criteria for the emergence of QMBS [28; 29; 31; 34; 35; 41], and all examples so far require some degree of fine tuning. Understanding the fate of scarred eigenstates and the associated coherent dynamics un
der perturbations is thus an important research direction, but relatively little progress has been made so far. Ref. [42] found a general lower bound on the thermalization timescale for a scarred state in the presence of a perturbation, \(t_{*}=O(\epsilon^{-1/(d+1)})\), where \(\epsilon\) is the perturbation strength and \(d\) is the system's spatial dimension. However, this bound relies only on the underlying Hamiltonian's spatial locality, and numerical studies of specific models in, e.g., Refs. [34; 42] found lifetimes that substantially exceed this bound.
Studying the lifetime of QMBS under perturbations is challenging because, although scarred eigenstates often have modest entanglement and can be represented efficiently in terms of matrix product states (MPSs) [21; 30; 43], perturbations couple them to states nearby in energy which typically have extensive volume-law entanglement. Analytical perturbation theory is impractical here owing both to the complexity of these highly excited eigenstates and to the exponentially large density of states at finite energy density. Numerical methods to directly evaluate the real time evolution of a scarred state under perturbations also encounter challenges. Exact methods are limited to small system sizes, while approximate tensor network methods [44; 45] are generally limited to early times [46].
A natural question is whether one could exploit quantum computers to investigate the behavior of scarred states under perturbations. It has long been known that quantum computers can evaluate real-time dynamics efficiently [47]; indeed, simulating quantum dynamics is one of the leading candidates for near-term practical quantum advantage [48; 49]. However, even if we assume access to a noiseless quantum computer that can perform accurate time evolution, we are still left with the challenge of state preparation: how can we prepare scarred states on a digital quantum computer, and what are the resources required to do so? This is the subject of the present work.
There are two general state preparation tasks that one faces in this context, depending on whether one wants to investigate the lifetime of scarred _dynamics_ or scarred _eigenstates_. In the first case, the aim is to time-evolve a _superposition_ of scarred states that exhibits periodic dynamics in the unperturbed limit and extract the lifetime of the observed oscillations. In many cases of interest, a product state is sufficient for these purposes and the state preparation is therefore trivial [22; 23; 25; 34; 35; 39; 40; 50]. However, in other cases, the simplest superposition of scar states is area-law entangled and has a nontrivial MPS representation with a finite correlation length [51; 27; 52], and here some thought must be put into the most efficient method to prepare such states.
In the second case, the aim is to prepare an _individual_ scarred eigenstate and evolve it under the perturbed Hamiltonian. In many cases of interest, scarred eigenstates have entanglement scaling logarithmically with system size [21; 24; 26; 27; 36; 37; 53; 51], similar to critical states in one dimension that are described by conformal field theories. Although the entanglement content of these states is modest compared to typical volume-law states at the same energy density, it is an open question what are the minimal quantum resources needed to prepare such a state.
In this work we address both of the above cases for a specific model with QMBS. In Sec. II we define the model and state preparation tasks in more detail. In Sec. III we consider the problem of preparing a particular class of superpositions of scarred eigenstates that can be realized as a one-parameter family of MPSs with bond dimension \(\chi=2\). We consider two related approaches. First, we explicitly construct a linear-depth circuit that prepares the desired state with perfect fidelity. Second, we discuss a probabilistic method that prepares the desired states in constant depth using measurements and postselection. The latter method has a postselection success probability that decays exponentially with system size, albeit with a base that can be tuned by adjusting the circuit depth. This allows for a flexible tradeoff between circuit depth and success probability that is advantageous for implementation on near-term quantum hardware. In Sec. IV, we discuss two strategies for the preparation of individual scarred eigenstates. In the first strategy, we identify MPS representations for the scarred eigenstates and convert these to quantum circuits that prepare the states with perfect fidelity in quasipolynomial depth. In the second, we propose a polynomial-depth variational ansatz that we show numerically captures the scarred eigenstates with at least 99% fidelity at numerically accessible system sizes. In both Secs. III and IV, we provide proof-of-concept demonstrations of the respective state preparation tasks on Rigetti quantum processing units (QPUs), often with some additional simplifications in order to obtain better results on hardware. Finally, we provide a conclusion and outlook in Sec. V.
## II Model and state preparation tasks
In this work, we select a particular reference model with QMBS to exemplify our state preparation techniques for scarred eigenstates and their superpositions. We consider the spin-\(1/2\) model defined in Ref. [27], whose Hamiltonian reads
\[\begin{split} H_{0}=\lambda&\sum_{i=2}^{N-1}(X_{i}-Z _{i-1}X_{i}Z_{i+1})\\ &+\Delta\sum_{i=1}^{N}Z_{i}+J\sum_{i=1}^{N-1}Z_{i}Z_{i+1}.\end{split} \tag{1}\]
This Hamiltonian acts on a chain of \(N\) qubits with open boundary conditions, and each qubit is equipped with Pauli operators \(X_{i},Y_{i},\) and \(Z_{i}\). Note that \(H_{0}\) commutes with the operators \(Z_{1}\) and \(Z_{N}\)--the \(Z\)-basis projections
of the edge qubits are therefore conserved quantities. \(H_{0}\) also conserves the number of Ising domain walls, measured by the operator \(n_{\text{DW}}=\sum_{i=1}^{N-1}(1-Z_{i}Z_{i+1})/2\). The \(\lambda\) term in Eq. (1) can be viewed as a kinetic term for domain walls, while the Ising interaction \(J\) serves as a chemical potential for the domain walls. The \(\Delta\) term induces nonlocal interactions between domain walls and makes the model nonintegrable. It is interesting to note that this model is dual to a \(\mathbb{Z}_{2}\) lattice gauge theory coupled to fermionic matter [54] and can be realized in Rydberg atom quantum simulators in the antiblockade regime [55].
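For readers who wish to experiment with the model numerically, a minimal dense-matrix construction of \(H_{0}\) for small \(N\) is sketched below (with illustrative coupling values); it also checks numerically that the edge spin \(Z_{1}\) and the domain-wall number commute with \(H_{0}\).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(sites, N):
    """Tensor product acting with the given single-site operators on a chain of N qubits.
    `sites` maps a 1-based site index to a 2x2 matrix; all other sites get the identity."""
    return reduce(np.kron, [sites.get(i, I2) for i in range(1, N + 1)])

def build_H0(N, lam, Delta, J):
    """Dense matrix of Eq. (1) for small N."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(2, N):                              # domain-wall hopping, i = 2, ..., N-1
        H += lam * (op({i: X}, N) - op({i - 1: Z, i: X, i + 1: Z}, N))
    for i in range(1, N + 1):                          # longitudinal field
        H += Delta * op({i: Z}, N)
    for i in range(1, N):                              # Ising interaction
        H += J * op({i: Z, i + 1: Z}, N)
    return H

if __name__ == "__main__":
    N, lam, Delta, J = 8, 1.0, 0.3, 0.7                # illustrative couplings
    H0 = build_H0(N, lam, Delta, J)
    Z1 = op({1: Z}, N)
    n_dw = sum((np.eye(2**N) - op({i: Z, i + 1: Z}, N)) / 2 for i in range(1, N))
    print(np.linalg.norm(H0 @ Z1 - Z1 @ H0))           # ~0: edge spin Z_1 is conserved
    print(np.linalg.norm(H0 @ n_dw - n_dw @ H0))       # ~0: domain-wall number is conserved
```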
The model (1) has two towers of QMBS states related by the global spin-flip operation \(G=\prod_{i=1}^{N}X_{i}\). The first tower is given by
\[\ket{\mathcal{S}_{k}}=\frac{1}{k!\sqrt{\mathcal{N}(N,k)}}(Q^{\dagger})^{k} \ket{\Omega}, \tag{2}\]
where \(\mathcal{N}(N,k)=\binom{N-k-1}{k}\), \(\ket{\Omega}=\ket{0\dots 0}\), and \(k=0,\dots,N/2-1\) (we take \(N\) even for simplicity). The raising operator for this tower of states is given by
\[Q^{\dagger}=\sum_{i=2}^{N-1}(-1)^{i}P_{i-1}\sigma_{i}^{+}P_{i+1}, \tag{3}\]
where \(P_{j}=\ket{0}_{j}\bra{0}_{j}=(1+Z_{j})/2\) and \(\sigma_{j}^{\pm}=(X_{j}\pm iY_{j})/2\). The states \(\ket{\mathcal{S}_{k}}\) are eigenstates of \(H_{0}\) with energies \(E_{k}=\Delta N+J(N-1)-(2\Delta+4J)k\). The second tower of states
\[\ket{\mathcal{S}_{k}^{\prime}}=G\ket{\mathcal{S}_{k}},\quad k=0,\dots,\frac{ N}{2}-1 \tag{4}\]
has energies \(E_{k}^{\prime}=-\Delta N+J(N-1)+(2\Delta-4J)k\). Both towers of states therefore have an extensive energy bandwidth, so that typical states in each tower correspond to highly excited states of the model \(H_{0}\). Nevertheless, states in either tower for which \(k/N\) is finite as \(N\to\infty\) have bipartite entanglement entropy scaling as \(\ln(N)\), in contrast to the volume-law entanglement entropy of typical eigenstates at the same energy density [27]. In the remainder of the paper we will restrict our attention to the tower \(\{\ket{\mathcal{S}_{k}}\}\), but all results we obtain for this tower hold equally well for the tower \(\{\ket{\mathcal{S}_{k}^{\prime}}\}\).
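The tower is likewise easy to generate and check for small \(N\). The sketch below builds \(Q^{\dagger}\), constructs \(\ket{\mathcal{S}_{k}}\) from Eq. (2), and verifies the eigenvalue relation \(H_{0}\ket{\mathcal{S}_{k}}=E_{k}\ket{\mathcal{S}_{k}}\); the small dense-matrix helpers are redefined so that the snippet is self-contained, and the coupling values are again illustrative.

```python
import numpy as np
from math import comb, factorial
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)         # P_j = |0><0| = (1 + Z)/2
SP = np.array([[0, 0], [1, 0]], dtype=complex)         # sigma^+ of Eq. (3): creates an excitation, |0> -> |1>

def op(sites, N):
    return reduce(np.kron, [sites.get(i, I2) for i in range(1, N + 1)])

def build_H0(N, lam, Delta, J):
    H = sum(lam * (op({i: X}, N) - op({i - 1: Z, i: X, i + 1: Z}, N)) for i in range(2, N))
    H = H + sum(Delta * op({i: Z}, N) for i in range(1, N + 1))
    return H + sum(J * op({i: Z, i + 1: Z}, N) for i in range(1, N))

def scar_state(N, k):
    """|S_k> of Eq. (2): (Q^dag)^k |0...0> normalized by k! sqrt(binom(N-k-1, k))."""
    Qd = sum((-1) ** i * op({i - 1: P0, i: SP, i + 1: P0}, N) for i in range(2, N))
    omega = np.zeros(2**N, dtype=complex)
    omega[0] = 1.0                                      # |Omega> = |0...0>
    psi = np.linalg.matrix_power(Qd, k) @ omega
    return psi / (factorial(k) * np.sqrt(comb(N - k - 1, k)))

if __name__ == "__main__":
    N, lam, Delta, J = 8, 1.0, 0.3, 0.7                 # illustrative couplings
    H0 = build_H0(N, lam, Delta, J)
    for k in range(N // 2):
        s = scar_state(N, k)
        Ek = Delta * N + J * (N - 1) - (2 * Delta + 4 * J) * k
        # Both printed numbers should be ~0: |S_k> is normalized and an H_0 eigenstate with energy E_k.
        print(k, abs(np.linalg.norm(s) - 1.0), np.linalg.norm(H0 @ s - Ek * s))
```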
Dynamical consequences of these scar states can be observed by evolving the system from a suitable family of initial states with support on the towers of interest. In Ref. [27], it was shown that the following family of states parameterized by \(\xi\in\mathbb{C}\) is exclusively supported on the tower \(\{\ket{\mathcal{S}_{k}}\}\):
\[\ket{\xi}=\frac{1}{\sqrt{Z(|\xi|^{2})}}\mathcal{P}_{\text{fib}}\prod_{i=2}^{N-1}[1+(-1)^{i}\xi\sigma_{i}^{+}]\ket{\Omega}, \tag{5}\]
where the projection operator
\[\mathcal{P}_{\text{fib}}=\prod_{i=1}^{N-1}(1-P_{i}^{\prime}P_{i+1}^{\prime}), \quad P_{i}^{\prime}=1-P_{i} \tag{6}\]
excludes any computational basis state in Eq. (5) containing the local configuration \(\ket{1}_{i}\ket{1}_{i+1}\), and where the normalization factor
\[Z(|\xi|^{2})=\sum_{k=0}^{N/2-1}|\xi|^{2k}\mathcal{N}(N,k). \tag{7}\]
The state \(\ket{\xi}\) can be decomposed onto the tower \(\{\ket{S_{k}}\}\) as follows:
\[\ket{\xi}=\sum_{k=0}^{N/2-1}\xi^{k}\sqrt{\frac{\mathcal{N}(N,k)}{Z(|\xi|^{2})}}\ket{\mathcal{S}_{k}}. \tag{8}\]
As such, for any value of \(\xi\), evolving the initial state \(\ket{\xi}\) under \(H_{0}\) yields perfect periodic revivals of \(\ket{\xi}\) with period \(T=\pi/(\Delta+2J)\) set by the energy spacing between consecutive states in the tower. Unlike typical states in the tower \(\{\ket{\mathcal{S}_{k}}\}\), the state \(\ket{\xi}\) is area-law entangled. In fact, it can be written as an MPS with bond dimension \(\chi=2\)[27], so its bipartite entanglement entropy is upper bounded by \(\ln 2\).
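As a numerical check of this revival, the sketch below builds \(\ket{\xi}\) from Eq. (8) and evolves it for one period \(T\); it assumes the `build_H0` and `scar_state` helpers from the previous sketches are in scope, and the coupling values and \(\xi\) are illustrative.

```python
import numpy as np
from math import comb
from scipy.linalg import expm

# Assumes build_H0(N, lam, Delta, J) and scar_state(N, k) from the previous sketches are in scope.
N, lam, Delta, J = 8, 1.0, 0.3, 0.7                     # illustrative couplings
xi = 0.8                                                # illustrative value of the parameter xi

# |xi> as the superposition of Eq. (8)
Znorm = sum(abs(xi) ** (2 * k) * comb(N - k - 1, k) for k in range(N // 2))
psi = sum(xi ** k * np.sqrt(comb(N - k - 1, k) / Znorm) * scar_state(N, k) for k in range(N // 2))

H0 = build_H0(N, lam, Delta, J)
T = np.pi / (Delta + 2 * J)                             # predicted revival period
psi_T = expm(-1j * H0 * T) @ psi
print(abs(np.vdot(psi, psi_T)))                         # ~1.0: |xi> revives up to a global phase
```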
The state \(\ket{\xi}\) has connections to several interesting problems in condensed matter and atomic physics. For instance, it is unitarily equivalent to a state found in Ref. [56] to be a good approximation to the ground state of a system of Rydberg atoms in the so-called Rydberg blockade regime [57; 58], where the computational basis states \(\ket{0}_{i}\) and \(\ket{1}_{i}\) correspond to atom \(i\) being in its ground state or a highly excited Rydberg state, respectively. Due to strong interactions, such systems are subjected to an energetic penalty for having two excited atoms next to one another. This constraint, sometimes called the "Fibonacci constraint" because the number of states of \(m\) qubits that satisfy it is given by \(F_{m+2}\) (where \(F_{\ell}\) is the \(\ell\)-th Fibonacci number), is implemented by the projector \(\mathcal{P}_{\text{fib}}\) in Eq. (5). Intriguingly, this constraint also emerges in theoretical descriptions of the \(\nu=1/3\) Laughlin fractional quantum Hall (FQH) state in a particular quasi-1D limit [59; 60; 61; 62]. In fact, the ground state of the system in this limit is unitarily equivalent to \(\ket{\xi}\) for a particular choice of the parameter \(\xi\)[59]. Thus, our state preparation results for \(\ket{\xi}\) will also be applicable to the seemingly disparate settings of Rydberg-atom quantum simulators and FQH liquids.
Studying the stability of scarred eigenstates and dynamics on quantum computers requires algorithms for high-fidelity preparation of the states \(\ket{\mathcal{S}_{k}}\) and \(\ket{\xi}\), respectively. This paper provides a survey of approaches to both state preparation tasks and a snapshot of their feasibility with current quantum hardware. We first address the preparation of the superposition state \(\ket{\xi}\) in Sec. III, before moving onto the preparation of the scar states \(\ket{\mathcal{S}_{k}}\) in Sec. IV.
Before proceeding, we note that the alternating signs \((-1)^{i}\) appearing in Eqs. (3) and (5) can be removed by the simple unitary circuit \(\prod_{i\text{ odd}}Z_{i}\). Therefore, it is
useful to define the states
\[\ket{\tilde{\xi}}=\left(\prod_{i\text{ odd}}Z_{i}\right)\ket{\xi}\] (9a) and \[\ket{\tilde{\mathcal{S}}_{k}}=\left(\prod_{i\text{ odd}}Z_{i}\right)\ket{ \mathcal{S}_{k}}, \tag{9b}\]
which are now equal-amplitude and equal-sign superpositions of computational basis states. We will at times find it more convenient to work with these "tilde" states than with the original states.
## III Preparing the state \(\ket{\xi}\)
There are several possible approaches to preparing the superposition state \(\ket{\xi}\) from Eq. (5). In Ref. [27] it was shown that \(\ket{\xi}\) is the unique ground state of a local parent Hamiltonian with finite correlation length (see also Ref. [56]). Thus one approach is to prepare \(\ket{\xi}\) adiabatically using an appropriate parameter sweep from a zero-correlation-length paramagnetic Hamiltonian. We investigate this approach in App. A, where we find evidence that the gap of the interpolating Hamiltonian does not close with increasing \(N\) and the state \(\ket{\xi}\) is a suitable candidate for adiabatic state preparation. In practice, adiabatic state preparation on a digital quantum computer suffers from Trotter error, even for a fully gapped interpolation Hamiltonian and assuming perfect implementation of the Trotter circuit. For this reason a finite depth circuit always incurs finite error. This motivates the consideration of alternative state preparation strategies.
In Sec. III.1, we demonstrate that perfect state preparation can be achieved with a circuit of depth \(O(N)\). Sec. III.2 shows that a stochastic strategy [63; 64; 65; 66; 67; 68] using measurements and post-selection can reduce the circuit depth to a constant at the price of an exponential post-selection overhead. Finally, in Sec. III.3, we benchmark both state preparation strategies on Rigetti QPUs.
### Linear-Depth Unitary Circuit
To facilitate our discussion, we rewrite the superposition state \(\ket{\xi}\) with open-boundary conditions on \(N\) qubits [Eq. (5)] as
\[\ket{\xi}=\ket{0}\otimes\ket{\xi;N-2}\otimes\ket{0}. \tag{10}\]
In this section, we show that the state \(\ket{\xi;m}\) as well as its counterpart \(\ket{\tilde{\xi};m}\) with alternating phases removed per Eqs. (9) can be prepared in linear depth \(O(m)\) by a unitary circuit \(\mathfrak{U}_{\xi}(m)\) consisting of \((m-1)\) controlled \(Y\)-rotation gates and one \(Y\)-rotation gate. We note that a similar linear depth circuit was obtained in the FQH context in Ref. [61], but that the nonunitary approach explored in Sec. III.2 has not been discussed in this context.
To understand this preparation circuit, we recall that \(\ket{\xi}\) or \(\ket{\xi;m}\) is the superposition of all computational basis states excluding any local configuration \(\ket{1}_{j-1}\ket{1}_{j}\) with the weights of each state determined by the parameter \(\xi\). Starting with all qubits in the \(\ket{0}\) state, the absence of \(\ket{1}_{j-1}\ket{1}_{j}\) pairs is guaranteed under a sequence of controlled \(Y\)-rotations \(\text{C}_{0}\text{R}_{Y,j-1,j}(\theta_{j})\) by an angle \(\theta_{j}\) targeted on the \(j\)-th qubit that is triggered only by \(\ket{0}\) of the \((j-1)\)-th qubit. For a set of properly chosen rotation angles \(\theta_{j}\), \(\ket{\xi;m}\) can be prepared by this preparation circuit:
\[\ket{\xi;m} =\mathfrak{U}_{\xi}(m)\ket{\Omega}, \tag{11}\] \[\mathfrak{U}_{\xi}(m) =\left[\prod_{j=2}^{m}\text{C}_{0}\text{R}_{Y,j-1,j}(\theta_{j}) \right]\text{R}_{Y,1}(\theta_{1}).\]
The corresponding circuit diagram of the preparation gate \(\mathfrak{U}_{\xi}(m)\) is shown in Fig. 1. The angles are determined as a function of \(\xi\) according to the following recursion relation:
\[\begin{split}\theta_{j}=& 2\arg\left[1+i\frac{(-1)^{j+1}\xi}{ \phi_{j+1}}\right],\\ \phi_{j}=&\sqrt{1+\left|\frac{(-1)^{j+1}\xi}{\phi _{j+1}}\right|^{2}}\text{ \ \ and \ \ }\phi_{m+1}=1.\end{split} \tag{12}\]
The preparation circuit \(\mathfrak{U}_{\xi}(m)\) for the state \(\ket{\tilde{\xi};m}\) is obtained by simply removing the alternating phase factors \((-1)^{j+1}\) from the above definition. We will prove this recursion relation for the rotation angles and the preparation circuit in the following.
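A small statevector sketch of this construction is given below for the phase-free state \(\ket{\tilde{\xi};m}\) with real \(\xi>0\): it evaluates the angle recursion of Eq. (12) with the alternating signs removed, applies the circuit of Fig. 1 using the convention \(\text{R}_{Y}(\theta)\ket{0}=\cos(\theta/2)\ket{0}+\sin(\theta/2)\ket{1}\), and compares the result with the directly constructed target superposition.

```python
import numpy as np
from itertools import product

def angles(xi, m):
    """Rotation angles of the recursion in Eq. (12), with the alternating signs removed
    (i.e. x_j = xi), for real xi > 0."""
    phi = np.ones(m + 2)                                 # phi[m + 1] = 1
    theta = np.zeros(m + 1)
    for j in range(m, 0, -1):
        theta[j] = 2 * np.arctan(xi / phi[j + 1])
        phi[j] = np.sqrt(1 + (xi / phi[j + 1]) ** 2)
    return theta[1:]                                     # theta_1, ..., theta_m

def prepare(xi, m):
    """Statevector simulation of the circuit of Fig. 1 acting on |0...0>."""
    th = angles(xi, m)
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])
    psi = np.zeros((2,) * m)
    psi[(0,) * m] = 1.0
    psi = np.tensordot(ry(th[0]), psi, axes=([1], [0]))  # R_Y(theta_1) on qubit 1
    for j in range(2, m + 1):                            # C0-R_Y(theta_j), control j-1, target j
        psi = np.moveaxis(psi, [j - 2, j - 1], [0, 1])
        psi[0] = np.tensordot(ry(th[j - 1]), psi[0], axes=([1], [0]))
        psi = np.moveaxis(psi, [0, 1], [j - 2, j - 1])
    return psi.reshape(-1)

def target(xi, m):
    """Direct construction of |xi~; m>: weight xi^(#1s) on every bitstring with no adjacent 1s."""
    amp = np.zeros(2 ** m)
    for bits in product((0, 1), repeat=m):
        if all(not (a and b) for a, b in zip(bits, bits[1:])):
            amp[int("".join(map(str, bits)), 2)] = xi ** sum(bits)
    return amp / np.linalg.norm(amp)

if __name__ == "__main__":
    xi, m = 1.0, 8
    print(abs(np.dot(prepare(xi, m), target(xi, m))))    # ~1.0
```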
An alternative definition of \(\ket{\xi}\) is given in Ref. [27] as
\[\ket{\xi}=\frac{1}{Z}\prod_{j=2}^{N-1}\left[1+(-1)^{j}\xi P_{j-1}\sigma_{j}^{+ }P_{j+1}\right]\ket{\Omega}, \tag{13}\]
Figure 1: Linear-depth circuit showing the preparation gate \(\mathfrak{U}_{\xi}(m)\) to prepare \(\ket{\xi;m}\) from the zero state \(\ket{\Omega}\). This circuit consists of \((m-1)\) control-rotation gates and one rotation gate. The rotation angles \(\theta_{j}\) are given in Eq. (12).
where \(Z\) is the normalization factor in Eq. (7). Without loss of generality, we choose the convention where \(j=2\) is the rightmost term in the above product, i.e., \(\prod_{j=2}^{N-1}V_{j}|\Omega\rangle=V_{N-1}V_{N-2}\cdots V_{2}|\Omega\rangle\). In this convention, the \((j+1)\)-th qubit is always in its \(|0\rangle\) state when applying the \(j\)-th term, and hence the projector \(P_{j+1}\) can be omitted in the above expression. This means that
\[|\xi\rangle=\frac{1}{Z}\prod_{j=2}^{N-1}\left[1+(-1)^{j}\xi P_{j-1}\sigma_{j}^{ +}\right]|\Omega\rangle\,. \tag{10}\]
With the definition of \(|\xi;m=N-2\rangle\) [Eq. (11)], we can get rid of the two qubits in the \(|0\rangle\) state at both ends to write
\[|\xi;m\rangle=\frac{1}{Z}\prod_{j=1}^{m}\left[1+(-1)^{j+1}\xi P_{j-1}\sigma_{j }^{+}\right]|\Omega\rangle\,, \tag{11}\]
where we fix \(P_{0}\equiv 1\) to be consistent with open boundary conditions. Note that in this equation, the \(j\)-th qubit of \(|\xi;m\rangle\) corresponds to the \((j+1)\)-th qubit of \(|\xi\rangle\).
To facilitate the derivation, we recall the projector \(P_{j}^{\prime}=\left|1\right\rangle_{j}\left\langle 1\right|_{j}=(1-Z_{j})/2\) and introduce \(x_{j}=(-1)^{j+1}\xi\) to rewrite the equation as
\[|\xi;m\rangle=\frac{1}{Z}\prod_{j=1}^{m}\left[P_{j-1}^{\prime}\left(1\right)+ P_{j-1}\left(1+x_{j}\sigma_{j}^{+}\right)\right]|\Omega\rangle\,, \tag{12}\]
where \(P_{0}^{\prime}=0\) for open boundary conditions. Note that to prepare the state \(|\tilde{\xi};m\rangle\), one can simply pick a different definition of \(x_{j}\) to be \(x_{j}=\xi\).
The term in square brackets in the above equation is in the form of a controlled-rotation gate except that \((1+x_{j}\sigma_{j}^{+})\) is not a unitary operator. To make it unitary, we first introduce a set of real constants \(\phi_{j}\) whose values will be determined later in this subsection. We can divide each term in the product by \(\phi_{j}\) to get
\[|\xi;m\rangle=\frac{1}{Z^{\prime}}\prod_{j=1}^{m}\left[P_{j-1}^{\prime}\left( \frac{1}{\phi_{j}}\right)+P_{j-1}\frac{1}{\phi_{j}}\left(1+x_{j}\sigma_{j}^{+ }\right)\right]|\Omega\rangle\,,\]
where \(Z^{\prime}=Z/\prod_{j=1}^{m}\phi_{j}\). We recognize that the local state \(|1\rangle\) is only generated by the raising operator \(\sigma_{j}^{+}\). Hence, the factor \(1/\phi_{j}\) associated with \(P_{j-1}^{\prime}\) can be moved to be associated with \(\sigma_{j-1}^{+}\) in the previous product term. We can then write
\[|\xi;m\rangle=\frac{1}{Z^{\prime}}\prod_{j=1}^{m}\left[P_{j-1}^{\prime}+P_{j- 1}\frac{1}{\phi_{j}}\left(1+\frac{x_{j}}{\phi_{j+1}}\sigma_{j}^{+}\right) \right]|\Omega\rangle\,,\]
where \(\phi_{m+1}=1\). Since \(\sigma_{j}^{+}\) only acts on \(|0\rangle\) in this equation, we have \(\sigma_{j}^{+}|0\rangle=i\sigma_{j}^{y}|0\rangle\) where \(\sigma_{j}^{y}=-i\left|0\right\rangle_{j}\left\langle 1\right|_{j}+i\left|1 \right\rangle_{j}\left\langle 0\right|_{j}\). This allows us to write
\[|\xi;m\rangle=\frac{1}{Z^{\prime}}\prod_{j=1}^{m}\left[P_{j-1}^{\prime}+P_{j- 1}\frac{1}{\phi_{j}}\left(1+i\frac{x_{j}}{\phi_{j+1}}\sigma_{j}^{y}\right) \right]|\Omega\rangle\,. \tag{13}\]
If we pick
\[\phi_{j}=\sqrt{1+\left|\frac{x_{j}}{\phi_{j+1}}\right|^{2}}\quad\text{and} \quad\phi_{m+1}=1, \tag{14}\]
the operator associated with \(P_{j-1}\) is unitary such that
\[\frac{1}{\phi_{j}}\left(1+i\frac{x_{j}}{\phi_{j+1}}\sigma_{j}^{y}\right)=\cos \frac{\theta_{j}}{2}+i\sin\frac{\theta_{j}}{2}\sigma_{j}^{y}=\text{R}_{Y,j}( \theta_{j}),\]
where \(\text{R}_{Y,j}\) is the \(Y\)-rotation gate on the \(j\)-th qubit and \(\theta_{j}=2\arg(1+ix_{j}/\phi_{j+1})\) is the rotation angle. Rewriting Eq. (13) in terms of \(Y\)-rotation gates, we have
\[|\xi;m\rangle=\frac{1}{Z^{\prime}}\prod_{j=1}^{m}\left[P_{j-1}^{\prime}+P_{j- 1}\text{R}_{Y,j}(\theta_{j})\right]|\Omega\rangle\,. \tag{15}\]
Since all terms are unitary, \(\frac{1}{Z^{\prime}}\) is just a global phase factor not relevant for the state preparation and hence can be dropped. Note also that \(P_{0}^{\prime}=0\) and \(P_{0}=1\) and hence the \(j=1\) term is just a \(Y\)-rotation gate. Further rewriting the above equation using the controlled-\(Y\) rotation gates, we arrive at Eq. (10) with \(\theta_{j}\) given by Eq. (11).
### Probabilistic Constant-Depth Circuit with Postselection
While the linear-depth circuit explored in the previous section is sufficient for state preparation, it may be challenging to implement on near-term devices for large \(N\) owing to the linear circuit depth. Here, we show that it is possible to prepare the equal-amplitude superposition state \(|\tilde{\xi};N-2\rangle\) stochastically in constant depth using measurements and postselection. The idea is to prepare \(k\)\(m\)-site blocks in the state \(|\tilde{\xi};m\rangle\), which can be achieved in depth \(O(m)\) using the circuit \(\mathfrak{U}_{\xi}(m)\) [Eq. (10) with alternating phases removed in the recursive formula Eq. (11)]. The resulting state \(|\tilde{\xi};m\rangle^{\otimes k}\) obeys the Fibonacci constraint enforced by the projector \(\mathcal{P}_{\text{fib}}\) [Eq. (6)] within each \(m\)-site block, but adjacent blocks need not obey the constraint. To "stitch" the adjacent blocks together into a state that globally satisfies the constraint, adjacent \(m\)-site blocks are each coupled to an ancilla qubit using an appropriate unitary operation. Measuring the ancilla qubit and postselecting onto an appropriate measurement outcome prepares the desired state \(|\tilde{\xi};km\rangle\), which can then be trivially converted to \(|\xi;km\rangle\) using Eq. (9).
To illustrate, let us set \(\xi=1\) for simplicity (this is not strictly necessary but does simplify the analytical expressions for the states and success probabilities). As an example, consider stitching together two copies of the two-qubit state \(|1;2\rangle\). The two-qubit state is given by
\[|\tilde{1};2\rangle=\frac{1}{\sqrt{3}}\Big{(}\left|00\right\rangle+\left|10 \right\rangle+\left|01\right\rangle\Big{)}, \tag{16}\]
and the state we aim to prepare is
\[\begin{split}|\tilde{1};4\rangle=\frac{1}{\sqrt{8}}(|0000\rangle+|000 1\rangle+|0010\rangle+|0100\rangle\\ +|1000\rangle+|1010\rangle+|0101\rangle+|1001\rangle).\end{split} \tag{3.12}\]
We first initialize the system in \(|\Omega\rangle=|0000\rangle\) and apply the circuit \(\mathfrak{U}_{1}(2)\) to the two consecutive two-qubit blocks to prepare the state
\[\begin{split}|\tilde{1};2\rangle\otimes|\tilde{1};2\rangle=& \frac{1}{3}\Big{(}\left|00\right\rangle+|10\rangle+|01\rangle \Big{)}\otimes\\ &\Big{(}\left|00\right\rangle+|10\rangle+|01\rangle\Big{)}.\end{split} \tag{3.13}\]
There is exactly one configuration in the above superposition that would be projected away by \(\mathcal{P}_{\rm fib}\), namely \(|0110\rangle\). Therefore if we apply a Toffoli (CCNOT) gate controlled on qubits \(2\) and \(3\) and targeted on an ancilla qubit initialized in the state \(\left|0\right\rangle_{a}\) to the above state, we obtain
\[\frac{\sqrt{8}}{3}|\tilde{1};4\rangle\left|0\right\rangle_{a}-\frac{1}{3}\left| 0110\right\rangle\left|1\right\rangle_{a}. \tag{3.14}\]
Measuring the ancilla qubit in the computational basis, we obtain the desired state \(|\tilde{1};4\rangle\) whenever the measurement outcome is \(0\), which occurs with probability \(8/9\).
The same procedure can be generalized to stitch together \(k\) copies of \(|\tilde{1};2\rangle\) into the state \(|\tilde{1};2k\rangle\) using \(k-1\) ancilla qubits initialized in the \(|0\rangle\) state. Each ancilla qubit is coupled to the array of \(2k\) primary qubits using a Toffoli gate controlled by the neighboring qubits from consecutive two-site blocks. In this way, the state of each ancilla qubit records whether the Fibonacci constraint is violated for the pair of primary qubits to which it is coupled. The probability that the Fibonacci constraint is satisfied (such that the ancilla register is in the state \(\left|0\ldots 0\right\rangle_{a}\)) is given in terms of Fibonacci numbers \(F_{\ell}\) by \(F_{2k+2}/F_{4}^{k}\), which decays exponentially with \(k\).
The above success probability can be improved by using the same strategy to stitch together larger blocks. For example, suppose we wish to prepare the state \(|\tilde{1};2m\rangle\) by stitching together two \(m\)-site blocks prepared in the state \(|\tilde{1};m\rangle\) using the \(O(m)\)-depth circuit \(\mathfrak{U}_{1}(m)\). The stitching can again be achieved by coupling the two neighboring qubits from the two blocks to an ancilla qubit using a Toffoli gate. An illustration of this procedure for general \(\xi\) is shown in Fig. 3. The success probability can be obtained by noting that there are \(F_{2m+2}\) states of the full \(2m\)-qubit system that satisfy the Fibonacci constraint (for which a measurement of the ancilla qubit would yield \(0\)), while the initial tensor product state contains \(F_{m+2}^{2}\) configurations; the success probability is then \(F_{2m+2}/F_{m+2}^{2}\).
Applying the same logic to \(k\) blocks of \(m\) sites yields a postselection success probability
\[p_{\rm success}(m,k)=F_{km+2}/F_{m+2}^{k}\,, \tag{3.15}\]
which is plotted for various \(m\) against \(N=km\) in Fig. 2. Although this expression still decays exponentially with \(k\) for fixed \(m\), it grows with \(m\) at fixed \(k\). Thus, the exponential sampling overhead can be mitigated by increasing \(m\) at the price of increasing the depth of the state preparation circuit. Fig. 2 shows the success probability for a lattice of length \(N\) using blocks of length \(m\). On present-day NISQ devices, gate error and qubit decoherence rates are sufficiently high that the reduction of circuit depth at the expense of postselection may be an acceptable tradeoff. We analyze this tradeoff further in Sec. III.3 when we implement this state preparation protocol on quantum hardware.
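For reference, the success probability of Eq. (3.15) is straightforward to tabulate; the short sketch below also reproduces the \(8/9\) value quoted above for two \(m=2\) blocks.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci numbers with the convention F_1 = F_2 = 1."""
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

def p_success(m, k):
    """Eq. (3.15): probability that all ancilla measurements return 0 for k blocks of m sites."""
    return fib(k * m + 2) / fib(m + 2) ** k

print(p_success(2, 2))                                   # 8/9, the two-block m = 2 example above
for m in (2, 4, 6, 8):
    print(m, [round(p_success(m, k), 3) for k in range(1, 6)])
```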
Figure 2: Success probability of the finite-depth postselection-based protocol for \(N=km\) qubits and initial block size \(m\). The inset shows the same data with a logarithmic scale on the \(y\)-axis, indicating that the success probability decays exponentially with \(N\) with a base set by \(m\); increasing \(m\) increases the success probability by bringing the base closer to \(1\).
Figure 3: Example circuit for stitching two copies of \(|\tilde{\xi};m\rangle\) together into \(|\tilde{\xi};2m\rangle\) using a single Toffoli gate targeted onto an ancilla, which is then measured and postselected onto the \(|0\rangle\) state. The circuit \(\mathfrak{U}_{\xi}(m)\) is simply Eq. (3.2) with the alternating signs removed from the angles in Eq. (3.3).
### QPU Results
We now implement the \(|\xi\rangle\) state preparation protocols on Rigetti's Aspen QPUs and discuss the execution results of the linear-depth circuit in Sec. III.1 and its probabilistic post-selection variant in Sec. III.2. We set \(\xi=1\) for numerical evaluations, in which case the Pauli rotation angles in Fig. 1 can be summarized as
\[\theta_{m-i+1}=2\tan^{-1}(\sqrt{F_{i+1}/F_{i}}) \tag{21}\]
with Fibonacci coefficients \(F_{i}=(1,1,2,3,5,8,\cdots)_{i}\). The quilc compiler generates an optimized program for the instruction set architecture of QPU chips, compiling all logical gates into the following group of native gates:
\[\mathrm{RZ}(\theta),\mathrm{RX}(\pi/2),\mathrm{RX}(\pi),\mathrm{CPHASE}( \theta),\mathrm{CZ},\mathrm{XY}(\theta).\]
Results obtained from the NISQ hardware are typically prone to multiple sources of error, leading to deviations from ideal unitary calculations. Here we quantify these errors by estimating two figures of merit: first, the Bhattacharyya distance
\[D(p,q)=-\ln\sum_{x}\sqrt{p(x)q(x)} \tag{22}\]
that captures the divergence between the exact and the observed distributions, \(p(x)\) and \(q(x)\), of the measured bitstrings \(x\). Second, we use the expectation value of the parent Hamiltonian of the state \(|\xi\rangle\), namely [27]
\[H_{\xi}=\sum_{i=2}^{N-1}P_{i-1}\left[\xi^{-1}P_{i}^{\prime}+\xi P_{i}-(-1)^{i }X_{i}\right]P_{i+1}, \tag{23}\]
that vanishes \(\langle H_{\xi}\rangle=0\) in the case of ideal noiseless preparation of the state \(|\xi\rangle\)[69]. We collect 10 observed samples of these quantities, along with the success probability of the post-selection protocol, for a range of different system sizes \(N\) and block sizes \(m\). Each sample measurement is made with \(10^{4}\) shots for the energy and the success probability, and \(10^{5}\) shots for the Bhattacharyya distance. The individual samples are depicted as round dots in Fig. 5 and 6, and their average values are connected with solid lines. See the captions of Fig. 5 and 6 for the specification of used device nodes.
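A sketch of how the Bhattacharyya distance of Eq. (22) can be estimated from measured counts is shown below; the `counts` dictionary and the ideal probabilities are hypothetical illustrative numbers (for a four-qubit block, the ideal support consists of the eight Fibonacci strings), not data from the experiments reported here.

```python
import numpy as np

def bhattacharyya(p_exact, counts):
    """D(p, q) = -ln sum_x sqrt(p(x) q(x)), with q(x) estimated from measured counts."""
    shots = sum(counts.values())
    bc = sum(np.sqrt(p_exact.get(x, 0.0) * n / shots) for x, n in counts.items())
    return -np.log(bc)

# Hypothetical illustration for a four-qubit block: the ideal support is the eight
# Fibonacci strings with probability 1/8 each; the counts include a small
# constraint-violating admixture ("0110") caused by noise.
p_exact = {"0000": 1/8, "0001": 1/8, "0010": 1/8, "0100": 1/8,
           "1000": 1/8, "1010": 1/8, "0101": 1/8, "1001": 1/8}
counts = {"0000": 1290, "0001": 1180, "0010": 1260, "0100": 1235,
          "1000": 1270, "1010": 1255, "0101": 1215, "1001": 1245, "0110": 50}
print(bhattacharyya(p_exact, counts))
```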
When the qubits are measured immediately after state preparation, without performing further unitary operations for, e.g., time evolution, the two-step process of applying Toffoli gates and post-selecting on ancilla bits can be reduced to the classical post-selection of \(N\)-qubit output bitstrings. This replacement reduces the room for error, since Toffoli (CCNOT) is a non-native gate that must be compiled into multiple imperfect two-qubit gates (e.g., CPHASE, CZ, XY) before running on QPUs [70]. It also "unstitches" the whole circuit back into \(\frac{N-2}{m}\) decoupled blocks of \(\mathfrak{U}_{\xi}(m)\), which introduces certain computational advantages. For example, the asynchronous execution of fragmented circuits allows us to simulate systems larger than the hardware size, or to reduce the overall error by avoiding the usage of low-fidelity qubits, at an exponential overhead of classical post-processing [71; 72]. With this in mind, we repeatedly use the same \(m\)-qubit sublattice that exhibits the best average readout and gate fidelities, and then classically combine the measurement outcomes with the Fibonacci constraint to simulate \(N=\alpha m+2\) qubit observables (\(\alpha\in\mathbb{N}\)).
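The classical recombination step can be sketched as follows: one measured \(m\)-bit outcome is drawn per block, the concatenation is kept only if no \(11\) straddles a block boundary, and the two fixed boundary qubits are appended; the block samples below are hypothetical stand-ins for measured bitstrings.

```python
import random

def combine_blocks(block_samples, alpha, rng=random):
    """Draw one measured m-bit outcome per block, keep the concatenation only if no '11'
    straddles a block boundary, and pad with the two fixed |0> boundary qubits."""
    blocks = [rng.choice(block_samples) for _ in range(alpha)]
    for left, right in zip(blocks, blocks[1:]):
        if left[-1] == "1" and right[0] == "1":
            return None                                  # post-selection fails
    return "0" + "".join(blocks) + "0"                   # an (alpha*m + 2)-qubit bitstring

# Hypothetical m = 2 outcomes, uniform over the ideal support of |1~;2>:
block_samples = ["00", "01", "10"]
kept = [s for s in (combine_blocks(block_samples, alpha=3) for _ in range(10000)) if s]
print(len(kept) / 10000)                                 # ~ F_8 / F_4^3 = 21/27 for xi = 1
```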
Fig. 5 displays the results of the \(N=14\) state preparation experiment with different block sizes \(m\), to which the circuit \(\mathfrak{U}_{\xi}(m)\) is applied. We first discuss the raw values without error mitigation, shown as green dots. As \(m\) increases, both the energy expectation value \(\langle H_{\xi}\rangle\) and the Bhattacharyya distance deviate farther from their ideal values of \(0\). At the same time, the success probability grows with increasing \(m\). These manifest the trade-off between the error and the post-selection success probability: the use of a greater number of smaller circuit fragments facilitates higher-fidelity execution of the circuit, while demanding an increased sample complexity. Note that our experimental protocol is a special case of quantum divide-and-conquer [71; 72], where each block does not depend on another block's measurement result.
Fig. 6 summarizes the output of the \(4\leq N\leq 14\) state preparation circuit that merges \(\frac{N-2}{2}\) copies of \(\mathfrak{U}_{\xi}(m=2)\). As the system size \(N\) increases, the Bhattacharyya distance stays close to \(0\). While the energy deviation shows
Figure 4: Aspen M-2 and M-3 device layout graphs, with each node representing a superconducting qubit and each edge indicating connectivity via two-qubit gates. Circled nodes denote qubits used in the QPU experiments in Sec. III.3 and IV.3.
an apparent \(O(N)\) scaling, its values are considerably less than those with larger \(m\) building blocks. We also illustrate in Fig. 7 the state tomography of \(|\xi=1\rangle\langle\xi=1|\) in the computational basis. In particular, its right panel highlights the sparsity of \(|\xi\rangle\) states at \(N=14\).
Finally, we attempt to improve the precision of experimental results by adopting error mitigation techniques, which produce the blue and orange dots in Fig. 5 and 6. To handle the measurement error, we apply the Bayesian unfolding [73] that uses the confusion matrix of 8-bit strings associated with Aspen-M's octagonal layout. The samples obtained from "corrected" bitstrings are colored in blue.
They amplify the error measured in the Bhattacharyya distance and underestimate \(\langle H_{\xi}\rangle\) when compared to the raw samples.
We note that, for the \(\mathfrak{U}_{\xi}(m=2)\) block calculation, most blue samples have negative energy that violates the positive semi-definiteness of the Hamiltonian Eq. (3.18). The presence of the spurious samples is apparently an artifact of the Bayesian unfolding. It likely occurs due to a phase error since the sample Bhattacharyya distances are close to 0. When using larger circuit blocks \(\mathfrak{U}_{\xi}(m>2)\), however, the Bayesian unfolding, combined with randomized compilation [74] and readout symmetrization [75], helps to lower the estimated energy \(\langle H_{\xi}\rangle\) while maintaining the Bhattacharyya distance near the same level. An individual orange dot represents the average measurement results from 30 logically equivalent circuits varied by random twirling gates between each cycle and random Pauli operators inserted at the circuit end along with classical post-processing that reverts the induced basis changes.
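For reference, a schematic form of the iterative Bayesian unfolding update used for readout-error mitigation is sketched below. The response matrix and distributions are toy placeholders, and the production procedure of Ref. [73] (confusion matrices over 8-bit strings, priors, stopping criteria) may differ in detail.

```python
import numpy as np

def iterative_bayesian_unfolding(q, R, n_iter=25):
    """Estimate the true distribution p from the observed distribution q, given the
    response (confusion) matrix R[i, j] = P(measure i | prepared j)."""
    p = np.full(R.shape[1], 1.0 / R.shape[1])        # uniform prior
    for _ in range(n_iter):
        predicted = R @ p                            # what we would observe if p were true
        safe = np.where(predicted > 0, predicted, 1.0)
        p = p * (R.T @ (q / safe))                   # Bayes-rule reweighting of each outcome
    return p / p.sum()

# Toy single-qubit readout with 2% / 4% flip probabilities:
R = np.array([[0.98, 0.04],
              [0.02, 0.96]])
q = R @ np.array([0.7, 0.3])                         # observed distribution for true p = (0.7, 0.3)
print(iterative_bayesian_unfolding(q, R))            # recovers approximately (0.7, 0.3)
```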
## IV Preparing the states \(|\mathcal{S}_{k}\rangle\)
Since generic states \(|\mathcal{S}_{k}\rangle\) in this tower have entanglement scaling logarithmically with system size (see Sec. II), they cannot be prepared with quantum circuits of constant depth. In this section, we demonstrate two (quasi-)polynomial-depth algorithms for preparing these states. The first, described in Sec. IV.1, relies on building an MPS representation of the states \(|\mathcal{S}_{k}\rangle\) and then converting this representation to a quasipolynomial depth quantum circuit. The second, described in Sec. IV.2, is a variational strategy that uses a polynomial-depth variational ansatz circuit to represent the states. Finally, we provide a proof-of-principle realization of these states on quantum hardware in Sec. IV.3, where we also write down a simplified linear-depth circuit for preparing the highest-weight state in the tower, \(|\mathcal{S}_{N/2-1}\rangle\).
### Quasipolynomial Depth Circuit from Matrix Product State Representation
To facilitate the discussion below, we rewrite the tower of scarred eigenstates for \(N\) sites as follows [see Eqs. (2),
Figure 5: The QPU experiment results of the \(|\xi=1\rangle\) state preparation circuit on Aspen-M devices, which uses the probabilistic post-selection protocol that combines multiple \(m\)-qubit blocks \(\mathfrak{U}_{\xi}(m)\) to build the \(N=14\) state. The indices of the participating Aspen qubits are: \(\{105,104\}\) (M-3) for \(m=2\), \(\{106,107,100,101\}\) (M-3) for \(m=4\), \(\{16,17,10,11,26,27\}\) (M-2) for \(m=6\). The EXACT curve represents expected values under noiseless execution. The RAW samples are obtained through the post-selection of measurement bitstrings. The bitstrings for the REM samples undergo correction via readout error mitigation, i.e., iterative Bayesian unfolding, prior to the post-selection. The REM+SYMM+RC samples are averages of error-corrected and post-selected measurement outcomes, taken over 30 logically equivalent circuits with Pauli twirling and readout symmetrization. Results in the middle and right panels are obtained with \(10^{4}\) shots per sample and in the left panel with \(10^{5}\) shots per sample. Specifically, an individual RC circuit contributes approximately \(\frac{10^{4}}{30}\) or \(\frac{10^{5}}{30}\) shots to the collected REM+SYMM+RC samples. We gather 10 sample data points for each quantity, varying the value of \(m\). Error bars denote error of the mean over the different samples. (a) Bhattacharyya distance, which is a measure of the difference between the ideal and measured bitstring probabilities versus block size \(m\). (b) Energy versus \(m\), which ideally vanishes. (c) Observed postselection success probability versus \(m\).
(2.3), and (2.9)]:
\[\ket{\tilde{\mathcal{S}}_{k}}=\ket{0}\otimes\ket{\mathcal{D}_{k}^{N-2}}\otimes \ket{0}, \tag{4.1}\]
where \(\ket{\mathcal{D}_{k}^{N-2}}\) is an equal-amplitude superposition of all bitstrings of length \(N-2\) with Hamming weight \(k\) (i.e., containing \(k\) 1s) that obey the Fibonacci constraint. The state \(\ket{\mathcal{D}_{k}^{m}}\propto\mathcal{P}_{\mathrm{fb}}\ket{D_{k}^{m}}\), where the Dicke state \(\ket{D_{k}^{m}}\) is the equal-amplitude superposition of _all_ length-\(m\) bitstrings with Hamming weight \(k\). While Dicke states can be prepared in depth \(O(mk)\) using the recursive strategy proposed in Ref. [76], we have not found an analogous strategy to prepare the projected Dicke states \(\ket{\mathcal{D}_{k}^{m}}\) (see Sec. V for further discussion on this point). We therefore resort to an alternative approach that prepares the desired states in depth \(O(mk\ln^{2}k)\). For generic scarred eigenstates \(\ket{\mathcal{S}_{k}}\), which are written in terms of \(\ket{\mathcal{D}_{k}^{m}}\) with \(m=N-2\) and \(k=O(N)\), this translates to a circuit with quasipolynomial depth \(O(N^{2}\ln^{2}N)\). However, for states in the tails of the tower (i.e., ones for which \(k\) is finite), the resulting circuits are of linear depth.
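A brute-force construction of the projected Dicke states for small registers makes the definition concrete. This is an illustrative NumPy sketch (dense state vectors, so only practical for small \(m\)), not one of the preparation circuits discussed here.

```python
from itertools import combinations
import numpy as np

def projected_dicke(m, k):
    """Equal-amplitude superposition over length-m bitstrings with k ones and no two
    adjacent ones (the state |D_k^m>), returned as a dense 2^m state vector."""
    vec = np.zeros(2 ** m)
    for ones in combinations(range(m), k):
        if all(b - a > 1 for a, b in zip(ones, ones[1:])):    # Fibonacci constraint
            vec[sum(1 << (m - 1 - p) for p in ones)] = 1.0    # leftmost bit = first qubit
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

state = projected_dicke(4, 2)
print(np.nonzero(state)[0])    # [5, 9, 10], i.e. 0101, 1001, 1010, matching Eq. (4.3)
```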
Standard algorithms exist for converting an MPS with \(m\) sites and bond dimension \(\chi\) into \(m\) unitaries of width
Figure 6: The \(\ket{\xi=1}\) state preparation experiment on Aspen M-3 that combines \(\mathfrak{U}_{\xi}(m=2)\) to construct the \(4\leq N\leq 14\) states. The collected sample data points are obtained with the \(\{105,104\}\) device qubits. The EXACT curve represents expected values under noiseless execution. The RAW samples are obtained through the post-selection of measurement bitstrings. The bitstrings for the REM samples undergo correction via readout error mitigation, i.e., iterative Bayesian unfolding, prior to the post-selection. The REM+SYMM+RC samples are averages of error-corrected and post-selected measurement outcomes, taken over 30 logically equivalent circuits with Pauli twirling and readout symmetrization. Results in the middle and right panels are obtained with \(10^{4}\) shots per sample and in the left panel with \(10^{5}\) shots per sample. Specifically, an individual RC circuit contributes approximately \(\frac{10^{4}}{30}\) or \(\frac{10^{5}}{30}\) shots to the collected REM+SYMM+RC samples. We gather 10 sample points for each quantity, varying the value of \(N\). Error bars denote error of the mean over the different samples. (a) Bhattacharyya distance, which is a measure of the difference between the ideal and measured bitstring probabilities versus system size \(N\). (b) Energy versus \(N\), which ideally vanishes. (c) Observed postselection success probability versus \(N\).
Figure 7: The state tomography of the \(\ket{\xi=1}\) states for \(N=4\) and 14, constructed from multiple copies of \(\mathfrak{U}_{\xi}(m=2)\) with the classical post-processing. It is computed using \(\{105,104\}\) qubits on Aspen M-3. The \((i,j)\) block of the orange / blue density plots shows \(|\rho_{i,j}|\) / \(\arg(\rho_{i,j})\), respectively, where \(\rho_{i,j}\equiv\langle i|\xi=1\rangle\langle\xi=1|j\rangle\) and the integers \(i,j\) are in the binary representation. The nearly empty density matrix plot in the right panel (\(N=14\)) illustrates the sparsity of the state \(\ket{\xi}\). The inset plots show the ideal density matrices for comparison.
\(\lceil\log_{2}2\chi\rceil\) [77; 78]. We first describe how to represent \(\left|\mathcal{D}_{k}^{m}\right\rangle\) as an MPS, and then convert this MPS to a unitary circuit for state preparation.
One can express a matrix product state on \(m\) sites with open boundary conditions as
\[\ket{\psi}=\sum_{\vec{s}}\bra{L}M^{s_{1}}M^{s_{2}}\ldots M^{s_{m}}\ket{R}\ket{\vec{s}}, \tag{4.2}\]
where \(\ket{\vec{s}}=\bigotimes_{i=1}^{m}\ket{s_{i}}\) are computational basis states labeled by bitstrings, \(M^{s_{i}}\) are square matrices of size \(\chi\), and \(\bra{L}\) and \(\ket{R}\) are respectively \(\chi\)-dimensional row and column vectors implementing open boundary conditions. One can view an MPS as a representation of a deterministic finite automaton (DFA), where the matrices \(M^{s_{i}}\) correspond to a transition matrix \(M\) for the DFA states [79]. Since, for any \(i\), \(M^{s_{i}}\) is a square matrix of size \(\chi\), this DFA will have \(\chi\) states, and \(M^{s_{i}}_{j,k}\) will be nonzero if the character \(s_{i}\) takes the state \(j\) to the state \(k\). The bra \(\bra{L}\) and ket \(\ket{R}\) denote the initial and final states of the DFA, respectively. It is important to note that the states in the DFA live in the auxiliary bond space of the MPS and are not quantum states themselves.
DFAs are used to represent regular expressions, or string expressions using a finite set of characters. To see how we can use this language to represent quantum states, consider \(\left|\mathcal{D}_{2}^{4}\right\rangle\):
\[\left|\mathcal{D}_{2}^{4}\right\rangle=\frac{1}{\sqrt{3}}\left(\left|0101\right\rangle+\left|1001\right\rangle+\left|1010\right\rangle\right) \tag{4.3}\]
The set of computational basis states in the superposition forms a language \(\left\{\left|0101\right\rangle,\left|1001\right\rangle,\left|1010\right\rangle\right\}\) from which a regular expression can be constructed, where the characters in the regular expression are spin states \(\left\{\left|0\right\rangle,\left|1\right\rangle\right\}\). For the states \(\left|\mathcal{D}_{k}^{*}\right\rangle=\sum_{m=0}^{\infty}\left|\mathcal{D}_{k}^{m}\right\rangle\), the corresponding regular expression is
\[\left|0\right\rangle^{*}\left[\left|1\right\rangle\!\left|0\right\rangle\!\left|0\right\rangle^{*}\right]^{k-1}\left|1\right\rangle\!\left|0\right\rangle^{*}, \tag{4.4}\]
where \({}^{*}\) is a Kleene star (\(a^{*}=I+a+aa+aaa+\dots\), where \(I\) is the identity, or empty string) and is only applied to \(\left|0\right\rangle\), and \(\left[\,\cdot\,\right]^{m}\) means to repeat the term inside the bracket \(m\) times.
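The correspondence with ordinary regular expressions can be checked directly with Python's `re` module. The pattern below is a transliteration of the expression above, with the string of spin states written as a classical bitstring; the enumeration is limited to short strings for illustration.

```python
import re
from itertools import product

def fibonacci_regex(k):
    """Regular expression 0* (1 0 0*)^{k-1} 1 0* : k ones, no two adjacent."""
    return re.compile("0*" + "(?:100*)" * (k - 1) + "10*")

pattern = fibonacci_regex(2)
strings = ["".join(bits) for bits in product("01", repeat=4)]
print([s for s in strings if pattern.fullmatch(s)])   # ['0101', '1001', '1010']
```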
An example of a DFA representing the regular expression for \(\left|\mathcal{D}_{k}^{*}\right\rangle\) (with \(k=4\)) is shown in Fig. 8. Note that if we limit the number of transitions through the DFA to \(m\), the regular expression for \(\left|\mathcal{D}_{k}^{*}\right\rangle\) reduces to that for \(\left|\mathcal{D}_{k}^{m}\right\rangle\). For general \(k\), the DFA is defined on \(2k\) states \(\left\{\mathrm{S}_{0},\mathrm{A}_{j},\mathrm{B}_{j},\mathrm{F}_{k}\right\}_{j=1\ldots k-1}\). The DFA starts at \(\mathrm{S}_{0}\) and ends at \(\mathrm{F}_{k}\). The first computational state can either be \(\left|0\right\rangle\), which transitions the DFA back to \(\mathrm{S}_{0}\), or \(\left|1\right\rangle\), which transitions the DFA to \(\mathrm{A}_{1}\). We denote this by
\[\mathrm{S}_{0}(\left|0\right\rangle) \rightarrow\mathrm{S}_{0} \tag{4.5}\] \[\mathrm{S}_{0}(\left|1\right\rangle) \rightarrow\mathrm{A}_{1}\]
For the intermediate states, we have
\[\mathrm{A}_{j}(\left|0\right\rangle) \rightarrow\mathrm{B}_{j} \tag{4.6}\] \[\mathrm{B}_{j}(\left|0\right\rangle) \rightarrow\mathrm{B}_{j}\] \[\mathrm{B}_{j}(\left|1\right\rangle) \rightarrow\mathrm{A}_{j+1}\]
Finally, the DFA terminates at the final state \(\mathrm{F}_{k}\), where
\[\mathrm{B}_{k-1}(\left|1\right\rangle) \rightarrow\mathrm{F}_{k} \tag{4.7}\] \[\mathrm{F}_{k}(\left|0\right\rangle) \rightarrow\mathrm{F}_{k}\]
All these transition rules can be summarized by a transition matrix \(\mathcal{M}\), where \(\mathcal{M}_{i,j}\) gives the character needed to take the DFA from state \(i\) to state \(j\):
\[\mathcal{M}_{\mathrm{S}_{0},\mathrm{S}_{0}} =\left|0\right\rangle, \mathcal{M}_{\mathrm{S}_{0},\mathrm{A}_{1}} =\left|1\right\rangle, \tag{4.8}\] \[\mathcal{M}_{\mathrm{A}_{j},\mathrm{B}_{j}} =\left|0\right\rangle, \mathcal{M}_{\mathrm{B}_{j},\mathrm{A}_{j+1}} =\left|1\right\rangle, \mathcal{M}_{\mathrm{B}_{j},\mathrm{B}_{j}} =\left|0\right\rangle,\] \[\mathcal{M}_{\mathrm{B}_{k-1},\mathrm{F}_{k}} =\left|1\right\rangle, \mathcal{M}_{\mathrm{F}_{k},\mathrm{F}_{k}} =\left|0\right\rangle.\]
The full transition matrix is then
Figure 8: A graphical representation of the DFA transition matrix (4.9) for the state \(\left|\mathcal{D}_{4}^{*}\right\rangle\).
\[\mathcal{M}=\begin{array}{c}\begin{array}{ccccccc}\;\;\text{S}_{0}&\text{A}_{1}&\text{B}_{1}&\cdots&\text{A}_{k-1}&\text{B}_{k-1}&\text{F}_{k}\end{array}\\ \begin{pmatrix}\left|0\right\rangle&\left|1\right\rangle&0&\cdots&0&0&0\\ 0&0&\left|0\right\rangle&\cdots&0&0&0\\ 0&0&\left|0\right\rangle&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&0&\left|0\right\rangle&0\\ 0&0&0&\cdots&0&\left|0\right\rangle&\left|1\right\rangle\\ 0&0&0&\cdots&0&0&\left|0\right\rangle\end{pmatrix}\begin{array}{c}\text{S}_{0}\\ \text{A}_{1}\\ \text{B}_{1}\\ \vdots\\ \text{A}_{k-1}\\ \text{B}_{k-1}\\ \text{F}_{k}\end{array}\end{array} \tag{4.9}\]
To obtain the matrices from Eq. (4.2), separate this transition matrix as \(\mathcal{M}=M^{0}\ket{0}+M^{1}\ket{1}\), where
\[M^{0}=\begin{pmatrix}1&0&0&0&0&\cdots&0&0&0\\ 0&0&1&0&0&\cdots&0&0&0\\ 0&0&1&0&0&\cdots&0&0&0\\ 0&0&0&0&1&\cdots&0&0&0\\ 0&0&0&0&1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&0&0&\cdots&0&1&0\\ 0&0&0&0&0&\cdots&0&1&0\\ 0&0&0&0&0&\cdots&0&0&1\end{pmatrix},\quad M^{1}=\begin{pmatrix}0&1&0&0&0&\cdots&0&0&0\\ 0&0&0&0&0&\cdots&0&0&0\\ 0&0&0&1&0&\cdots&0&0&0\\ 0&0&0&0&0&\cdots&0&0&0\\ 0&0&0&0&0&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&0&0&\cdots&0&0&0\\ 0&0&0&0&0&\cdots&0&0&1\\ 0&0&0&0&0&\cdots&0&0&0\end{pmatrix} \tag{4.10}\]
That is, \(M^{0}\) is obtained by applying \(\bra{0}\) to every entry in \(\mathcal{M}\), and \(M^{1}\) is obtained by applying \(\bra{1}\) to every entry in \(\mathcal{M}\). These are \(2k\times 2k\) matrices, and so the overall bond dimension of the MPS will be \(2k\). Also, since we start at state \(\text{S}_{0}\) and end at state \(\text{F}_{k}\), the boundary vectors become
\[\begin{split}\bra{L}&=\begin{pmatrix}1&0&\cdots&0\end{pmatrix} \\ \ket{R}&=\begin{pmatrix}0&\cdots&0&1\end{pmatrix}^{T}\end{split} \tag{4.11}\]
One can now read Eq. (4.2) from left to right as a traversal through the DFA in Fig. 8, starting at DFA state \(\text{S}_{0}\), making \(m\) transitions according to the spin configuration \(\vec{s}\), and ending at DFA state \(\text{F}_{k}\).
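The DFA-based MPS can be assembled and checked numerically for small examples. The sketch below builds \(M^{0}\), \(M^{1}\) and the boundary vectors for general \(k\) (with the bond-space ordering \(\mathrm{S}_{0},\mathrm{A}_{1},\mathrm{B}_{1},\ldots,\mathrm{A}_{k-1},\mathrm{B}_{k-1},\mathrm{F}_{k}\)) and evaluates the amplitude \(\langle L|M^{s_{1}}\cdots M^{s_{m}}|R\rangle\) for a few bitstrings.

```python
import numpy as np

def dfa_mps_matrices(k):
    """Transition matrices M^0, M^1 of Eq. (4.10); bond dimension 2k, bond states
    ordered S0, A1, B1, ..., A_{k-1}, B_{k-1}, Fk."""
    chi = 2 * k
    M0, M1 = np.zeros((chi, chi)), np.zeros((chi, chi))
    M0[0, 0] = 1.0                                    # S0 --|0>--> S0
    M1[0, 1] = 1.0                                    # S0 --|1>--> A1
    for j in range(1, k):
        a, b = 2 * j - 1, 2 * j                       # indices of A_j and B_j
        M0[a, b] = 1.0                                # A_j --|0>--> B_j
        M0[b, b] = 1.0                                # B_j --|0>--> B_j
        M1[b, 2 * j + 1 if j < k - 1 else chi - 1] = 1.0   # B_j --|1>--> A_{j+1} (or Fk)
    M0[chi - 1, chi - 1] = 1.0                        # Fk --|0>--> Fk
    return M0, M1

def amplitude(bits, M0, M1):
    """<L| M^{s_1} ... M^{s_m} |R>, read left to right as a DFA traversal."""
    v = np.eye(M0.shape[0])[0]                        # <L| starts at S0
    for s in bits:
        v = v @ (M1 if s == "1" else M0)
    return v @ np.eye(M0.shape[0])[-1]                # |R> projects onto Fk

M0, M1 = dfa_mps_matrices(k=2)
for bits in ("0101", "1001", "1010", "0110", "1100"):
    print(bits, amplitude(bits, M0, M1))   # 1 on the support of |D_2^4>, 0 otherwise
```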
The resulting MPS can be converted into a quantum circuit by following the sequential preparation procedure of Ref. [77]. Orthogonalizing the MPS in one direction results in \(m\) two-level unitary matrices of width \(\lceil\log_{2}4k\rceil\). Using Gray codes [80], each of these unitaries can be decomposed into \(O(k)\) controlled-unitary gates, each spanning over \(O(\ln k)\) qubits [81]. Each of these controlled-unitaries then requires \(O(\ln^{2}k)\) CNOT gates. The depth of the resulting circuit is then \(O\left(mk\ln^{2}k\right)\). Fig. 9 depicts an example preparation circuit for the state \(\ket{\mathcal{D}_{2}^{6}}\). The circuit makes use of the following single-qubit unitaries:
\[U_{1} =\begin{pmatrix}-\sqrt{\frac{2}{5}}&-\sqrt{\frac{3}{5}}\\ -\sqrt{\frac{3}{5}}&\sqrt{\frac{2}{5}}\end{pmatrix} \tag{4.12}\] \[U_{2} =\begin{pmatrix}\sqrt{\frac{1}{2}}&\sqrt{\frac{1}{2}}\\ \sqrt{\frac{1}{2}}&-\sqrt{\frac{1}{2}}\end{pmatrix}\] \[U_{3} =\begin{pmatrix}-\sqrt{\frac{1}{4}}&\sqrt{\frac{3}{4}}\\ \sqrt{\frac{3}{4}}&\sqrt{\frac{1}{4}}\end{pmatrix}\] \[U_{4} =\begin{pmatrix}\sqrt{\frac{2}{3}}&\sqrt{\frac{1}{3}}\\ \sqrt{\frac{1}{3}}&-\sqrt{\frac{2}{3}}\end{pmatrix}\] \[U_{5} =\begin{pmatrix}-\sqrt{\frac{1}{3}}&\sqrt{\frac{2}{3}}\\ \sqrt{\frac{2}{3}}&\sqrt{\frac{1}{3}}\end{pmatrix}.\]
### Polynomial Depth Variational Ansatz
A strategy similar to the recursive linear circuit used in the preparation of the \(\ket{\xi}\) state suggests a variational circuit architecture for creating the \(\ket{\mathcal{S}_{k}}\) state should exist. Up to unimportant relative phase factors, the \(\ket{\mathcal{S}_{k}}\) state is an equal-weight superposition of all bitstrings of \(N\) sites, such that there are \(k\) 1s, the first and last sites are 0, and there are no two neighboring 1s. All such
basis states can be generated by a four-qubit building-block unitary operator that transforms the state \(\left|0100\right\rangle\) to a linear superposition of states \(\left|0100\right\rangle\) and \(\left|0010\right\rangle\)[82], and acts trivially on all other 14 basis states.
One can construct the unitary by applying a two-qubit gate on the two middle sites controlled by the first and the last qubit such that the gates act nontrivially only when the first and the last qubits are 0, in which case the two middle qubits transform as
\[\left|0\right\rangle_{j+1}\left|1\right\rangle_{j+2}\rightarrow\cos(\theta_{j })\left|0\right\rangle_{j+1}\left|1\right\rangle_{j+2}+\sin(\theta_{j})\left|1 \right\rangle_{j+1}\left|0\right\rangle_{j+2}\]
We have chosen a building block with only real elements for easier optimization.
Mathematically, we can write the unitary \(U_{j}(\theta_{j})\) acting on qubits \(j\) to \(j+3\) as
\[U_{j}(\theta)=\exp\left[i\frac{\theta}{2}P_{j}(X_{j+1}Y_{j+2}-Y_{j+1}X_{j+2})P _{j+3}\right], \tag{4.13}\]
where the operators \(P_{j}=\left|0\right\rangle_{j}\left\langle 0\right|_{j}\) implement the control on the first and last qubits and \((X_{j+1}Y_{j+2}-Y_{j+1}X_{j+2})\) acts as a Pauli-\(Y\) on the 01 and 10 states of the two middle qubits, while annihilating 00 and 11. The building block \(U_{j}\) is illustrated in Fig. 10(a).
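Equation (4.13) can be checked directly by exponentiating the generator for a chosen angle. The sketch below builds the \(16\times 16\) gate numerically and verifies that it rotates within the \(\{|0100\rangle,|0010\rangle\}\) pair with real amplitudes while acting trivially on the remaining basis states.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
P0 = np.diag([1.0 + 0j, 0.0])                       # |0><0| on the two control qubits

def kron(*ops):
    return reduce(np.kron, ops)

def building_block(theta):
    """U(theta) = exp[i theta/2 P (X Y - Y X) P] acting on four qubits, Eq. (4.13)."""
    G = kron(P0, X, Y, P0) - kron(P0, Y, X, P0)
    return expm(1j * theta / 2 * G)

theta = 0.7
U = building_block(theta)
v = np.zeros(16, dtype=complex); v[0b0100] = 1.0
w = U @ v
print(np.round(w[0b0100].real, 6), np.round(w[0b0010].real, 6))  # cos(theta), -sin(theta)
print(np.allclose(U @ U.conj().T, np.eye(16)))                   # unitarity check
```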
We now construct an ansatz circuit. Since the gates are designed to conserve both the total number of 1s, i.e., \(k\), and the Fibonacci constraint of no neighboring 1s, we start with the following initial state that has \(k\) 1s:
\[|\psi_{0}\rangle=|01\rangle^{\otimes k}|0\rangle^{\otimes N-2k}. \tag{4.14}\]
In Fig. 10(b), we show an example of the circuit acting on the above initial state for \(k=2\) and \(N=8\). Our
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(N\) & \(k\) & \(d\) & \(n_{a}\) & \(1-\left|\left\langle\psi|\mathcal{S}_{k}\right\rangle\right|^{2}\) \\ \hline
\hline \hline \end{tabular}
\end{table}
Table 1: The dimension of the constrained Hilbert space \(d\), the number of variational angles \(n_{a}\) and the minimum error \(1-|\left\langle\psi|\mathcal{S}_{k}\right\rangle|^{2}\) achieved with 2000 local constrained optimization runs starting from random initial angles, for several \(N\) and \(k\).
Figure 10: (a) The four-qubit gate with one variational angle serves as a building block for constructing the variational ansatz. These gates conserve the number of ones and preserve the Fibonacci constraint while allowing transformations between all basis states in the constrained Hilbert space. (b) An example of the general architecture of the variational circuit for \(N=8\) and \(k=2\).
ansatz consists of multiple staircases of the \(U_{j}\) gates. In the first layer, we apply \(U_{1}U_{3}\cdots U_{2k-3}U_{2k-1}\) to move the last \(1\) from qubit \(2k\) to qubit \(2k+1\) and generate all states with the last \(1\) at or before qubit \(2k+1\). The next layer, \(U_{2}U_{4}\cdots U_{2k-2}U_{2k}\), generates all states with the last \(1\) at or before \(2k+2\). These staircases continue until we reach the staircase starting with \(U_{N-3}\), acting on the last four qubits. All these staircases have unitaries acting on every other site. These layers generate all \(d\) states. However, we have found that the number of variational parameters is insufficient to obtain an equal-weight superposition. Thus the ansatz also contains a complete staircase at the end with a unitary \(U_{1}U_{2}\cdots U_{N-3}\) to equalize the probabilities. We then optimize the variational angles in these four-qubit unitaries to generate an equal-weight superposition of all \(d\) basis states by minimizing \(1-|\langle\psi|\mathcal{S}_{k}\rangle|^{2}\), where \(|\psi\rangle\) is the state after the application of the variational unitaries. A final layer of \(Z\) gates corrects the signs of the amplitudes.
Two important characteristics of this circuit are the dimension \(d\) of the constrained Hilbert space and the number \(n_{a}\) of variational parameters. While \(d\) scales exponentially in the asymptotic limit, \(n_{a}\) is quadratic in system size. The dimension of the constrained Hilbert space is \(d=\mathcal{N}(N,k)=\binom{N-k-1}{k}\). We need our circuit to generate all the states subject to the Fibonacci constraint and the conserved number of \(1\)s. Thus, the basis states can be viewed as \(k\) blocks of \(01\) interspersed among \(N-2k-1\) zeros, with one zero appended at the end of the chain. The number of variational angles \(n_{a}\) is easy to calculate. For even \(N\), the first and the second layer have \(k\) gates each, the third and fourth layer \(k+1\) gates, and so forth until we reach the staircase with \(N/2-2\) gates, with a gate starting at every other site from \(2\) to \(N-4\). We then have a staircase with \(N/2-1\) gates and then a full staircase with \(N-3\) gates. Thus \(n_{a}=2[k+(k+1)+...+(N/2-2)]+(N/2-1)+(N-3)\), which simplifies to
\[n_{a}=N^{2}/4-k(k-1)-2,\quad N\text{ even}. \tag{4.15}\]
Similarly for odd \(N\), we have \(n_{a}=2[k+(k+1)+...+(N+1)/2-2]+(N-3)\), which leads to
\[n_{a}=(N^{2}-1)/4-k(k-1)-2,\quad N\text{ odd}. \tag{4.16}\]
Importantly, the number of variational gates is quadratic in both \(N\) and \(k\).
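Both counts are easy to verify for small systems. The sketch below compares the brute-force dimension of the constrained space with \(\binom{N-k-1}{k}\) and evaluates \(n_{a}\) from Eqs. (4.15) and (4.16).

```python
from itertools import product
from math import comb

def constrained_dim(N, k):
    """Brute-force count of length-N bitstrings with k ones, no two adjacent ones,
    and 0 on the first and last site."""
    return sum(1 for bits in product("01", repeat=N)
               if (s := "".join(bits)).count("1") == k
               and s[0] == "0" and s[-1] == "0" and "11" not in s)

def n_angles(N, k):
    """Number of variational angles from Eqs. (4.15) and (4.16)."""
    return (N * N - (N % 2)) // 4 - k * (k - 1) - 2

for N, k in [(8, 2), (8, 3), (10, 3), (11, 4)]:
    print(N, k, constrained_dim(N, k), comb(N - k - 1, k), n_angles(N, k))
```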
The numerical optimization results are listed for several \(N\) and \(k\) in Table 1. The system exhibits many local minima (local minimization routines with different initial angles tend to converge to different optimal angles), making global optimization challenging. However, we have found that constrained local optimization with random initial angles between \(0\) and \(2\pi\) finds solutions with reasonably small error \(1-|\langle\psi|\mathcal{S}_{k}\rangle|^{2}<0.01\). For many cases with smaller \(n_{a}\) and \(d\), and less complex optimization, the minimum error is zero to within machine precision, suggesting the \(|\mathcal{S}_{k}\rangle\) state may be exactly reachable with the ansatz architecture. In practice, we can only find very good approximations for large \(d\) and \(n_{a}\).
### QPU Results
The doubly-controlled gates are essential building blocks of the state preparation circuits described in Secs. III.1 and IV.2. Compiling them into the set of native gates often produces quantum programs that are relatively expensive in two-qubit gate count. For experimental realization of the \(|\mathcal{S}_{k}\rangle\) state preparation, we therefore consider the specific case of \(k=k_{\text{max}}\equiv N/2-1\) for which the alternative circuit in Fig. 11 that involves only \((N-3)\) two-qubit gates can be used.
We can deterministically realize the maximal \(k\) condition by setting the spin state of the \((i+1)\)-th qubit to be opposite to that of the \(i\)-th qubit for all \(i\in\{2,4,\cdots,N-2\}\). This can be implemented by initializing the target qubit in \(|0\rangle\) and then acting with the controlled bit-flip operator,
\[\text{C}_{0}\text{NOT}_{i,i+1}=X_{i}\cdot\text{CNOT}_{i,i+1}\cdot X_{i}, \tag{4.17}\]
triggered only when the control qubit is in the \(|0\rangle\) state. Next, the superposition state can be prepared with a variant of the unitary block Eq. (3.2) where we modify the controlled rotations to be triggered only if the control qubit is in the \(|1\rangle\) state. Running it on the even-indexed qubits between \(2\leq i\leq N-2\), we obtain a superposition of the following bitstring states,
\[|0(10)^{a}(00)^{b}0\rangle\ \ \text{satisfying}\ \ a+b=k_{\text{max}}. \tag{4.18}\]
Upon acting with Eq. (4.17) on all the adjacent \((i,i+1)\) pairs with \(i\in\{2,4,\cdots,N-2\}\), the superposition state becomes
\[\sum_{a+b=k_{\text{max}}}c_{a,b}\,|0(10)^{a}(01)^{b}0\rangle \tag{4.19}\]
where \(c_{a,b}\in\mathbb{R}\) and \(\sum_{a,b}c_{a,b}^{2}=1\). Lastly, Eq. (4.19) becomes the equal-amplitude superposition state \(|0\rangle\otimes|\mathcal{D}_{k_{\text{max}}}^{N-2}\rangle\otimes|0\rangle\) if the Pauli rotation parameters are
\[\theta_{i}=2\tan^{-1}(\sqrt{N/2-i}). \tag{4.20}\]
Figure 11 illustrates the \(|\mathcal{S}_{k_{\text{max}}}\rangle\) state preparation circuit.
We execute the state preparation circuit on Aspen M-2 and M-3. Ideally, if we perform Pauli-\(Z\) measurements after running the circuit, the resulting \(N\)-bitstrings must include \(k_{\text{max}}\) non-consecutive \(1\)'s and begin and end with \(0\) by construction. However, due to limitations of current NISQ hardware, the circuit simulation of the \(|\mathcal{S}_{k_{\text{max}}}\rangle\) state often induces an error in the total spin measurement \(k=k_{\text{max}}\) and may violate the boundary condition or the Fibonacci constraint. Therefore, to increase the fidelity of the simulation, we post-select the measurement bitstrings along with optional application of the readout error mitigation and randomized compiling techniques.
Working in the compressed \(k_{\text{max}}\)-qubit encoding (keeping only the even-indexed qubits \(2,4,\ldots,N-2\)) is equivalent to decoupling the non-entangled boundary spins and replacing the error-prone quantum operator Eq. (4.17) with deterministic post-processing. Its use of fewer qubits also brings the advantage of circumventing non-linear qubit connectivity and considerable error reduction. To make sure the Fibonacci constraint is satisfied, we apply the following projection operator
\[\mathcal{P}=\prod_{i=2,4,\cdots,N-4}(1-P_{i}P_{i+2}^{\prime}) \tag{4.21}\]
that removes \(|01\rangle\) configurations in the compressed \(k_{\text{max}}\)-qubit encoding, or equivalently, \(|0110\rangle\) configurations in the full \(N\)-qubit representation.
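In the compressed encoding this projection reduces to a one-line classical filter on the measured bitstrings; the counts below are hypothetical.

```python
def fibonacci_postselect(counts):
    """Classical form of the projector (4.21): keep compressed k_max-qubit bitstrings
    with no '01' substring (i.e., no '0110' block in the full N-qubit register)."""
    return {s: c for s, c in counts.items() if "01" not in s}

# Hypothetical counts on k_max = 5 compressed qubits (N = 12):
counts = {"11000": 412, "10000": 395, "11100": 388, "01100": 17, "10010": 9}
print(fibonacci_postselect(counts))   # the last two strings are discarded
```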
Having run the state preparation program, we measure the errors through the Bhattacharyya distance defined in Eq. (3.17) between the ideal and empirical bitstring distributions and the expectation value of the spin-\(1/2\) Hamiltonian from Eq. (2.1). With the \(k_{\text{max}}\)-qubit encoding, the Hamiltonian is mapped to
\[H_{0}= \,\lambda H_{\lambda}+\Delta H_{\Delta}+JH_{J} \tag{4.22}\]
where
\[\begin{split} H_{\lambda}&=0,\quad H_{\Delta}=2,\quad\text{and}\\ H_{J}&=\,Z_{2}-Z_{N-2}-\tfrac{N}{2}+1-\sum_{i=1}^{ N/2-2}Z_{2i}Z_{2i+2}.\end{split} \tag{4.23}\]
It is obtained by applying the encoding rules of Table 2, where vanishing substitutions on the second and third columns impose the total spin selection rule \(k=k_{\text{max}}\). This reduces the task of computing \(\langle H_{0}\rangle\) to measuring \(\langle H_{J}\rangle\).
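Estimating \(\langle H_{J}\rangle\) then amounts to classical post-processing of \(Z\)-basis counts, since \(H_{J}\) is diagonal in that basis. The sketch below assumes that the leftmost character of each compressed bitstring corresponds to physical qubit 2, and the sample counts are hypothetical.

```python
def energy_HJ(counts, N):
    """Estimate <H_J> of Eq. (4.23) from Z-basis counts on the k_max = N/2 - 1
    compressed qubits (string position 0 <-> physical qubit 2, position 1 <-> qubit 4, ...)."""
    z = lambda bit: 1 if bit == "0" else -1          # Z eigenvalue of a measured bit
    shots = sum(counts.values())
    total = 0.0
    for s, c in counts.items():
        e = z(s[0]) - z(s[-1]) - N / 2 + 1
        e -= sum(z(s[i]) * z(s[i + 1]) for i in range(len(s) - 1))
        total += c * e
    return total / shots

# Ideal |S_kmax> samples populate only strings of the form 1^a 0^b (N = 12, k_max = 5):
ideal_counts = {"1" * a + "0" * (5 - a): 100 for a in range(6)}
print(energy_HJ(ideal_counts, N=12))   # -9.0, i.e. the ideal value -(N - 3)
```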
Fig. 12 collects the results of the \(|\mathcal{S}_{k_{\text{max}}}\rangle\) state preparation experiment, showing the Bhattacharyya distance, the energy \(\langle H_{J}\rangle\) and its error, and the success probability of the Fibonacci projection, see Eq. (4.21). Individual sample points are drawn as round dots, and their averages are connected as the solid line. Each dot aggregates \(10^{4}\) shots for the energy and success probability, and \(10^{5}\) shots for the Bhattacharyya distance. We record the qubit indices used for the QPU experiment in the caption of Fig. 12.
We consider the raw samples without the readout error mitigation and randomized compiling techniques, denoted as green dots. We note a rapid drop of the success probability as \(N\) increases. Unlike the probabilistic \(|\xi\rangle\) state preparation protocol in Section III.2, the noiseless probability should be \(1\), and thus the decrease from \(1\) highlights the detrimental effects of the noise. While the measured Bhattacharyya distance after the Fibonacci projection remains close to \(0\), the estimated energy \(\langle H_{J}\rangle\) exhibits a linearly growing error relative to the ideal value \(-(N-3)\), reaching around \(10\%\) at \(N=14\). Also note that the noise can create spurious samples whose \(\langle H_{J}\rangle\) is less than the theoretical minimum of \(H_{J}\) in Eq. (4.23).
Next, we try various error mitigation techniques to improve the estimate accuracy. To reduce the measurement error, the iterative Bayesian unfolding [73] uses the confusion matrix of \(8\)-bit strings that reflects Aspen-M's octagonal layout. The samples obtained from the modified bitstrings are marked as blue dots and feature interesting changes. They have a higher success probability--i.e., they remain more in the Fibonacci subspace--and feature less error in estimating \(\langle H_{J}\rangle\) after the Fibonacci projection. However, their Bhattacharyya distance also grows significantly, with an increased sample variance. Application of randomized compilation [74] and readout symmetrization [75] can help to reduce the Bhattacharyya distance while maintaining other improvements, as seen from the orange entries in Fig. 12.
Figure 13 displays the results of the \(|\mathcal{S}_{k_{\text{max}}}\rangle\) state tomography after applying the aforementioned error mitigation methods and the Fibonacci post-projection. It illuminates the sparsity of the \(|\mathcal{S}_{k_{\text{max}}}\rangle\) state that populates only \(N/2\) bitstrings out of \(2^{k_{\text{max}}}\) different choices.
## V Conclusion and outlook
In this work we have explored various approaches to preparing quantum many-body scar states and their superpositions on quantum computers with a focus on the spin-\(1/2\) chain model of Ref. [27], where an emergent Fibonacci constraint ensures that superpositions of
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Before & After & Before & After & Before & After & Before & After \\ \hline \(Z_{1}\) and \(Z_{N}\) & \(1\) & \(X_{2i}Z_{2i+1}\) & \(0\) & \(X_{2i}I_{2i+1}\) & \(0\) & \(X_{2i}X_{2i+1}\) & \(X_{2i}\) \\ \(Z_{2i}I_{2i+1}\) & \(Z_{2i}\) & \(Y_{2i}Z_{2i+1}\) & \(0\) & \(Y_{2i}I_{2i+1}\) & \(0\) & \(X_{2i}Y_{2i+1}\) & \(-Y_{2i}\) \\ \(I_{2i}Z_{2i+1}\) & \(-Z_{2i}\) & \(Z_{2i}X_{2i+1}\) & \(0\) & \(I_{2i}X_{2i+1}\) & \(0\) & \(Y_{2i}X_{2i+1}\) & \(Y_{2i}\) \\ \(Z_{2i}Z_{2i+1}\) & \(-1\) & \(Z_{2i}Y_{2i+1}\) & \(0\) & \(I_{2i}Y_{2i+1}\) & \(0\) & \(Y_{2i}Y_{2i+1}\) & \(X_{2i}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Projection rule for \(N\)-qubit Pauli strings to Pauli operators on the \(k_{\text{max}}\)-qubit Hilbert space with \(1\leq i\leq k_{\text{max}}\).
scarred eigenstates must be entangled. We provided linear-depth circuits to prepare a one-parameter family of area-law-entangled superpositions of scarred eigenstates, as well as a nonunitary preparation scheme that uses measurement and postselection to prepare the same superposition in constant depth. We also derived an MPS representation of the tower of scarred eigenstates and used this to generate a quasipolynomial-depth circuit to prepare individual scar states; additionally, we proposed a variational scheme based on a polynomial-depth ansatz that captures the scar states with fidelity at least \(99\%\) in all numerically accessible cases. Proof-of-concept demonstrations of scar-state preparation were executed on quantum hardware.
While the state preparation protocols explored in this work were formulated with a specific model in mind, they can readily be adapted to other towers of scar states. For instance, entangled superposition states analogous to \(|\xi\rangle\) are known for both the AKLT model [52] and the bond-bimagnon tower in the spin-1 XY model [51; 26]. In
Figure 12: The execution results of the \(|\mathcal{S}_{k_{\max}}\rangle\) state preparation circuit on Aspen-M devices, which uses the \(k_{\max}=\frac{N}{2}-1\) qubit encoding for \(6\leq N\leq 14\). The indices of the participating Aspen nodes are: \(\{147,140\}\) (M-3) for \(N=6\), \(\{147,140,141\}\) (M-3) for \(N=8\), \(\{16,17,10,11\}\) (M-2) for \(N=10\), \(\{11,10,17,16,1\}\) (M-2) for \(N=12\), \(\{26,11,10,17,16,1\}\) (M-2) for \(N=14\). The EXACT curve represents expected values under noiseless execution. The RAW samples are obtained through the postselection of measurement bitstrings. The bitstrings for the REM samples undergo correction via readout error mitigation, i.e., iterative Bayesian unfolding, prior to the post-selection. The REM+SYMM+RC samples are averages of error-corrected and post-selected measurement outcomes, taken over 30 logically equivalent circuits with Pauli twirling and readout symmetrization. Results in the middle and right panels are obtained with \(10^{4}\) shots per sample and in the left panel with \(10^{5}\) shots per sample. Specifically, an individual RC circuit contributes approximately \(\frac{10^{4}}{30}\) or \(\frac{10^{5}}{30}\) shots to the collected REM+SYMM+RC samples. We gather 5 sample points for each quantity, varying the value of \(N\). Error bars denote error of the mean over the different samples. (a) Bhattacharyya distance, which is a measure of the difference between the ideal and measured bitstring probabilities versus system size \(N\). (b) Energy versus \(N\), which ideally takes the value of \(-(N-3)\). (c) Energy error versus \(N\), computed from \(100\frac{|\langle H_{J}\rangle+N-3|}{N-3}\). (d) Observed postselection success probability versus \(N\).
Figure 13: The \(|\mathcal{S}_{k_{\max}}\rangle\) state tomography in the compressed \(k_{\max}\)-qubit encoding at \(N=6\) and \(14\). It uses the \(\{147,140\}\) (\(N=6\)) and \(\{26,11,10,17,16,1\}\) (\(N=14\)) device qubits on Aspen M-3 and M-2, respectively. The \((i,j)\) block of the orange and blue density plots shows \(||\rho_{i,j}||\) and \(\arg(\rho_{i,j})\), respectively, where \(\rho_{i,j}\equiv\langle i|\mathcal{S}_{k_{\max}}\rangle\langle\mathcal{S}_{k_ {\max}}|j\rangle\) and the integers \(i,j\) are in the binary representation. The nearly empty appearance of the density plot in the right panel (\(N=14\)) manifests the sparsity of the scar state \(|\mathcal{S}_{k_{\max}}\rangle\) which only occupies \(N/2\) out of \(2^{k_{\max}}\) possible bitstrings. The inset plots show the ideal density matrices for comparison.
both cases, the initial state is a finite-bond-dimension MPS which can be viewed as the projection of a simple wavefunction on \(\chi\)-dimensional qudits. One expects that nonunitary methods along the lines of Sec. III could be used to prepare such states. Likewise, both towers admit exact MPS representations that can be converted to quasipolynomial-depth quantum circuits. It is also worth noting that simpler state preparation protocols can be used in some cases--for example, the bimagnon tower in the spin-1 XY model [26] can be superposed into a product state that is trivial to prepare, and the scarred eigenstates themselves are effective spin-1/2 Dicke states that can be generated using a slight variation on the approach of Ref. [76].
Several state preparation techniques are worth future investigation. First, for the superposition state \(\ket{\xi}\), it is desirable to find a deterministic (i.e., postselection-free) finite-depth state preparation protocol in which unwanted ancilla measurement outcomes are corrected by local unitary operations instead of being discarded. Such a scheme can leverage the fact that the local form of the state in the vicinity of an ancilla measurement outcome of 1 is fixed. For example, consider stitching together two states of the form \(\ket{\tilde{1};m}\) with a single ancilla measurement, and suppose the measurement outcome was 1. Then the primary qubit register is in the state
\[\ket{\tilde{1};m-2}\otimes\ket{0110}\otimes\ket{\tilde{1};m-2}, \tag{5.1}\]
since the state away from the central two sites (which are projected onto \(\ket{11}\) by the measurement) obeys the Fibonacci constraint. One can imagine trying to correct the four-site block \(\ket{0110}\) using a unitary circuit controlled by the states of the two adjacent qubits from the \((m-2)\)-site blocks to its left and right. However, the target states in the four cases (labeled by the four possible states of the adjacent qubits) have different normalization factors due to the Fibonacci constraint; therefore, the state of the qubit chain after feedback will not be an equal-amplitude superposition state as desired. This situation should be contrasted with, for example, a recently proposed finite-depth deterministic scheme to prepare the AKLT ground state [68], which is able to correct undesired measurement outcomes with a local circuit. However, the correction operation designed in that work makes use of the fact that the AKLT state is a symmetry-protected topological state [83, 84], which is a property that is not shared by the state \(\ket{\xi}\). We leave the question of whether an undesired measurement outcome in our probabilistic protocol can be corrected by a finite-depth circuit for future work.
With regard to the states \(\ket{\mathcal{S}_{k}}\), it would be interesting to determine whether the variational ansatz circuit explored in Sec. IV.2 can represent the \(\ket{\mathcal{S}_{k}}\) states exactly. This would conclusively demonstrate that the scarred eigenstates of the model in Eq. (2.1) can be prepared in polynomial depth (as opposed to quasipolynomial depth as shown in Sec. IV.1), which is known to be possible for, e.g., Dicke states [76]. Attempts to find an exact solution for the rotation angles in the circuit architecture of Sec. IV.2 yield \(n_{a}\) coupled nonlinear equations that are intractable in all but the simplest cases. It would be interesting to see whether a modified circuit architecture involving gates of the form in Eq. (4.13) can be used to obtain an analytically tractable system of equations for the rotation angles.
It may also be worth considering whether the recursive method of Ref. [76] to prepare Dicke states can be adapted to exactly prepare the \(\ket{\mathcal{S}_{k}}\) states, or equivalently the projected Dicke states \(\ket{\mathcal{D}_{k}^{m}}\) [see Eq. (4.1)], in polynomial depth. The recursive construction of a state preparation circuit in Ref. [76] hinges on a recursion relation for Dicke states with different magnetizations. The projected Dicke states also obey a recursion relation:
\[\begin{split}\ket{\mathcal{D}_{k}^{m}}&=\sqrt{\frac{k}{m-k+1}}\ket{\mathcal{D}_{k-1}^{m-2}}\otimes\ket{01}\\ &\quad+\sqrt{\frac{m-2k+1}{m-k+1}}\ket{\mathcal{D}_{k}^{m-1}}\otimes\ket{0}.\end{split} \tag{5.2}\]
However, unlike the recursion relation for Dicke states, this one relates \(\ket{\mathcal{D}_{k}^{m}}\) to both \(\ket{\mathcal{D}_{k}^{m-1}}\) and \(\ket{\mathcal{D}_{k-1}^{m-2}}\), which are defined on qubit registers of different sizes. This complicates the possibility of a straightforward modification of the protocol of Ref. [76], and we leave this question for future work.
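The coefficients in the recursion can be verified by counting Fibonacci strings: among the \(\binom{m-k+1}{k}\) length-\(m\) bitstrings with \(k\) nonadjacent ones, the squared coefficients are the fractions of strings ending in \(01\) and in \(0\), respectively. A quick numerical check:

```python
from math import comb

def nfib(m, k):
    """Number of length-m bitstrings with k ones and no two adjacent ones."""
    return comb(m - k + 1, k)

for m, k in [(6, 2), (9, 3), (12, 5)]:
    frac_01 = nfib(m - 2, k - 1) / nfib(m, k)      # weight on |D_{k-1}^{m-2}> (x) |01>
    frac_0 = nfib(m - 1, k) / nfib(m, k)           # weight on |D_k^{m-1}> (x) |0>
    print(abs(frac_01 - k / (m - k + 1)) < 1e-12,
          abs(frac_0 - (m - 2 * k + 1) / (m - k + 1)) < 1e-12)
```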
Another potentially useful avenue to explore is the application of nonunitary methods to prepare the scarred eigenstates \(\ket{\mathcal{S}_{k}}\). In particular, these eigenstates can be prepared from the state \(\ket{\xi}\) by projectively measuring the magnetization operator \(M_{z}=\sum_{i=1}^{N}Z_{i}\). For any \(k=0,\ldots,N/2-1\), \(\ket{\mathcal{S}_{k}}\) is an eigenstate of \(M_{z}\) with eigenvalue \(N-2k\). Since each of these eigenvalues is unique, we see from Eq. (2.8) that the projection of \(\ket{\xi}\) into an \(M_{z}\) eigenspace must yield one of the \(\ket{\mathcal{S}_{k}}\). The probability of a given measurement outcome \(k\) is peaked around a value that depends on \(|\xi|^{2}\); this most probable outcome can be tuned from \(k=0\) as \(|\xi|^{2}\to 0\) to \(k=N/2-1\) as \(|\xi|^{2}\to\infty\). Thus, given the ability to measure \(M_{z}\) without fully collapsing the system into a \(z\)-basis eigenstate, one can in principle prepare the state \(\ket{\mathcal{S}_{k}}\) in constant depth. One way to achieve such a measurement is by a dispersive coupling \(\chi_{d}(\sum_{i}Z_{i})a^{\dagger}a\) between the qubits and a single cavity mode, where the measurement outcome would be recorded as a shift of the cavity frequency by an integer multiple of \(\chi_{d}\). A projective measurement of \(M_{z}\) can also be achieved using an ancilla register containing \(\lceil\log_{2}N\rceil\) qubits, one for each digit of the binary representation of the magnetization eigenvalue, using methods proposed in Refs. [85] and [86]. This state preparation method also suffers from postselection overhead beyond what is needed to prepare \(\ket{\xi}\): the probability of the most probable measurement outcome decays as a power law in \(N\) for large \(N\). However, it may have the advantage of comparatively modest circuit depth relative to the unitary state preparation protocols considered here.
Our results can feed into future studies of the stability
of scarred eigenstates and their dynamics on quantum computers. There are two experiments one would like to perform--one in which the state \(\ket{\xi}\) is evolved under a perturbation of the Hamiltonian in Eq. (2.1), and another in which the state \(\ket{\mathcal{S}_{k}}\) is evolved. In the former case, the goal is to extract the lifetime of the oscillatory scarred dynamics from the time series of a local operator's expectation value [87], while in the latter it is to extract the lifetime of the eigenstate from similar data. One relevant class of perturbations to consider is an external magnetic field in the \(x\)-direction, \(\epsilon\sum_{i}X_{i}\), which breaks the conservation of the Ising domain wall number \(n_{\text{DW}}\) by disrupting the balance of \(X\) and \(ZXZ\) in the first line of Eq. (2.1) [27]. The evolution operator over a small time step \(\delta t\) can be written in a Trotter product form as (setting \(J=0\) for simplicity)
\[U(\delta t)\approx e^{-i\Delta\delta t\sum_{i}Z_{i}}\,e^{i\lambda\delta t\sum_{i\,\mathrm{odd}}Z_{i-1}X_{i}Z_{i+1}}\,e^{-i(\lambda+\epsilon)\delta t\sum_{i}X_{i}}\,e^{i\lambda\delta t\sum_{i\,\mathrm{even}}Z_{i-1}X_{i}Z_{i+1}}. \tag{5.3}\]
The nontrivial \(ZXZ\) rotations above can be compiled using standard methods into four CNOT gates, two H gates, and an R\({}_{Z}\) gate. An informative figure of merit to track while evolving the state \(\ket{\xi}\) is the expectation value of, e.g., \(Z_{i-1}X_{i}Z_{i+1}\), which exhibits coherent oscillations under evolution by \(H_{0}\) that should decay in the presence of a nonzero \(\epsilon\). For \(\ket{\mathcal{S}_{k}}\), it is informative to track the expectation value of \(M_{z}\), which is not conserved under \(H_{0}\) for generic initial states but is conserved for the initial state \(\ket{\mathcal{S}_{k}}\); thus tracking \(\langle M_{z}(t)\rangle\) provides information about the fidelity decay of \(\ket{\mathcal{S}_{k}}\). These calculations are in principle straightforward to carry out on present-day quantum computers, but our QPU results in this work demonstrate that a significant fraction of the QPU's coherence budget will be expended on state preparation. Thus, any near-term demonstration of this calculation must employ substantial problem-tailored optimizations and error mitigation strategies, which should be explored in future work.
###### Acknowledgements.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under the contract No. DE-AC02-07CH11359 and through NASA-DOE interagency agreement SAA2-403602. Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. A.R. acknowledges support from NSF Award No. DMR-1945395. T.I. and A.R. acknowledge the Aspen Center for Physics, which is supported by NSF Grant No. PHY-1607611. M.S.A. acknowledges support from USRA NASA Academic Mission Services under contract No. NNA16BD14C. A.K. participated in the Feynman Quantum Academy internship program. We acknowledge helpful discussions with Andrew Arrasmith, Emanuele Dalla Torre, Bram Evert, Pouyan Ghaemi, Lesik Motrunich, Sanjay Moudgalya, Zlatko Papic, and Matt Reagor.
## Appendix A Adiabatic Preparation of \(\ket{\xi}\)
In this Appendix, we investigate the possibility of preparing the state \(\ket{\xi=1}\) [Eq. (5)] by performing a linear interpolation between the simple paramagnetic Hamiltonian
\[H_{X}=\sum_{i=2}^{N-1}\frac{I-(-1)^{i}X_{i}}{2}, \tag{A1}\]
Figure 14: Energy difference between instantaneous ground and first excited states, \(E_{1}-E_{0}\), versus interpolation parameter \(s/T\). The total numbers of spins are \(N=4\) (blue), \(N=6\) (red), \(N=8\) (green), \(N=10\) (black), \(N=12\) (magenta), \(N=14\) (cyan), and \(N=16\) (brown). The minimal energy gap occurs at \(s=T\) and approaches the exact value \(1-1/\sqrt{2}=0.292\dots\) (dashed horizontal line) for large \(N\). This is also shown in the inset, which contains the final energy gap between the first excited and the ground state at \(s=T\) versus \(1/N\). The red line shows a fit to a quadratic function with \(y\)-intercept \(0.2920\), which is consistent with the analytical value of the gap.
and the target Hamiltonian \(H_{P}\),
\[H_{P}=\sum_{i=2}^{N-1}\left(P^{\prime}_{i-1}P^{\prime}_{i}+P^{\prime}_{i}P^{\prime}_{i+1}+P_{i-1}\frac{I-(-1)^{i}X_{i}}{2}P_{i+1}\right). \tag{A2}\]
Note that \(H_{P}\) is simply \(H_{\xi=1}/2\) [see Eq. (3.18)], plus terms involving the projectors \(P^{\prime}_{i}=\left|1\right\rangle_{i}\left\langle 1\right|_{i}\) that ensure the ground state is in the Fibonacci Hilbert space. Both Hamiltonians are positive semidefinite (i.e., their ground states have zero energy) and conserve the \(Z\)-basis projection of the first and last qubit, which can be fixed to be in the \(0\) state. The ground state of \(H_{X}\) is then the product state \(\left|\psi(0)\right\rangle=\left|0\right\rangle\otimes\left|+-+-\cdots+-\right\rangle\otimes\left|0\right\rangle\), where \(\left|\pm\right\rangle\) satisfy \(\left(1\mp X_{i}\right)\left|\pm\right\rangle=0\), while the ground state of \(H_{P}\) is \(\left|\xi=1\right\rangle\). To adiabatically prepare the desired state, we evolve \(\left|\psi(0)\right\rangle\) under the time-dependent interpolating Hamiltonian
\[H(s)=\left(1-\frac{s}{T}\right)H_{X}+\frac{s}{T}H_{P} \tag{A3}\]
such that \(H(0)=H_{X}\) and \(H(T)=H_{P}\). Provided \(H(s)\) has an energy gap for all \(s\) and \(T\) is sufficiently large, the time-evolved state \(\left|\psi(T)\right\rangle\approx\left|\xi=1\right\rangle\) to a good approximation. The adiabatic time evolution was carried out with exact diagonalization of the Hamiltonian (A3). After \(\left|\psi(0)\right\rangle\) is prepared, the operator \(U(s,s+\delta s)\) is applied \(n_{s}=T/(\delta s)\) times with \(s=n\delta s\), \(0\leq n\leq n_{s}-1\). Here, \(n_{s}\) is the number of iterations, \(T\) is the total evolution time and \(\delta s\) is the time separation. The time evolution operator \(U(s,s+\delta s)=e^{-i(\delta s)H(s)}\) and the overlap \(\left|\,\langle\psi(s)|\xi=1\rangle\,\right|\) are calculated at each iteration.
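A compact NumPy/SciPy sketch of this procedure for a small chain is given below. It follows the interpolation (A3) but monitors \(\langle H_{P}\rangle\) rather than the overlap with \(|\xi=1\rangle\) (which is not constructed here), using the fact that the target state is the zero-energy ground state of \(H_{P}\); the chain length, total time, and step size are chosen only for illustration.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2, X = np.eye(2), np.array([[0., 1.], [1., 0.]])
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])        # |0><0| and |1><1|

def embed(N, ops):
    """Operator on N qubits: ops maps 1-based site -> 2x2 matrix, identity elsewhere."""
    return reduce(np.kron, [ops.get(i, I2) for i in range(1, N + 1)])

def build_hamiltonians(N):
    HX = sum(embed(N, {i: 0.5 * (I2 - (-1) ** i * X)}) for i in range(2, N))
    HP = sum(embed(N, {i - 1: P1, i: P1})
             + embed(N, {i: P1, i + 1: P1})
             + embed(N, {i - 1: P0, i + 1: P0}) @ embed(N, {i: 0.5 * (I2 - (-1) ** i * X)})
             for i in range(2, N))
    return HX, HP

N, T, ds = 6, 25.0, 0.05
HX, HP = build_hamiltonians(N)

# Ground state of H_X with the boundary qubits fixed to |0>:
plus, minus = np.array([1., 1.]) / np.sqrt(2), np.array([1., -1.]) / np.sqrt(2)
zero = np.array([1., 0.])
psi = reduce(np.kron, [zero] + [plus if i % 2 == 0 else minus for i in range(2, N)] + [zero])
psi = psi.astype(complex)

for n in range(int(T / ds)):                          # apply U(s, s + ds) = exp(-i ds H(s))
    s = n * ds
    H = (1 - s / T) * HX + (s / T) * HP
    psi = expm(-1j * ds * H) @ psi

print("final <H_P> =", float(np.real(psi.conj() @ HP @ psi)))   # ~0 when the sweep is adiabatic
```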
In Fig. 14, we plot the difference between the instantaneous energies of the ground and first excited states \(E_{1}-E_{0}\) of \(H(s)\) against the interpolation parameter, \(s/T\), for several values of \(N\). The minimum energy gap occurs at \(s=T\) and appears to saturate to a constant as \(N\) increases. In fact, an exact calculation of the energy gap of \(H_{P}\) gives \(1-1/\sqrt{2}=0.292\dots\)[56], which appears to be in reasonable agreement with the limiting value of the numerically observed gap as obtained from a quadratic extrapolation in \(1/N\) (see Fig. 14 inset). This indicates that high-fidelity adiabatic state preparation should be possible in a finite total time \(T\). In Fig. 15, we plot the total time \(T_{*}\) required to achieve state preparation with \(99\%\) fidelity as a function of \(N\). \(T_{*}\) is calculated by solving for the root of the function \(f(T)=\left|\,\langle\psi(T)|\xi=1\rangle\,\right|-0.99\) using Newton's method. We find that \(T_{*}\) appears to approach a constant value \(T_{*}\approx 25\). This is consistent with the behavior of the gap observed in Fig. 14.
These results lead to the conclusion that adiabatic state preparation should be possible for the state \(\left|\xi=1\right\rangle\). However, it is important to compare the quantum resources required to implement adiabatic evolution on a QPU against those of the methods discussed in Sec. III. In particular, if the evolution operator is approximated using a Trotter product formula, the number of Trotter steps required to achieve Trotter error \(\epsilon\) after evolution by a time \(t\) scales as \(t^{2}/\epsilon\). Inserting \(t=T_{*}\approx 25\) and demanding a Trotter error \(\epsilon=0.01\) yields an estimated \(62\,500\) Trotter steps. Thus the needed circuit depth is constant, but prohibitively large for execution on present-day QPUs. This fact is a well-known shortcoming of the adiabatic algorithm [88] which partially motivated the development of alternative approaches like QAOA [89].
|
2303.06472 | Shape index, Brouwer degree and Poincaré-Hopf theorem | In this paper we study the relationship of the Brouwer degree of a vector
field with the dynamics of the induced flow. Analogous relations are studied
for the index of a vector field. We obtain new forms of the Poincaré-Hopf
theorem and of the Borsuk and Hirsch antipodal theorems. As an application, we
calculate the Brouwer degree of the vector field of the Lorenz equations in
isolating blocks of the Lorenz strange set. | Héctor Barge, José M. R. Sanjurjo | 2023-03-11T17:59:59Z | http://arxiv.org/abs/2303.06472v2 | # Shape index, Brouwer degree and Poincare-Hopf theorem
###### Abstract.
In this paper we study the relationship of the Brouwer degree of a vector field with the dynamics of the induced flow. Analogous relations are studied for the index of a vector field. We obtain new forms of the Poincare-Hopf theorem and of the Borsuk and Hirsch antipodal theorems. As an application, we calculate the Brouwer degree of the vector field of the Lorenz equations in isolating blocks of the Lorenz strange set.
Key words and phrases: Shape index, Brouwer degree, Poincare-Hopf theorem, Non-saddle set.

2020 Mathematics Subject Classification: 37B30, 37B25, 55M25.

The authors are partially supported by the Spanish Ministerio de Ciencia, Innovacion y Universidades (grant PID2021-126124NB-I00).
## Introduction
The aim of this paper is to study the relationship of the Brouwer degree of a vector field with the dynamics of the induced flow, in particular with the dynamical and topological properties of the isolated invariant sets and their unstable manifolds. Analogous relations are studied for the index of a vector field, obtaining in this way new forms of the Poincare-Hopf theorem. This classic result has attracted a considerable amount of attention in the past ten years, and several results in the same spirit have been obtained in different contexts (see for instance [10, 16, 17, 20, 23]).
As consequences of these relations we also obtain generalizations of Borsuk's and Hirsch's antipodal theorems for domains that are isolating blocks. We calculate the Brouwer degree and the index of vector fields in several situations of dynamical and topological significance. Some applications include the detection of linking orbits in attractor-repeller decompositions of isolated invariant compacta and the calculation of the Brouwer degree of the vector field of the Lorenz equations in isolating blocks of the Lorenz strange set. Furthermore, we present an expression of the shape index, originally defined by Robbin and Salamon [25], in terms of the Euclidean topology. This new expression is quite intuitive and easy to handle.
We consider flows \(\varphi:\mathbb{R}^{n}\times\mathbb{R}\longrightarrow\mathbb{R}^{n}\) in the Euclidean space, induced by smooth vector fields \(F:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}\). We will use throughout the paper some basic notions of dynamical systems (see [6]) and the Conley index theory (see [8, 26]).
We shall make use of the concepts of \(\omega\)-limit and \(\omega^{*}\)-limit of a compactum \(X\subset\mathbb{R}^{n}\) defined as
\[\omega(X)=\bigcap_{t\geq 0}\overline{X[t,+\infty)},\quad\omega^{*}(X)=\bigcap_{ t\leq 0}\overline{X(-\infty,t]}.\]
We recall that a compact invariant set \(K\subset\mathbb{R}^{n}\) is said to be _isolated_ whenever it is the maximal invariant subset of some neighborhood \(N\) of itself. A neighborhood \(N\) satisfying this property is said to be an _isolating neighborhood_ of \(K\).
If a pair \((X,A)\) is of finite type, its _Euler characteristic_ is defined as follows
\[\chi(X,A)=\sum_{k}(-1)^{k}\operatorname{rk}\check{H}^{k}(X,A).\]
We shall make use of the following property of the Euler characteristic [31, Exercise B.1, p. 205]: If two of the three \((X,A)\), \(X\), \(A\), are of finite type, then so is the third and
\[\chi(X)=\chi(X,A)+\chi(A).\]
Notice that if \((X,A)\) is a pair of manifolds or polyhedra, then the Euler characteristic defined in an analogous way using singular cohomology coincides with the previous one.
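For example, if \(X\) is a closed disk and \(A\) its boundary circle, then \(\chi(X)=1\), \(\chi(A)=0\) and \(\chi(X,A)=1\), since \(\check{H}^{k}(X,A)\) has rank one for \(k=2\) and is trivial otherwise.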
The _Conley index_ of an isolated invariant set \(K\), denoted \(h(K)\), is defined as the pointed homotopy type of the quotient space \((N/L,[L])\) where \(N\) is any isolating block for \(K\). Notice that, while different isolating blocks may represent different homotopy types, the Conley index only depends on \(K\). The _cohomology index_ \(CH^{*}(K)\) is defined to be the Cech cohomology of the pointed space \((N/L,[L])\). Using the strong excision property of Cech cohomology we get that the cohomology index is isomorphic to \(\check{H}^{*}(N,L)\).
Given an isolating block \(N\) for an isolated invariant set \(K\), we shall denote by \(\deg(F,N)\) the degree of \(F_{|_{\mathring{N}}}\) and, if \(K\) has only a finite number of singularities, by \(I(F_{|_{N}})\) the total index of \(F_{|_{N}}\), that is, the sum of the indices of the singularities. When we refer to \(I(F_{|_{N}})\), we say that the index is defined if \(F_{|_{N}}\) has a finite number of singular points. For a detailed treatment of mapping degree theory, including the index theory of vector fields, see the book by Outerelo and Ruiz [22].
We shall make use of the following result, obtained by Srzednicki [33], McCord [19], and Fotouhi and Razvan [24] at different levels of generality, which relates the degree of a vector field near an isolated invariant set with its Conley index. We present here a slightly different but equivalent version of the one presented by Izydorek and Styborski in [14, Theorem 4.2].
**Theorem 0.1**.: _Let \(\varphi:\mathbb{R}^{n}\times\mathbb{R}\longrightarrow\mathbb{R}^{n}\) be a flow induced by a smooth vector field \(F\) defined on \(\mathbb{R}^{n}\). Suppose that \(K\) is an isolated invariant set for \(\varphi\) and \(N\) an isolating block for \(K\). Then,_
\[\deg(F,N)=(-1)^{n}\chi(h(K)).\]
_Moreover, if \(K\) contains only a finite number of equilibria then_
\[I(F_{|_{N}})=(-1)^{n}\chi(h(K)).\]
Notice that the Euler characteristic of the Conley index is well-defined, taking into account that the pair \((N,L)\) used to compute it can be chosen to be a pair of compact manifolds. The second part of the statement follows from the fact that the Brouwer degree is, by the additivity property, the sum of the indices of all the singular points of \(F\) in \(N\).
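As a quick illustration of the sign conventions: if \(K=\{0\}\) is a hyperbolic attracting equilibrium (for instance of \(F(x)=-x\)) and \(N\) is a small closed ball around it, the exit set is empty, \(\check{H}^{*}(N,L)\) has rank one in degree \(0\), so \(\chi(h(K))=1\) and \(\deg(F,N)=(-1)^{n}\), the index of a sink. If instead \(K\) is a repelling equilibrium (\(F(x)=x\)), then \(L=\partial N\), \(\chi(h(K))=\chi(N,\partial N)=(-1)^{n}\) and \(\deg(F,N)=1\), the index of a source.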
Finally, we will also use some elementary facts from Borsuk's homotopy theory (named Shape Theory by him). This theory was introduced by K. Borsuk in 1968 in order to study homotopy properties of compacta with bad local behaviour for which the classical homotopy theory is not well suited. We are not going to make an extensive use of Borsuk's homotopy theory; in particular, we are only interested in the following very simple situation: Consider a compact metric space \(K\), a closed subspace \(K_{0}\) and a sequence of maps \(f_{k}:K\longrightarrow K\) such
that \(f_{k|_{K_{0}}}:K_{0}\longrightarrow K_{0}\) (i.e. \(f_{k|_{K_{0}}}\) maps \(K_{0}\) to itself) and the following conditions hold for almost every \(k\):
1. For every neighborhood \(U\) of \(K_{0}\) in \(K\) we have \(f_{k}\simeq f_{k+1}\) in \(U\).
2. \(f_{k}\simeq\operatorname{id}_{K}.\)
3. \(f_{k|_{K_{0}}}\simeq\operatorname{id}_{K_{0}}\) in \(K_{0}\).
Then \(K\) and \(K_{0}\) have the same shape (there is an analogous statement for pointed shape). We shall use the notation \(\operatorname{Sh}(K)=\operatorname{Sh}(K_{0})\) to denote that both \(K\) and \(K_{0}\) have the same shape. We shall also make use of the following facts from shape theory:
1. If \(X\) and \(Y\) have the same homotopy type, then they have the same shape.
2. If \(X\) and \(Y\) are polyhedra (or more generally, ANR), \(X\) and \(Y\) have the same shape if and only if they have the same homotopy type.
3. If \(X\) and \(Y\) have the same shape, then they have isomorphic Cech cohomology groups.
More information about the theory of shape can be found in the books by Borsuk [7], by Mardesic and Segal [18] and by Dydak and Segal [11]. Some applications of shape theory to dynamics can be seen in [15, 28].
## 1. Shape index, initial sections and the degree of a vector field
In [3], the authors proved that some parts of the unstable manifold of an isolated invariant set admit sections that carry a considerable amount of information. These sections enable the construction of parallelizable structures which facilitate the study of the flow.
**Definition 1.1**.: Let \(K\) be an isolated invariant compactum and let \(S\) be a compact section of the truncated unstable manifold \(W^{u}(K)\setminus K\). Then \(S\) is said to be an _initial section_ provided that \(\omega^{*}(S)\subset K.\)
If \(S\) is an initial section we define
\[I^{u}_{S}(K)=S(-\infty,0].\]
Obviously, \(I^{u}_{S}(K)=\{x\in W^{u}(K)\setminus K\mid xt\in S\text{ with }t\geq 0\}\). In accordance with this terminology we say that \(I^{u}_{S}(K)\cup K\) is an _initial part of the unstable manifold_ of \(K\) and we denote it by \(W^{u}_{S}(K).\) In [3] it was proved that, although \(I^{u}_{S}(K)\) depends on \(S\), all the initial parts have basically the same structure. More specifically, if \(S\) and \(T\) are initial sections of \(W^{u}(K)\), the pairs \((W^{u}_{S},S)\) and \((W^{u}_{T},T)\) are homeomorphic.
Analogous notions known as _final section_ and _final part_ of the stable manifold can be defined and have similar properties.
If \(N\) is an isolating block for \(K\), we denote by \(N^{-}\) the negative asymptotic set, that is, the set \(\{x\in N\mid xt\in N\text{ for every }t\leq 0\}\). Set \(n^{-}=N^{-}\cap L\). It is easy to see that \(N^{-}\) is an initial part of the unstable manifold with initial section \(n^{-}\). The positive asymptotic set \(N^{+}\) is defined in an analogous way and is a final part of the stable manifold with final section \(n^{+}=N^{+}\cap L^{\prime}\), where \(L^{\prime}\) denotes the entrance set of \(N\).
In this paper we make some use of the shape index of an invariant isolated set \(K\). The shape index \(S(K)\) was introduced by Robbin and Salamon in [25] as \(\operatorname{Sh}(N/L,*)\), where \(*=[L]\). The cohomology of the shape index is the classical cohomological (Conley) index. In [29], the second author showed that the shape index can be represented in terms of compact sections
of the unstable manifold endowed with the intrinsic topology (as defined also by Robbin and Salamon). The reliance on the intrinsic topology is a substantial constraint, since this topology is not very intuitive and is difficult to handle, so we believe that an expression of the shape index in terms of the Euclidean (or extrinsic) topology is much more useful; we obtain such an expression in the first result of the paper. A crucial element of this expression is the use of the initial sections of truncated manifolds as defined above.
**Theorem 1.2**.: _Let \(\varphi:\mathbb{R}^{n}\times\mathbb{R}\longrightarrow\mathbb{R}^{n}\) be a flow (not necessarily differentiable) and \(K\) an isolated invariant set of \(\varphi\). Let \(W^{u}(K)\) be the unstable manifold of \(K\), \(S\) an initial section of \(W^{u}(K)\) and \(W^{u}_{S}(K)\) the corresponding initial part of the unstable manifold. Then the shape index \(S(K)\) is \(\mathrm{Sh}(W^{u}_{S}(K)/S,*)\), that is, the pointed shape of the quotient space \(W^{u}_{S}/S\), where \(*=[S]\). Furthermore, if \(CS\) is the cone over \(S\) then \(S(K)=\mathrm{Sh}(W^{u}_{S}\cup CS,*)\), where \(*\) is the vertex of the cone._
Proof.: Let \(N\) be an isolating block of \(K\) and \(L\) its exit set. Since all the pairs \((W^{u}_{S},S)\), where \(S\) is an initial section, are homeomorphic, we can limit ourselves to the pair \((N^{-},n^{-})\). Let \(\alpha:N\setminus N^{+}\longrightarrow\mathbb{R}\) be the map defined by
\[\alpha(x)=\max\{t\in\mathbb{R}\mid x[0,t]\subset N\}.\]
Then for every \(k\in\mathbb{N}\cup\{0\}\) we define the map \(f_{k}:N\longrightarrow N\) by
\[f_{k}(x)=\begin{cases}xk&\text{if}\quad x[0,k]\subset N\\ x\alpha(x)&\text{otherwise}.\end{cases}\]
The map \(f_{k}\) is continuous and fixes all points in \(L\) (this is essentially Wazewski's Lemma [34, Theorem 2]). Suppose that \(U\) is an open neighborhood of \(N^{-}\cup L\) in \(N\). We claim that there is a \(k_{0}\in\mathbb{N}\) such that \(\mathrm{Im}\,f_{k}\subset U\) and \(f_{k}\simeq f_{k+1}\) in \(U\) for every \(k\geq k_{0}\), and that the homotopy leaves all points in \(L\) fixed. In order to prove it we show that there is a positive \(s_{0}\) such that \(N^{+}[s_{0},\infty)\subset U\). Otherwise, there would be sequences \(x_{n}\in N^{+}\) and \(s_{n}\longrightarrow\infty\) such that \(x_{n}s_{n}\to y\in N^{+}\setminus U\). Then, \(\gamma^{-}(y)\subset N\) and, since \(y\in N^{+}\), the whole trajectory \(\gamma(y)\) would be contained in \(N\setminus K\), in contradiction with the assumption that \(N\) is an isolating block of \(K\). Furthermore, there is an \(s_{1}\geq s_{0}\) with the property that \(xt\in U\) for every \(x\in N\setminus N^{+}\) and for every \(t\) such that \(s_{1}\leq t\leq\alpha(x)\). Otherwise there would be sequences \(x_{n}\in N\), \(t_{n}\rightarrow\infty\) with \(x_{n}t_{n}\notin U\), \(x_{n}t_{n}\to y\in N\) and \(x_{n}[0,t_{n}]\subset N\). But this would imply that \(y\in N^{-}\), in contradiction with the fact that \(y\notin U\), as the limit of \(x_{n}t_{n}\). We obtain from this that \(\mathrm{Im}\,f_{k}\subset U\) for \(k\geq s_{1}\) and select an index \(k_{0}\geq s_{1}.\) It is clear that the homotopy \(h_{k}:N\times[0,1]\to N\) defined by
\[h_{k}(x,t)=\begin{cases}f_{k}(x)t&\text{if}\quad f_{k}(x)[0,t]\subset N\\ x\alpha(x)&\text{otherwise}\end{cases}\]
links \(f_{k}\) and \(f_{k+1}\) in \(U\) leaving all points in \(L\) fixed. Furthermore, the map
\[h(x,t)=\begin{cases}xtk&\text{if}\quad x[0,tk]\subset N\\ x\alpha(x)&\text{otherwise},\end{cases}\]
defines a homotopy \(h:N\times[0,1]\longrightarrow N\) linking \(\operatorname{id}_{N}\) with \(f_{k}.\) Similarly, it can be seen that \(f_{k|_{N^{-}\cup L}}\) is a map \(N^{-}\cup L\longrightarrow N^{-}\cup L\) homotopic to \(\operatorname{id}_{N^{-}\cup L},\) with the homotopy fixing all points in \(L.\)
Notice that the subspace \((N^{-}\cup L)/L\subset N/L\) can be identified with \(N^{-}/n^{-}\). We use the notation \(*\) to designate both the point \([L]\in N/L\) and the point \([n^{-}]\in N^{-}/n^{-}\). Now consider the composition \(\bar{f}_{k}=p\circ f_{k}:N\longrightarrow N/L,\) where \(p:N\longrightarrow N/L\) is the natural projection. This map induces a map \(\hat{f}_{k}:N/L\longrightarrow N/L\) such that \(\hat{f}_{k}\circ p=\bar{f}_{k}\). We then have a sequence of maps \(\hat{f}_{k}:N/L\longrightarrow N/L\) such that \(\hat{f}_{k|_{N^{-}/n^{-}}}:N^{-}/n^{-}\longrightarrow N^{-}/n^{-}\) and the following conditions are satisfied for almost every \(k\):
1. For every neighborhood \(\hat{U}\) of \(N^{-}/n^{-}\) in \(N/L\) we have \(\hat{f}_{k}\simeq\hat{f}_{k+1}\) in \(\hat{U}\).
2. \(\hat{f}_{k}\simeq\operatorname{id}_{N/L}.\)
3. \(\hat{f}_{k|_{N^{-}/n^{-}}}\simeq\operatorname{id}_{N^{-}/n^{-}}.\)
4. All the homotopies leave the point \(*\) fixed.
It follows from this that \(\operatorname{Sh}(N/L,*)=\operatorname{Sh}(N^{-}/n^{-},*)\) and, therefore, \(\operatorname{Sh}(W^{u}_{S}/S,*)=S(K)\) (where \(*=[S].\))
To finish the proof it remains to observe that by [18, Corollary 3, pg. 247], the natural projection \(W^{u}_{S}\cup CS\longrightarrow W^{u}_{S}/S\) is a pointed shape equivalence and, therefore, \(S(K)=\operatorname{Sh}(W^{u}_{S}\cup CS,*),\) where \(*\) is the vertex of the cone.
**Remark 1.3**.: The statement in Theorem 1.2 does not hold if the section \(S\) is not initial. An example of a compact section \(S\) is given in [3, Fig 2, pg. 840] which is not initial and such that \(Sh(W^{u}_{S}(K)/S,*)\) is not the shape index \(S(K).\)
By combining Theorem 1.2 with Theorem 0.1 we obtain the following corollary, which relates the degree of a vector field (or its index, when defined) near an isolated invariant set to the Euler characteristic of the set and to that of an initial section.
**Corollary 1.4**.: _Let \(\varphi:\mathbb{R}^{n}\times\mathbb{R}\longrightarrow\mathbb{R}^{n}\) be a flow induced by the smooth vector field \(F:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}\) and let \(K\) be an isolated invariant set of \(\varphi\). Let \(W^{u}(K)\) be the unstable manifold of \(K\), \(S\) an initial section of \(W^{u}(K)\) and \(W^{u}_{S}(K)\) the corresponding initial part of the unstable manifold. If \(N\) is an isolating block of \(K\) then_
\[\deg(F,N)=(-1)^{n}(\chi(K)-\chi(S)). \tag{1}\]
_Moreover, if the index \(I(F_{|_{N}})\) is defined then_
\[I(F_{|_{N}})=(-1)^{n}(\chi(K)-\chi(S)). \tag{2}\]
Proof.: First observe that, if \(L\) is the exit set of \(N\) we have that \(\chi(N,L)\) is defined and
\[\chi(S(K))=\chi(N,L)=\chi(h(K)).\]
As before, we can assume that \(W^{u}_{S}(K)=N^{-}\) and \(S=n^{-}.\) By an argument similar to that used in the proof of Theorem 1.2, it is easy to see that \(\operatorname{Sh}(N^{-})=\operatorname{Sh}(K)\). Then \(\check{H}^{r}(N^{-})=\check{H}^{r}(K)\) for every \(r\) and, thus, \(\chi(N^{-})=\chi(K).\) Moreover, Theorem 1.2 ensures that \(S(K)=\operatorname{Sh}(N^{-}/n^{-},*)\). Hence, as a consequence of the strong excision property of Cech cohomology we obtain that
\[\chi(N,L)=\chi(N^{-},n^{-})=\chi(N^{-})-\chi(n^{-})=\chi(K)-\chi(S).\]
Notice that, since \(\chi(N^{-},n^{-})\) and \(\chi(N^{-})\) are defined so is \(\chi(n^{-})\) (hence, \(\chi(S)\)).
The equalities (1) and (2) follow from Theorem 0.1.
This corollary makes it possible, in many cases, to establish a direct relation between the Brouwer degree (and the total index) and the topology of the invariant set \(K\), as we show in various results of the paper. In other cases we also need some knowledge of the initial section of the unstable manifold, which is often easy to compute. This gives an alternative to the method provided by Theorem 0.1 that has some advantages in certain cases.
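Returning to the planar saddle considered above: here \(K=\{0\}\), the truncated unstable manifold consists of two orbits and an initial section \(S\) is a pair of points, so \(\chi(K)=1\), \(\chi(S)=2\) and formula (1) gives
\[\deg(F,N)=(-1)^{2}(\chi(K)-\chi(S))=1-2=-1,\]
in agreement with the computation via the Conley index.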
A direct consequence of Corollary 1.4 is the following particular case of a result obtained by Srzednicki [33, Lemma 6.2].
**Corollary 1.5**.: _If \(K\) is a continuum in \(\mathbb{R}^{2}\) then \(\deg(F,N)\leq\chi(K)\). Also \(I(F_{|_{N}})\leq\chi(K)\) when \(I(F_{|_{N}})\) is defined._
Proof.: In [3] it was shown that for \(n=2\) the section \(S\) is a disjoint finite union of circles and (possibly degenerate) topological intervals and, consequently, \(\chi(S)\geq 0.\) Then \(\deg(F,N)=\chi(K)-\chi(S)\leq\chi(K)\).
The following proposition deals with the important particular case of Corollary 1.4 in which the initial part of the unstable manifold **is a genuine manifold** whose boundary is the initial section \(S\).
**Proposition 1.6**.: _Suppose that \(W^{u}_{S}\) is an \(m\)-dimensional manifold with boundary \(\partial W^{u}_{S}=S\). Then_
\[\deg(F,N)=(-1)^{(n+m)}\chi(K).\]
_In particular, \(\deg(F,N)\) agrees with \(\chi(K)\) if the parities of \(n\) and \(m\) are the same and with \(-\chi(K)\) otherwise. The same statement is valid for \(I(F_{|_{N}})\) when defined._
Proof.: Since by Corollary 1.4
\[\deg(F,N)=(-1)^{n}(\chi(K)-\chi(S)),\]
we only have to compute \(\chi(S)\). Taking into account that \(W^{u}_{S}(K)\) is a genuine \(m\)-manifold whose boundary is \(S\), it follows that \(S\) is a closed \((m-1)\)-manifold. If \(m\) is even, then \(m-1\) is odd and Poincare duality ensures that \(\chi(S)=0.\) On the other hand, if \(m\) is odd, Lefschetz duality, together with the fact that \(\operatorname{Sh}(W^{u}_{S})=\operatorname{Sh}(K)\), ensures that
\[\chi(W^{u}_{S},S)=-\chi(W^{u}_{S})=-\chi(K).\]
As a consequence, \(\chi(S)=2\chi(K)\) and the result follows.
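As a consistency check, consider a hyperbolic equilibrium of a smooth field on \(\mathbb{R}^{n}\) whose unstable manifold has dimension \(m\). Here \(W^{u}_{S}\) may be taken to be a closed \(m\)-disk with \(S=\partial W^{u}_{S}\cong S^{m-1}\), and \(\chi(K)=1\), so Proposition 1.6 yields \(\deg(F,N)=(-1)^{n+m}\). This agrees with the classical formula \(\operatorname{sign}\det DF(p)=(-1)^{n-m}\) for the index of a nondegenerate zero, since \((-1)^{n+m}=(-1)^{n-m}\).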
The classical Poincare-Hopf Theorem is the most important particular case of Proposition 1.6:
**Corollary 1.7**.: _If \(F\) points outward in \(\partial N\) and \(I(F_{|_{N}})\) is defined then \(I(F_{|_{N}})=\chi(N)\). The same holds for \(\deg(F,N)\). In this case there is no requirement for \(I(F_{|_{N}})\) to be defined._
Proof.: Since \(F\) points outwards in \(\partial N\) it follows that the maximal invariant set contained in \(N\) must be a repeller. Moreover, \(N\) is a negatively invariant neighborhood of \(K\) and, as a consequence, we can take \(W^{u}_{S}=N\), \(S=\partial N\), and \(n=m\). Since \(\check{H}^{*}(W^{u}_{S})=\check{H}^{*}(K)\) the result follows from Proposition 1.6.
The following result refers to flows that have a global repeller.
**Corollary 1.8**.: _If \(\varphi\) has a global repeller \(K\) then \(\deg(F,N)=1\) for every isolating block containing \(K\) and \(I(F_{|_{N}})=1\) when defined._
Proof.: If \(K\) is the global repeller of \(\varphi\) then \(K\) has the Cech cohomology groups of a point by [15, Theorem 3.6]. Since the degree does not depend on the choice of the isolating block, we may assume that \(N\) is negatively invariant, i.e, such that \(N=N^{-}\). Hence, \(\chi(N)=\chi(K)=1\) and the result follows from Corollary 1.7.
In dimension \(2\) we obtain a kind of converse of the Poincare-Hopf theorem and a nice characterization of the flows such that \(I(F_{|_{N}})=\chi(N)\).
**Proposition 1.9**.: _If \(K\) is a continuum in \(\mathbb{R}^{2}\) and \(I(F_{|_{N}})\) is defined then the vector field \(F\) is tangent to \(\partial N\) in exactly \(2(\chi(N)-I(F_{|_{N}}))\) points. As a consequence, \(I(F_{|_{N}})=\chi(N)\) if and only if \(F\) either points outward or inward in every component of \(\partial N\)._
Proof.: The first part of the statement follows from the fact that the points of tangency are exactly the points of \(L\cap L^{\prime}=\partial L\), where \(L^{\prime}\) is the entrance set. Since \(L\) is a compact \(1\)-dimensional manifold, it is a disjoint union of circles and closed intervals. Hence, \(\partial L\) consists of the endpoints of each interval component of \(L\). Since each interval has exactly two endpoints and \(\chi(L)\) is just a count of the number of interval components of \(L\), the result follows from Theorem 0.1. The second part of the statement is a direct consequence of this discussion.
**Example 1.10**.: Let \(\varphi\) be a flow in \(\mathbb{R}^{2}\) induced by a vector field \(F\) and suppose that \(K\) is an isolated periodic trajectory. Then, it is not difficult to see that \(K\) admits an isolating block \(N\) that is a closed annulus with two different boundary components, each contained in a different component of \(\mathbb{R}^{2}\setminus K\). Since \(K\) does not contain fixed points, then
\[I(F_{|_{N}})=0=\chi(N),\]
and, Proposition 1.9 ensures that the vector field points either outward or inward in every component of \(\partial N\). Hence, we have three mutually exclusive possibilities:
1. \(F\) points inward in both components of \(\partial N\) and, hence, \(K\) is an attractor.
2. \(F\) points outward in both components of \(\partial N\) and, hence, \(K\) is a repeller.
3. \(F\) points inward in one component of \(\partial N\) and outward in the other. In this case \(K\) is neither an attractor nor a repeller.
Although these three possibilities cannot be distinguished using the index alone, they can be distinguished by using the Conley index. Indeed, in the first case, \(L=\emptyset\) and, hence, the effect of collapsing is equivalent to taking the disjoint union of \(N\) with a point \(\{*\}\) not belonging to \(N\). Since \(N\) is an annulus, it has the homotopy type of the circle \(S^{1}\) and, hence, the Conley index of \(K\) is the pointed homotopy type of \((S^{1}\cup\{*\},*)\) where \(*\) is a point that does not belong to \(S^{1}\).
In the second case \(L=\partial N\) and \((N/L,[L])\) is a pinched torus that is pointed homotopy equivalent to the wedge \((S^{2}\lor S^{1},*)\).
Finally, in the third case, \(L\) is just one component of \(\partial N\). Since \(N\) is homeomorphic to the product \(S^{1}\times[0,1]\), \((N/L,[L])\) is just the cone \((CS^{1},*)\) that is contractible. It follows that the Conley index of \(K\) is trivial.
This shows that the Conley index is a finer invariant than the index of a vector field.
## 2. Brouwer degree of vector fields near non-saddle sets
In this section we study the Brouwer degree of a vector field in a vicinity of a special class of isolated invariant sets called non-saddle.
We start by recalling that an invariant set \(K\) is said to be _non-saddle_ if for every neighborhood \(U\) of \(K\) there exists a neighborhood \(V\) of \(K\) such that for all \(x\in V\) either \(x[0,+\infty)\subset U\) or \(x(-\infty,0]\subset U\). Otherwise \(K\) is said to be _saddle_. We shall only consider non-saddle sets that are also isolated. Attractors, repellers and unstable attractors with mild forms of instability are some examples of non-saddle sets. Isolated non-saddle sets are characterized by possessing arbitrarily small isolating blocks of the form \(N=N^{+}\cup N^{-}\) (see [5, Proposition 3]). Moreover, if \(K\) is connected, every isolating block is, in fact, of this form. Notice that if \(N\) is an isolating block of this form, the vector field points either inward or outward in each connected component of \(\partial N\). Using the homotopies provided by the flow, it easily follows that \(\check{H}^{*}(K)\cong H^{*}(N)\) and, therefore, \(K\) is of finite type. Another property that we shall use in the sequel is that the union of the components of the boundary of an isolating block of the form \(N=N^{+}\cup N^{-}\) in which the vector field points outward is an initial section of the unstable manifold of \(K\). In an analogous way, the union of those components of \(\partial N\) in which the flow points inward is a final section of the stable manifold. Hence, for isolated non-saddle sets of smooth flows on \(\mathbb{R}^{n}\), initial and final sections of the unstable and stable manifolds are closed manifolds of dimension \(n-1\). For more information on isolated non-saddle sets the reader can see [2, 4, 5].
In view of this, the following result is a far-reaching generalization of the Poincare-Hopf Theorem. Furthermore, it provides a nice characterization of non-saddle sets for flows in the plane.
**Proposition 2.1**.: _Suppose that \(K\) is a non-saddle continuum, \(N\) is a connected isolating block of \(K\) and \(S\) and \(S^{*}\) an initial and a final section of the truncated unstable and stable manifolds of \(K\) respectively. Suppose also that \(I(F_{|_{N}})\) is defined. Then,_
1. _If the dimension_ \(n\) _is even then_ \(I(F_{|_{N}})=\chi(K)=\chi(N)\)_._
2. _If_ \(n\) _is odd then_ \(I(F_{|_{N}})=\frac{1}{2}(\chi(S)-\chi(S^{*}))=\chi(N)-\chi(S^{*})\) _(note that if_ \(K\) _is a repeller then_ \(\chi(S^{*})=0\) _and, therefore,_ \(I(F_{|_{N}})=\chi(N)\)_, in agreement with Corollary 1.7)._
_Moreover, if \(n=2\) and \(K\) is an arbitrary isolated invariant continuum then \(I(F_{|_{N}})=\chi(K)\) if and only if \(K\) is non-saddle._
_An analogous statement is valid for \(\deg(F,N)\). In this case there is no requirement that \(I(F_{|_{N}})\) be defined._
Proof.: We may assume without loss of generality that \(S=n^{-}\) and \(S^{*}=n^{+}\). If \(n\) is even, since \(S\) is a closed manifold of odd dimension it follows that \(\chi(S)=0\) and, hence, \(I(F_{|_{N}})=\chi(K)=\chi(N)\).
Suppose now that \(n\) is odd. Taking into account that \(\partial N=S\cup S^{*}\), Lefschetz duality applied to the pair \((N,\partial N)\) yields
\[H^{*}(N,S\cup S^{*})=H_{n-*}(N),\]
where the homology and the cohomology are taken in dual dimensions relative to \(n\).
Since \(n\) is odd we deduce from the former expression that
\[\chi(N,S\cup S^{*})=-\chi(N).\]
On the other hand,
\[\chi(N,S\cup S^{*})=\chi(N)-\chi(S)-\chi(S^{*}).\]
Summing up,
\[\chi(N)=\frac{1}{2}(\chi(S)+\chi(S^{*})).\]
Since \(H^{*}(N)\cong\check{H}^{*}(K)\) we have that \(\chi(N)=\chi(K)\) and, using this fact in the formula from Corollary 1.4 we get that
\[I(F_{|_{N}})=\frac{1}{2}(\chi(S)-\chi(S^{*}))=\chi(N)-\chi(S^{*}).\]
Now suppose that \(n=2\) and \(K\) is an arbitrary isolated invariant continuum. We only have to see that the equality \(I(F_{|_{N}})=\chi(K)\) ensures the non-saddleness of \(K\), since the converse statement is just case (1). By [3, Theorem 10], \(K\) is non-saddle if and only if all the components of \(S\) are circles. The equality \(I(F_{|_{N}})=\chi(K)\) implies that \(\chi(S)=0\) and, thus, in this case, no component of \(S\) can be a (possibly degenerate) topological interval; consequently, every component must be a circle. Therefore, \(K\) is non-saddle.
In the following result, we use the Alexandrov (or one-point) compactification of the Euclidean space \(\mathbb{R}^{n}\cup\{\infty\}\) to show that under further assumptions more can be said about the degree of \(F\) and its index. We recall that \(\mathbb{R}^{n}\cup\{\infty\}\) is homeomorphic to the \(n-\)sphere \(S^{n}\).
**Proposition 2.2**.: _Suppose that \(K\) is a non-saddle continuum, \(n\) is even and every component of \((\mathbb{R}^{n}\cup\{\infty\})\setminus K\) is contractible (this always happens for \(n=2\)). Consider a connected isolating block \(N\) of \(K.\) Then \(\deg(F,N)\leq 1\), and \(I(F_{|_{N}})\leq 1\) when defined. Also, if \(\deg(F,N)=1\) then \(K\) is either an attractor or a repeller._
Proof.: Let \(k\) be the number of components of \((\mathbb{R}^{n}\cup\{\infty\})\setminus K\) (which is finite and coincides with \(1+\operatorname{rk}\check{H}^{n-1}(K)\) by Alexander duality). Since they are contractible, they have Euler characteristic equal to one. By Alexander duality
\[\chi(K)=\chi(\mathbb{R}^{n}\cup\{\infty\})-\chi((\mathbb{R}^{n} \cup\infty)\setminus K)\] \[=2-k\leq 1.\]
Then, Proposition 2.1 ensures that \(\deg(F,N)=\chi(K)\leq 1\). Furthermore, the equality \(2-k=1\) holds if and only if \(K\) does not disconnect \(\mathbb{R}^{n}\cup\{\infty\}\). In such a case, the only component of \(\mathbb{R}^{n}\cup\{\infty\}\setminus K\) is locally attracted or locally repelled by \(K\) by [5, Theorem 25]. In the first case \(K\) is an attractor and in the second \(K\) is a repeller.
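For instance, if \(K\) is an isolated periodic orbit in \(\mathbb{R}^{2}\) which is an attractor or a repeller (hence non-saddle), then \((\mathbb{R}^{2}\cup\{\infty\})\setminus K\) has \(k=2\) contractible components, so \(\chi(K)=2-k=0\) and \(\deg(F,N)=\chi(K)=0\), consistent both with the bound \(\deg(F,N)\leq 1\) and with the computation \(I(F_{|_{N}})=0\) of Example 1.10.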
Next we analyze the situation when the dimension \(n\) is odd and greater than one.
**Proposition 2.3**.: _Suppose that \(K\) is a non-saddle continuum, \(n>1\) is odd and every component of \((\mathbb{R}^{n}\cup\{\infty\})\setminus K\) is contractible, and let \(k\) be the number of connected components of \(\mathbb{R}^{n}\setminus K\). Consider a connected isolating block \(N\) of \(K.\) Then \(\deg(F,N)\leq k\), and \(I(F_{|_{N}})\leq k\) when defined. Furthermore, if \(I(F_{|_{N}})\) is defined,_
1. \(I(F_{|_{N}})=k\) _if and only if_ \(K\) _is a repeller._
2. \(I(F_{|_{N}})=-k\) _if and only if_ \(K\) _is an attractor._
3. \(I(F_{|_{N}})=0\) _if and only if_ \(K\) _decomposes_ \(\mathbb{R}^{n}\) _in an even number of components, half of them locally attracted by_ \(K\) _and half of them locally repelled._
_Analogous statements hold for \(\deg(F,N)\). In this case there is no requirement that \(I(F_{|_{N}})\) be defined._
Proof.: By [5, Theorem 25] we have that all the components of \((\mathbb{R}^{n}\cup\{\infty\})\setminus K\) are either locally attracted or locally repelled by \(K\). We denote by \(U\) the union of all the components of \((\mathbb{R}^{n}\cup\{\infty\})\setminus K\) which are locally repelled by \(K\) and by \(V\) the union of all the components which are locally attracted. Then there is an attractor \(A\subset U\) such that \(U\) is the basin of attraction of \(A\) and analogously, a repeller \(R\subset V\) such that \(V\) is the basin of repulsion of \(R\). Let \(k\) be the number of components of \(\mathbb{R}^{n}\setminus K\) and \(u\) the number of components of \(U\). Then, by [15, Theorem 3.6]\(H^{*}(U)=\check{H}^{*}(A)\) and, by Alexander duality, \(\check{H}^{*}(A)=H_{n-*}(U,U\setminus A).\) So, since \(n\) is odd, we have
\[\chi(A)=-\chi(U,U\setminus A)=-\chi(U)+\chi(U\setminus A)=-u+\chi(S).\]
For the last equality, we use the facts that all the components of \(U\) are contractible and that \(S\) is a strong deformation retract of \(U\setminus A\). Since \(\chi(A)=\chi(U)=u\) we obtain that \(\chi(S)=2u\).
By an analogous argument applied to \(R\) and \(V\) we obtain that \(\chi(S^{*})=2(k-u).\) So, by Proposition 2.1 we get
\[I(F_{|_{N}})=\frac{1}{2}\bigl(\chi(S)-\chi(S^{*})\bigr)=\frac{1}{2}\bigl(2u-2(k-u)\bigr)=2u-k,\]
and, since \(u\leq k\), we obtain that \(I(F_{|_{N}})\leq k\).
Also, \(I(F_{|_{N}})=k\) if and only if \(u=k\), which happens if and only if \(K\) is a repeller. On the other hand, the equality \(I(F_{|_{N}})=-k\) holds if and only if \(u=0\), which occurs if and only if \(K\) is an attractor. Finally \(I(F_{|_{N}})=0\) if and only if \(k=2u\), i.e., if and only if the number of components of \(\mathbb{R}^{n}\setminus K\) that are locally repelled by \(K\) matches the number of components that are locally attracted by \(K\).
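For instance, if \(K\) is a repelling equilibrium of a smooth field on \(\mathbb{R}^{3}\), then \((\mathbb{R}^{3}\cup\{\infty\})\setminus K\) has a single contractible component, so \(k=1\) and case (1) gives \(I(F_{|_{N}})=1\), the classical index of a nondegenerate source. An attracting equilibrium gives instead \(I(F_{|_{N}})=-k=-1\), matching the index of a sink in odd dimension.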
## 3. Brouwer degree and connecting orbits in attractor repeller decompositions
In this section we show how to use the Brouwer degree to detect the existence of connecting orbits in attractor-repeller decompositions. We also give an estimate of the Euler characteristics of the set of connecting orbits. Finally, we present an application of these results in order to calculate the Brouwer degree of the Lorenz vector field in an isolating block of the Lorenz strange set.
We recall that if \(K\) is an isolated invariant set and \(A\subsetneq K\) is an attractor for the restriction flow \(\varphi_{|_{K}}\), then the set
\[R=\{x\in K\mid\omega(x)\cap A=\emptyset\}\]
is non-empty and is a repeller for \(\varphi_{|_{K}}\). The pair \(\{A,R\}\) is called _attractor-repeller decomposition_ of \(K\). Notice that if \(K\neq A\cup R\) the orbit of any point \(x\notin A\cup R\) satisfies that \(\omega(x)\subset A\) and \(\omega^{*}(x)\subset R\). These kind of orbits are the so-called _connecting orbits_ between \(A\) and \(R\).
**Proposition 3.1**.: _Let \(\{A,R\}\) be an attractor-repeller decomposition of the isolated invariant set \(K\), let \(N\) be an isolating block of \(K\) and let \(S\) be an initial section of the truncated unstable manifold of \(K\). If \(\deg(F,N)\neq(-1)^{n}(\chi(A)+\chi(R)-\chi(S))\) then there exists an orbit in \(K\) connecting \(A\) and \(R\). Moreover, if \(C\) is the union of all connecting orbits then_
\[\chi(C)=\chi(A)+\chi(R)-\chi(K).\]
Proof.: We argue by contradiction. Suppose that there is no orbit in \(K\) connecting \(A\) and \(R\). Then \(K\) is the disjoint union of \(A\) and \(R\). As a consequence,
\[\chi(K)=\chi(A)+\chi(R).\]
Thus, Corollary 1.4 ensures that
\[\deg(F,N)=(-1)^{n}(\chi(A)+\chi(R)-\chi(S))\]
contradicting the hypothesis.
Let us compute the Euler characteristic of the set \(C\) of connecting orbits. Since \(C\) is parallelizable we can find a section \(C_{0}\) of \(C\). Define \(K_{1}=A\cup C_{0}[0,\infty)\) and \(K_{2}=R\cup C_{0}(-\infty,0].\) Then \(K=K_{1}\cup K_{2}\) and \(K_{1}\cap K_{2}=C_{0}\). By using the Mayer-Vietoris sequence
\[\cdots\longrightarrow\check{H}^{q}(K_{1}\cup K_{2})\longrightarrow\check{H} ^{q}(K_{1})\oplus\check{H}^{q}(K_{2})\longrightarrow\check{H}^{q}(K_{1}\cap K _{2})\longrightarrow\cdots\]
we readily get that
\[\chi(K)=\chi(K_{1})+\chi(K_{2})-\chi(C_{0}).\]
Now, \(C_{0}\) is a strong deformation retract of \(C\) and so \(\chi(C)=\chi(C_{0}).\) Moreover, arguing in the same way as in the proof of Theorem 1.2 we obtain that \(\operatorname{Sh}(K_{1})=\operatorname{Sh}(A)\) and \(\operatorname{Sh}(K_{2})=\operatorname{Sh}(R)\) and therefore \(\chi(K_{1})=\chi(A)\) and \(\chi(K_{2})=\chi(R)\). Hence, the result follows.
The Lorenz vector field \(F:\mathbb{R}^{3}\longrightarrow\mathbb{R}^{3}\) provides a simplified model of fluid convection dynamics in the atmosphere, and is given by
\[F(x,y,z)=(\sigma(y-x),rx-y-xz,xy-bz),\]
where \(\sigma,r\) and \(b\) are three real positive parameters corresponding respectively to the Prandtl number, the Rayleigh number and a dimensionless magnitude. We consider the so-called classical values \(\sigma=10\), \(b=8/3.\) In [32] it is shown that for values of \(r\) between \(13.926\ldots\) (which corresponds to the homoclinic bifurcation) and \(24.06\) (where another type of bifurcation occurs involving the two branches of the unstable manifold of the origin) a "strange set" \(\mathcal{L}\) originates that exhibits sensitive dependence on initial conditions. For these values of the parameter \(r\), the global attractor \(\Omega\) of the Lorenz system has an attractor-repeller decomposition \(\{A,\mathcal{L}\}\) where \(\mathcal{L}\) is the Lorenz strange set and \(A\) consists of two points. It follows from [12, Corollary 5.3] and [30, Theorem 7] that the Lorenz strange set has the shape of a wedge of two circles. Therefore, \(\chi(\mathcal{L})=-1\). We shall also make use of the fact that, since \(\Omega\) is the global attractor, \(\chi(\Omega)=1.\)
**Proposition 3.2**.: _Let \(F:\mathbb{R}^{3}\longrightarrow\mathbb{R}^{3}\) be the Lorenz vector field and let \(N\) be an isolating block of the Lorenz strange set \(\mathcal{L}\). Then \(\deg(F,N)=1.\) Moreover, if \(\hat{N}\) is an isolating block of the global attractor \(\Omega\), then \(\deg(F,\hat{N})=-1\)._
Proof.: Let \(C\) be the set of connecting orbits between \(A\) and \(\mathcal{L}\). Then, Proposition 3.1 together with the considerations made before the statement of the proposition ensure that
\[\chi(C)=\chi(A)+\chi(\mathcal{L})-\chi(\Omega)=0.\]
Since \(\Omega\) is an attractor, the unstable manifold of \(\mathcal{L}\) is contained in \(\Omega\) and, thus, agrees with \(C.\) Now, let \(N\) be an isolating block of \(\mathcal{L}\). By Corollary 1.4 we get
\[\deg(F,N)=(-1)^{3}(\chi(\mathcal{L})-\chi(S))=(-1)^{3}(\chi(\mathcal{L})-\chi( C))=1.\]
This contrasts with the situation for the global attractor: if \(\hat{N}\) is an isolating block of \(\Omega\) then \(\deg(F,\hat{N})=(-1)^{3}\chi(\Omega)=-1\).
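This last value can also be checked numerically: the index of a nondegenerate zero of a smooth vector field equals the sign of the Jacobian determinant at that zero, and all the equilibria of the Lorenz field lie in \(\Omega\). The following sketch (ours; \(r=20\) is just a representative choice in the range considered, and NumPy is assumed to be available) sums these signs over the three equilibria and recovers the total index \(-1\).

```python
import numpy as np

sigma, b, r = 10.0, 8.0 / 3.0, 20.0  # classical sigma and b; r in the range discussed above

def jacobian(p):
    # Jacobian matrix of the Lorenz field F(x, y, z) = (sigma(y-x), rx - y - xz, xy - bz)
    x, y, z = p
    return np.array([[-sigma, sigma, 0.0],
                     [r - z,  -1.0,  -x],
                     [y,       x,    -b]])

# Equilibria for r > 1: the origin and the symmetric pair C±
c = np.sqrt(b * (r - 1.0))
equilibria = [np.zeros(3), np.array([c, c, r - 1.0]), np.array([-c, -c, r - 1.0])]

# Index of a nondegenerate zero = sign of det DF at the zero
indices = [int(np.sign(np.linalg.det(jacobian(p)))) for p in equilibria]
print(indices, "total:", sum(indices))  # [1, -1, -1] total: -1, in agreement with deg(F, N̂) = -1
```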
**Remark 3.3**.: For values of the parameter \(r>24.06\) the strange set \(\mathcal{L}\) becomes an attractor (the Lorenz attractor) and its Cech cohomology (even its shape) remains that of a wedge of two circles. Since \(\mathcal{L}\) is now an attractor, \(S=\emptyset\) and
\[\deg(F,N)=(-1)^{3}\chi(\mathcal{L})=1.\]
## 4. A generalization of Borsuk's and Hirsch's antipodal theorems
We now present a result that is a form of Borsuk's [22, Theorem 5.2, pg. 163] and Hirsch's [22, Theorem 5.3, pg. 166] antipodal theorems for domains which are isolating blocks, involving the dynamics of the flow induced by \(F\) rather than the Brouwer degree of \(F\). Using this result it is possible to conclude from inspection of \(K\) and \(S\) the existence of a point \(x\) in the boundary \(\partial N\) such that the vector field \(F\) points in the same (or opposite) direction at \(x\) and \(-x\).
We say that an isolating block \(N\subset\mathbb{R}^{n}\) is _symmetric_ if \(x\in N\) if and only if \(-x\in N\), i.e., \(N\) is invariant for the antipodal action.
**Proposition 4.1**.: _Suppose that the isolating block \(N\) is symmetric and \(0\in N\). Then_
* _If_ \(\chi(K)\) _and_ \(\chi(S)\) _have the same parity then there is some_ \(x\in\partial N\) _such that_ \(F(x)\) _and_ \(F(-x)\) _point in the same direction. In particular, if_ \(N\) _is the unit ball_ \(B^{n}\) _(and thus_ \(\partial N=S^{n-1}\)_) and_ \(F_{|_{S^{n-1}}}\) _maps_ \(S^{n-1}\) _into_ \(S^{n-1}\) _then there is some_ \(x\in S^{n-1}\) _such that_ \(F(x)=F(-x)\)_._
* _If_ \(\chi(K)\) _and_ \(\chi(S)\) _have different parity then there is some_ \(x\in\partial N\) _such that_ \(F(x)\) _and_ \(F(-x)\) _point in opposite directions. In particular, if_ \(N\) _is the unit ball_ \(B^{n}\) _(and thus_ \(\partial N=S^{n-1}\)_) and_ \(F_{|_{S^{n-1}}}\) _maps_ \(S^{n-1}\) _into_ \(S^{n-1}\) _then there is some_ \(x\in S^{n-1}\) _such that_ \(F(x)=-F(-x)\)_._
Proof.: If \(\chi(K)\) and \(\chi(S)\) have the same parity then by Corollary 1.4 the degree of \(F_{|_{\hat{N}}}\) is even and (i) is a consequence of the Hirsch theorem. If \(\chi(K)\) and \(\chi(S)\) have different parities then by Corollary 1.4 the degree of \(F_{|_{\hat{N}}}\) is odd and (ii) is a consequence of the Borsuk antipodal theorem.
2309.02897 | Can empathy affect the attribution of mental states to robots? | This paper presents an experimental study showing that the humanoid robot NAO, in a condition already validated with regards to its capacity to trigger situational empathy in humans, is able to stimulate the attribution of mental states towards itself. Indeed, results show that participants not only experienced empathy towards NAO, when the robot was afraid of losing its memory due to a malfunction, but they also attributed higher scores to the robot emotional intelligence in the Attribution of Mental State Questionnaire, in comparison with the users in the control condition. This result suggests a possible correlation between empathy toward the robot and humans' attribution of mental states to it. | Cristina Gena, Francesca Manini, Antonio Lieto, Alberto Lillo, Fabiana Vernero | 2023-09-06T10:39:03Z | http://arxiv.org/abs/2309.02897v1 |

# Can empathy affect the attribution of mental states to robots?
###### Abstract.
This paper presents an experimental study showing that the humanoid robot NAO, in a condition already validated with regards to its capacity to trigger situational empathy in humans, is able to stimulate the attribution of mental states towards itself. Indeed, results show that participants not only experienced empathy towards NAO, when the robot was afraid of losing its memory due to a malfunction, but they also attributed higher scores to the robot emotional intelligence in the Attribution of Mental State Questionnaire, in comparison with the users in the control condition. This result suggests a possible correlation between empathy toward the robot and humans' attribution of mental states to it.
human robot interaction, empathy, mental state attribution
## 1. Introduction
According to Preston and De Waal [33] empathy can be defined as "the capacity to (a) be affected by and share the emotional state of another, (b) assess the reasons for the others' state, and (c) identify with the other, adopting his or her perspective". Following a shared categorization in psychology [29, 31], empathy can be divided in three major categories: (1) empathy as an affective response to others' emotional states (_affective empathy_), (2) empathy as the cognitive understanding of others' emotional states, as well as the ability to put oneself in the other person's shoes (_cognitive empathy_), and (3) empathy as composed of both an affective and a cognitive component. Other perspectives [13, 39, 40] distinguish empathy in _dispositional empathy_ and _situational empathy_. While the former is a character trait, namely a person's general tendency to empathize, the latter is the empathy that a human perceives towards another agent in a specific situation.
Indeed, empathy is a concept that affects multiple fields of knowledge, from social to developmental psychology, from clinical psychology to neuroscience. Since the discovery in 1996 of mirror neurons [14], interest in the concept of empathy has increased exponentially, also involving the field of human-robot interaction, see for instance [21, 23, 31, 38]. Similarly, during a human-robot interaction, we speak of the cognitive process when a robotic agent appears to individuals to be able to understand and imitate the emotions of others. The affective process occurs when the robotic agent manifests its emotions through voice, body posture, movements and gestures, adapted to the context of the situation.
Several experiments have been conducted over time to study empathy in human-robot interaction, and will be described in the following Section.
According to several neurological and psychological studies [4, 17, 35] the mirror neuron system is implicated in neurocognitive functions, such as social cognition, language, _empathy_, and _Theory of Mind (ToM)_ [3, 44], which is a human-specific ability that allows the attribution of mental states -intentions, thoughts, desires, and emotions- to oneself and others in order to explain and predict behavior. In particular, the attribution of mental states (AMS) has been defined as "the cognitive capacity to reflect upon one's own and other persons' mental states such as beliefs, desires, feelings and intentions" [2]. In everyday human-to-human interactions, such attributions are ubiquitous, although we are typically not aware of the fact that they are attributions --or of the fact that they are attributions of mental states. In the attribution of mental states to others, human and nonhuman, empathy may have a key role, in particular considering constructs such as understanding the perspective of others, which is part of the previously introduced cognitive empathy.
This paper presents an experimental study showing that the humanoid and social robot NAO, in an already validated situation, is able to trigger empathy in humans and that such empathic response impacts participants' beliefs about the robot's capability of experiencing emotions and, in general, their perception of the robot's mental qualities, namely their attribution of mental states to the robot. This experiment was inspired by the study conducted by Seo et al. [37] in 2015, which investigated situational empathy, that is, the empathy that a human perceives towards another agent -in this case, a robot- in a specific situation and, in particular, when a sudden and unpleasant event arises. However, our experimental goals were slightly different: on the one hand, following Seo et al. [37], we wanted to ascertain whether an unexpected event, i.e., a functional problem in the NAO robot, and the consequent emotions of fear displayed by the robot, could trigger situational empathy (H2), making participants feel sorry for NAO, in the eventuality of a loss of memory. On the other hand, we wanted to understand if the robot's ability to display -and, possibly, elicit- emotions, namely empathy, could impact participants' beliefs about the robot's ability to experience emotions (H1), as well as their assumptions regarding the attribution of mental states to the robot (H3).
Our results confirm that participants experienced empathic emotions towards NAO when the robot was feeling bad, thus implying that the so-called situational empathy was successfully elicited, as expected based on the research of Seo et al. [37]. We also administered to the participants the _Attribution of Mental States Questionnaire_ (AMS-Q) [27], which is considered suitable to be easily administered and sensitive for assessing the attribution of mental and sensory states to humans and nonhuman agents. Our results show that not only did subjects in the experimental situation empathize with the robot, but they also attributed higher emotional intelligence to the robot than the subjects in the control group, suggesting that its ability to display and elicit emotions made it appear more "human". These results are very promising and suggest a connection between empathy and mental state attribution also in the context of Human Robot Interaction (HRI). To the authors' knowledge, there are no other experiments that have explicitly linked these two aspects in the field of HRI.
This paper is organized as follows: Section 2 presents the related work, Section 3 discusses the differences between the original experiment and ours, Section 4 details the experiment, as well as the measures and metrics used to collect relevant data, while Section 5 describes the obtained results, and finally Section 6 discusses our conclusions and future work.
## 2. Related Work
Over time, a great deal of work has investigated the role of empathy in Human Robot Interaction (HRI), see for instance: Tapus and Mataric (Tapus and Mataric, 2010), Cramer et al. (Cramer et al., 2010), Marti et al. (Marti et al., 2011), Leite et al. (Leite et al., 2012), James et al. (James et al., 2012), etc. Much work has focused on the role of robot embodiment in triggering empathy. Given the large number of relevant works, we have chosen a few studies to discuss, each one representative of a different significant perspective on empathy and HRI, not only focusing on embodiment.
For instance, Cramer et al. (Cramer et al., 2010) conducted an experiment to test how correct or incorrect empathy affects the attitude humans have toward artificial agents. The Philips iCat robot, characterized by a synthetic female-like voice, was required to collaborate with a human user in an online game. In the experimental condition, the robot was programmed to express incorrect empathy, namely, empathy which is not congruent with the situation. Results confirm that incorrect empathic behavior can have a negative influence on a human's attitude toward an artificial agent. In particular, incorrect empathy may trigger distrust of the robot.
James et al. (James et al., 2012) aimed at demonstrating that voice can influence people's perception of an artificial agent. In this experiment, there were two conditions, one where the Healthbot robot had an empathic voice, and one where it was characterized by a robotic voice. Results show that 95% of the participants preferred interacting with the empathic robot, even if they noticed that the same verbal content was delivered in the two conditions, since the former also provided good emotional support. In fact, the empathic robot showed great interest in the patient and greater engagement during the interaction, and participants perceived kindness, empathy, concern, and encouragement in its tone of voice.
Kim et al. (Kim et al., 2017) demonstrated that not only a humanoid robot, but also an artificial agent that does not resemble a human, as is the case of the robot Mung, is capable of eliciting empathy. Mung consists of a body and two eyes, and was designed to recognize emotions within a human-human and human-robot interaction, as well as to express emotions itself. When such emotions are negative, they are exhibited in the form of a "bruise" that appears on Mung's body. In a subsequent experiment with Mung, Kwak et al. (Kwak et al., 2017) investigated whether the level of agency of the robot can affect humans' empathy toward it. The children participating in the experiment were asked to teach a task to the robot, which was then tested to see what it had learned. Whenever the robot made a mistake, participants were required to punish it with electric shocks, and the robot responded by expressing pain through bruises appearing on its body. With each incorrect answer, the voltage of the electric shock increased and, as a result, the number of bruises on Mung's body increased as well, and they changed color, from blue to red. Two conditions were compared, one where the robot acted as a mediator, conveying the emotions of a remote user ("mediated robot"), and the other where the robot acted as an autonomous entity capable of expressing its own emotions ("simulated robot"). Results showed that children empathized more with the mediated robot than with the simulated one. These data show that empathy toward robots is not only affected by human characteristics, but also by the robots' ability to act, suggesting that the higher the level of robot's agency, the lower its ability to elicit empathy in humans (Kwak et al., 2017).
As far as attribution of mental state is concerned, according to a study by Thellman et al. (Thellman et al., 2017), attributing mental states to robots is a complex socio-cognitive process. Despite the common belief that robots do not have minds (Stein et al., 2017), people frequently talk about and interact with robots as if they do. Mental state attribution is believed to help people interact with robots by providing an interpretive framework for predicting and explaining robot behavior (Thellman et al., 2017). The tendency to attribute mental states to robots is influenced by various factors such as age, motivation, robot behavior, appearance, and identity. Robot behavior is found to be a significant factor in determining the tendency to attribute
mental states to robots, particularly when robots exhibit socially interactive behavior such as eye gaze, gestures, emotional expression, and complex, intelligent or highly variable behavior. The definition of a robot personality also plays an important role in children's attribution of mental states to robots [8, 42]. Children, particularly young children, are found to have a stronger tendency to attribute mental states to robots compared to adults [6, 9, 10, 15, 26, 28, 36]. Most studies reporting these findings have used verbal measures of mental state attribution such as Likert or semantic differential scales [5, 42]. The studies are typically conducted in a lab setting with WEIRD participants (i.e., Well-Educated, Industrialized, Rich, and Democratic) and present a representation of a robot (e.g., image or text) as stimulus materials. When studying children, spoken or written questions about the mental states of robots are combined with a binary choice response format(i.e., typically yes-no questions). In summary, attributing mental states to robots is a complex process influenced by various factors, with robot behavior being a significant factor. Children have a stronger tendency to attribute mental states to robots compared to adults, and studies are typically conducted in a lab setting with verbal measures of mental state attribution.
Figure 1: The Crossword Puzzle
## 3. Differences between the two experiments
The experiment described here was inspired by the study carried out by Seo et al. [37]: some of the fundamental aspects of the original study were retained in our experiment, while others were deliberately left out. Similarly to the original study, our experiment focuses on situational empathy, which is investigated during the interaction between a user and the humanoid robot NAO. However, in the case of Seo's experiment, situational empathy was studied in three different contexts, a "real" one, where participants physically interacted with the NAO robot placed on a table, a "virtual" one, where participants interacted with a virtual representation of NAO, which was simulated on screen, and a "mixed" one, where the simulated robot was superimposed on the real table. On the contrary, in our experiment, situational empathy has been evaluated in a single context, namely, the "real" one, where human participants were in the same room as the real NAO robot and faced each other. Seo's experiment demonstrated that people empathized more with the physical robot than with the virtual ones, so we replicated just the "real" context in the experimental situation to investigate a possible correlation between empathy and attribution of mental state to the robot. The control situation was exactly the same as regards the context, but NAO had no problem, and the interaction continued smoothly until the end of the game.
Similarly to Seo's experiment, we required a reliable and reproducible scenario to induce empathy towards the robot. Thus, we decided to start the interaction with an attempt to establish a friendly relationship between the robot and the user, and to continue with the user's engagement in the completion of the empty boxes of a Crossword Puzzle (see Figure 1) -differently from the original study, which was based on the game of Sudoku. The goal was to provide an opportunity for the robot to demonstrate its autonomous abilities and intelligence through interaction, as well as to build a rapport by engaging the user in friendly conversation while carrying out a distractor task, i.e. the Crossword Puzzle (see Fig. 1). The idea, borrowed from Seo et al., was to provide subjects with a chance to get to know the robot, and to encourage them to see the robot as a social partner and not just as a machine. The interaction was designed to propose a gamified activity, as already successfully experienced in [16; 34], to make the experience more meaningful and enjoyable.
In both studies, an important aspect was pushing a human participant to feel empathy towards the NAO robot in an induced situation triggered automatically after five minutes of interaction, when NAO starts manifesting a functional problem. A few differences between the two experiments can be found in the structure of the dialogue and in the behaviour exhibited by the robot during the simulation of the malfunction. In the case of Seo's experiment [37], the robot manifests its problems through agitated movements, a distorted tone of voice, repetitive or nonsensical words, which push the user to ask what the problem is. At this point NAO mentions its malfunctioning, but begs the user to ignore it and continue to play, so as not to make its programmer suspicious. In contrast, in our experiment, NAO makes agitated movements with its arms and legs, crawls, reproduces the gorilla's cry and shows that it is aware of having a functional problem, with no need for the participant to ask it any questions. The robot makes an attempt to resume playing with its human companion, but is unsuccessful, since the functional problem takes over the interaction, making NAO unable to continue. NAO finally resigns itself to its problem, and begs the participant not to inform the programmer of what has just happened. The robot reveals that it has a computer virus, and exhibits worry that its memory may be erased if it is fixed.
Both scenarios are characterized by a display of fear on the part of the robot. In fact, NAO is afraid that the programmer will reset its memory and restart it in order to remedy the functional problem, which would cause it to forget its memories. In the original experiment, NAO's fear is then realised, as the programmer, having discovered that something
was not going as planned, enters the room and presses the button on the back of the robot's head to reset its data and restart it. Then, NAO reintroduces itself to the user and begins a new conversation. On the contrary, in our experiment the researcher, who was already in the same room as the robot and the human participant, simply informed the user that NAO was experiencing problems, stopped the robot, and explained that a programmer would soon come to reset and restart the robot.
A further difference concerns the measures used to record participants' perceptions. While Seo et al. [37] chose to apply an instrument by Batson et al. [1], which allowed participants to assess their own feelings against 24 adjectives, we adopted the TECA questionnaire to measure participants' global cognitive and affective empathy before the test, as well as a set of ad-hoc questions for empathy, and the AMS questionnaire to assess participants' perception of the robot's mental qualities (see Sec. 4.1).
## 4. The Experiment
As specified in the previous sections, our experiment was aimed at testing whether the NAO robot was able to trigger empathy in humans and how a possible empathic response on the part of human participants could affect their perception of the robot itself. Interaction took place under two different conditions (experimental vs control group), with participants randomly assigned to one of them. In both conditions, NAO interacted with the user in a friendly manner and helped them to complete a crossword puzzle through clues, hints, compliments, and encouragement. Differently from the control condition, in the experimental condition NAO simulated a malfunctioning after the first five minutes of interaction. This was made explicit through unusual behaviour and a simulated awareness, on the part of the robot, that something unplanned and unexpected was happening. This awareness was expressed through phrases such as "Oh oh something's wrong", "I feel like I'm not working properly", and "Please do not inform the programmer of what has happened". If the latter had arrived, he would have solved the problem by resetting the robot's data, making it lose all memory of the conversation with the user. In order to simulate the functional problem, a delay was inserted using the Choregraphe programme; the delay was triggered when the interaction between human and robot started, that is, upon the input of the greeting ('hello') uttered by the user. To emphasise this state of anxiety about the possibility of losing the memory of its conversation, NAO was programmed to show fear through the standard gestures and colors used to express that emotion (as made available by its Choregraphe software), and through a set of explicit sentences such as "I am afraid", "Please don't say anything about what is happening to me otherwise they will erase my memory to take away the virus", "I don't want to lose all my memories", etc.
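The trigger logic just described can be summarised with a small sketch. This is only a schematic reconstruction, not the authors' actual Choregraphe project: the say() helper, the shortened demo delay and the sample hints are our placeholders; on the real robot the utterances would go through NAO's text-to-speech.

```python
import threading
import time

def say(text):
    # Placeholder for the robot's text-to-speech output
    print("NAO:", text)

malfunction = {"active": False}

def trigger_malfunction():
    # Fired by the timer started when the user greets the robot
    malfunction["active"] = True
    say("Oh oh, something's wrong. I feel like I'm not working properly.")
    say("Please don't tell the programmer, or they will erase my memory.")

def on_user_greeting(delay_s=5 * 60):
    # In the experimental condition the malfunction starts five minutes
    # after the greeting ('hello') that opens the interaction
    threading.Timer(delay_s, trigger_malfunction).start()
    say("Hello! Let's solve the crossword puzzle together.")

def give_hint(clue):
    if malfunction["active"]:
        say("I am afraid... I don't want to lose all my memories.")
    else:
        say("Here is a hint: " + clue)

if __name__ == "__main__":
    on_user_greeting(delay_s=2)   # shortened delay, for demonstration only
    give_hint("Four across: a word meaning 'puzzle'.")
    time.sleep(3)
    give_hint("Try five down next.")
```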
Hypothesis. We hypothesized that the behavior of NAO in the experimental condition would trigger situational empathy in participants and that empathy would, in turn, influence participants' perceptions about the robot's ability to feel emotions and mental qualities. In particular, we hypothesized that NAO's behavior would induce participants to think that the robot is capable of feeling emotions (H1). We also hypothesized that the emotion of fear displayed by the robot could induce (situational) empathy in participants (H2), to the point of making them feel sorry for the possible memory loss on the part of NAO. Finally, we hypothesized that the robot's capability of displaying emotions and the consequent empathic response experienced by participants would influence their perceptions about the robot's mental states; in particular, we hypothesized that participants in the experimental group would be more likely to think that NAO is emotionally intelligent and can have desires and intentions of its own (H3).
Design. Between-subjects design, with the manipulation of one independent variable (the NAO malfunctioning).
Participants. Thirty-two students from master's degree courses in the Computer Science area (ICT, CS, and AI), aged 22-25 years, 60% females and 40% males. Participants were randomly assigned to one of the two groups, 16 to the control group and 16 to the experimental group. All the subjects gave their informed consent for inclusion before they participated in this study.
Apparatus and materials. NAO was placed on a table in an upright position. A chair was provided for participants to sit in front of the robot. The crossword puzzle was printed on a sheet of paper and participants were given a pen to complete it.
Procedure. Participants were invited to sit in front of the robot and to interact with it for 10-15 minutes to complete the crossword puzzle. The robot helped participants by offering hints and clues aimed at speeding up the completion of the game. After a few turns, participants who had been randomly assigned to the experimental condition witnessed the NAO malfunctioning and its expression of fear. At the end of the game or after the simulated malfunctioning, depending on the condition to which they were assigned, participants were asked to complete some questionnaires, which they could conveniently access by scanning dedicated QR codes with their smartphones (see Section 4.1).
### Measures
Participants were asked to complete three sets of questionnaires: The _TECA questionnaire_, i.e. a cognitive and affective empathy test aimed at assessing participants' ability to get in touch with and understand the emotions of others [24]; The _AMS (Attribution of Mental States)_ questionnaire [11, 12, 27], which is a measure of the mental states that participants attribute to robots, in comparison to humans; A set of _Ad hoc_ questions regarding participants' interaction and experience with the NAO robot. The questionnaires will be described in detail in the next Sections.
#### 4.1.1 The TECA questionnaire
TECA consists of 33 questions to be answered using a 1 to 5 Likert scale, ranging from "totally agree" to "totally disagree". Questions aim at measuring the global empathy of respondents, as well as their cognitive and affective empathic abilities. In particular, the TECA questionnaire consists of four scales: two cognitive scales, i.e., the perspective-taking (AP) and emotional understanding (EC) scales, and two affective scales, i.e. the empathic stress (EE) and empathic cheerfulness (AE) scales [24].
Scores are calculated for each scale (for details on the calculations see [24]). Such scores are considered high if between 55 and 65, medium from 45 to 55, and low from 35 to 44. Ratings equal to or below 7 and equal to or above 66 are considered extremely low and extremely high, respectively.
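As a minimal illustration, the raw-score bands quoted above can be encoded as follows (a scale score of exactly 55 is assigned to the "high" band here; the extreme cut-offs mentioned in the text are not encoded because they appear to refer to percentile ranks rather than raw scores):

```python
def teca_band(score):
    """Map a raw TECA scale score to the bands quoted in the text / [24]."""
    if 55 <= score <= 65:
        return "high"
    if 45 <= score < 55:
        return "medium"
    if 35 <= score <= 44:
        return "low"
    return "outside the tabulated bands (extremely low/high per [24])"

print(teca_band(45.87))  # mean global empathy of the experimental group -> "medium"
print(teca_band(48.38))  # mean global empathy of the control group -> "medium"
```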
The perspective-taking scale. The perspective-taking (AP) scale refers to an individual's intellectual or imaginative ability to put oneself in another's shoes [24]. In particular, a subject who scores high on the AP scale shows predisposition towards communication, tolerance and interpersonal relationships. Subjects with a high AP score also tend to have a flexible mindset, which allows them to adapt their thinking to different situations. An extremely high score in this area can be negative as it can interfere with the ability to make decisions. Conversely, a low score is a sign of poor cognitive empathy, and is typical of individuals who exhibit little mental flexibility and are not good at understanding the mental
state of others. An extremely low score on this scale can be related to a significant deficit in interpersonal and communication skills when interacting with other people [24].
Emotional understanding. Emotional understanding (EC) is the ability to recognise and understand the emotional states, intentions and impressions of other individuals [24]. An individual who scores high on this scale shows predisposition towards emotion reading, before verbal or non-verbal reading, of the individual with whom they are interacting. This is an important skill, related to affective empathy, as it facilitates interpersonal relationships and, during an interaction, improves communication and allows the subject to detect the emotional state, whether positive or negative, of the people they are interacting with. An extremely high score in this area can have negative consequences: excessive attention to the emotional state of others can lead individuals to have little regard for their own. Individuals who show a low score, on the other hand, display difficulties in relating to other people and have few social and communication skills. Therefore, an extremely low score on this scale means that the individual has problems expressing their own emotions, as well as detecting those of others [24].
The empathic stress. The empathic stress (EE) scale refers to the ability to get in touch, to tune in, with the emotions, positive or negative, experienced by another person [24]. Individuals who score high on this scale show a predisposition towards building a solid social network and they often become emotionally involved in the problems of others. An extremely high score on this scale can result in a high level of neuroticism, which can have negative consequences on the life of the person affected and can lead them to distort reality and consider the suffering of another person to be greater than it really is. Conversely, individuals displaying a low level of EE are not easily moved, as they are unemotional and emotionally distant; all these characteristics lead such individuals to acquire a lower quality social network than individuals displaying a high score on this scale. Therefore, an extremely low level on the EE scale can be related to emotional coldness, i.e. high difficulty in being moved when something happens to another person [24].
The empathic cheerfulness. The empathic cheerfulness (AE) scale represents the ability to share the positive emotions of another person [24]. Individuals with a high AE score are predisposed to rejoice in the successes of others. However, an extremely high score on this scale can have negative consequences, as it can be a signal that one's own happiness depends on someone else's happiness and can lead the individual displaying such a score to relegate their own happiness, goals and personal fulfilment to a corner. Conversely, when individuals show a low score on the AE scale, they find it difficult to share the positive emotions of others. If the score is extremely low, then the subject is more likely to experience indifference towards the positive events of others and to fail to emotionally tune in to them [24].
#### 4.1.2 Mental State Attribution Questionnaire (AMS)
The AMS consists of five dimensions: Epistemic, Emotional, Desires and Intention, Imaginative, and Perceptive. The epistemic dimension concerns participants' idea of the robot cognitive intelligence (e.g., can the robot understand/decide/learn/teach/think?), while the perceptive dimension is related to robot perception and sensation (e.g., can the robot smell/watch/taste/listen/feel cold?). The other dimensions concern the user's mental attribution to the robot's emotional intelligence; example questions are: Can the robot get angry/be scared/be happy? (Emotional dimension); might the robot want to do something/make a wish/prefer one thing over another? (Desires and intention dimension); can the robot imagine/tell a lie/make a dream/make a joke? (Imaginative dimension). The questionnaire consists of 25 questions which can be answered choosing among the following options: "a lot", "a little", or "not at all". Participants' total score is obtained through the sum of all answers (range = 0-50); with the five partial scores being the sum of the answers within each dimension (range = 0-10).
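A sketch of the scoring just described is given below. The mapping of "not at all"/"a little"/"a lot" to 0/1/2 is our assumption (it reproduces the stated ranges of 0-10 per dimension and 0-50 overall, with five questions per dimension), and the example answers are invented for illustration.

```python
ANSWER_VALUES = {"not at all": 0, "a little": 1, "a lot": 2}  # assumed coding

def ams_scores(answers):
    """answers: dict mapping dimension name -> list of 5 responses.
    Returns (total score, per-dimension scores): total 0-50, each dimension 0-10."""
    per_dim = {dim: sum(ANSWER_VALUES[a] for a in resp) for dim, resp in answers.items()}
    return sum(per_dim.values()), per_dim

example = {
    "Epistemic":             ["a lot", "a little", "a lot", "a lot", "a little"],
    "Emotional":             ["a little", "not at all", "a lot", "a little", "not at all"],
    "Desires and Intention":  ["a lot", "a lot", "a little", "not at all", "a lot"],
    "Imaginative":           ["not at all", "a little", "not at all", "a lot", "not at all"],
    "Perceptive":            ["not at all", "a lot", "not at all", "a little", "a lot"],
}
total, per_dim = ams_scores(example)
print(total, per_dim)
```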
#### 4.1.3 Ad hoc questions
A few _ad hoc_ questions related to the interaction between the participant and the robot were also added to the previously described questionnaires. More specifically, the following questions were asked, of which the last one focused on investigating whether participants responded in an empathic way:
* Was this the first time you interacted with NAO?
* Have you interacted with other robots in the past? _If you answered "yes" to the previous question, which one(s)?_
* Did NAO seem to be capable of feeling emotions? _If you answered "yes" to the previous question, which emotions expressed by NAO struck you the most and why?_
* Were there any malfunctions during the interaction with NAO? _If you answered "yes" to the previous question, what emotions did you feel during the malfunctioning?_
## 5. Results
All questionnaires (TECA, AMS, and the _ad hoc_ questions) were administered to both the experimental and the control group. We will first analyze the results from the two groups separately, and then compare them.
### TECA data analysis
#### 5.1.1 Experimental group
The average global empathy level is 45.87 (SD=3.72), which is a medium score, though at the lower end of the range. Scores ranged from 42 to 53; 8 subjects (50%) were positioned in the medium percentile band (31-69), but the other 8 (50%) obtained scores lower than 45, and thus ended up having a low average global empathy (percentile 7-30).
More specifically, the analysed data show that participants in the experimental group scored higher on emotional understanding (EC) (M=30.38, SD=2.5), followed by empathic cheerfulness (AE) (M=28.94, SD=3.00), then perspective-taking (AP) (M=27.44, SD=2.61), and finally empathic stress (EE) (M=21.38, SD=2.33). All the scores can be classified within the average level. Hence, it can be deduced that participants show a higher ability to understand and identify the emotional state of others than to respond emotionally to other subjects' mental states in an appropriate way.
#### 5.1.2 Control group
As far as the control group is concerned, the average global empathy level is 48.38 (SD=5.89), which is a medium score. Scores ranged from 40 to 57; 9 subjects (56%) were positioned in the medium percentile band (31-69), 4 subjects (25%) obtained scores lower than 45 (percentile 7-30), and 3 subjects (18%) had high global empathy (>55, percentile 70-93); thus this sample closely mirrors the distribution of empathy in the population.
The analysed data show a trend similar to that of the experimental group. Indeed, participants in the control group scored higher on emotional understanding (EC) (M=30.38, SD=2.965), followed by empathic cheerfulness (AE) (M=29.38, SD=3.00), then perspective-taking (AP) (M=27.56, SD=2.9), and finally empathic stress (EE) (M=24.25, SD=3.76). All the scores can be classified within the average level. Similarly to the other group, participants show a higher ability to understand and identify the emotional state of others than to respond emotionally to other subjects' mental states in an appropriate way. However, participants in the control group show a higher level of empathic stress (EE).
#### 5.1.3 Comparison of the two groups
Comparing the results obtained by the experimental and control groups, it can be seen that the former has a lower average level of global empathy than the latter: with a global average empathy of 48.38, the control group slightly exceeds the experimental group's score of 45.87. This difference is explained by the fact that the control group includes three subjects with a higher level of empathy, whereas in the experimental group all subjects had medium or low levels. Notwithstanding this gap in overall empathy level, no significant difference was found between the two groups (t(16) = -1.65, p = 0.079).
The analysed data show that participants in the experimental and control groups scored very similarly on the perspective-taking (AP) (27.44 vs. 27.56), emotional understanding (EC) (30.38 vs. 30.38), and empathic cheerfulness (AE) (28.94 vs. 29.38) scales. A significant difference emerged for the empathic stress scale (EE). In particular, the T-test revealed that the control group (M=24.25, SD=3.76) has a higher level of empathic stress than the experimental group (M=21.38, SD=2.33), and such a difference is significant (t(16) = -2.76, p = 0.01). This means that subjects in the control group have a greater ability to empathise with the positive or negative emotional states experienced by other individuals [24].
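The group comparisons reported here are standard independent-samples t-tests; a minimal sketch is shown below. The per-subject scores are invented placeholders with roughly the reported group means, since the raw data are not reproduced in the paper.

```python
from scipy import stats

# Illustrative placeholder data only, not the actual per-subject empathic-stress scores.
ee_experimental = [21, 19, 23, 20, 22, 24, 18, 21, 23, 19, 22, 25, 20, 21, 24, 20]
ee_control      = [25, 22, 28, 24, 19, 26, 27, 23, 30, 21, 25, 24, 28, 22, 20, 24]

t, p = stats.ttest_ind(ee_experimental, ee_control)  # Student's t-test, equal variances assumed
print(f"t = {t:.2f}, p = {p:.3f}")
```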
### AMS data analysis
#### 5.2.1 Experimental group
Considering the _epistemic dimension_, the NAO robot was considered capable of understanding "a lot" by 50% of the participants in the experimental group, and "a little" by the remaining 50%. This means that none of the participants believed that NAO was not capable of understanding "at all". On the other hand, as far as the ability to decide is concerned, only 25% of the participants believed that NAO possesses this capability "a lot". In contrast, 68.75% of the participants agreed that NAO is only "a little" capable of deciding, and 6.25% of the participants (a single person) stated that NAO does not possess this capability "at all". On the contrary, the robot's ability to learn is acknowledged by the majority of participants (75% "a lot") with only a few participants expressing more skeptical opinions (12.5% "a little", 12.5% "not at all"). Most participants (56.25% "a lot") also acknowledge NAO's ability of teaching, while 37.5% of them consider it to be only "a little" capable and 6.25% "not at all" capable. A more homogeneous distribution can be observed for the capability of thinking, with 37.5% of the participants believing that NAO possesses it "a lot", 31.25% "a little" and 31.25% "not at all".
With regard to the _emotional dimension_, most participants in the experimental group believe that NAO is able to feel emotions such as anger (62.5%, of which: 18.75% "a lot", 43.75% "a little"), happiness (81.25%, of which: 56.25% "a lot", 25% "a little"), fear (75%, of which: 50% "a lot", 25% "a little"), surprise (81.25%, of which: 43.75% "a lot", 37.5% "a little") and sadness (68.75%, of which: 37.5% "a lot", 31.25% "a little"). However, it is worth noticing that quite a few participants in the experimental group also believed that NAO cannot experience anger (37.5%) or sadness (31.25%) at all.
With respect to the _desires and intentions dimension_, almost all participants in the experimental group (93.75%) believe that NAO can intend to do something (56.25% "a lot", 37.5% "a little"). In addition, the will to do something is an ability that NAO possesses "a lot" (56.25%) or "a little" (25%), with only 18.75% of the participants opting for the more skeptical ("not at all") option. Most users (75% "a lot") also agree that NAO is capable of trying to do something, while the remaining 25% is more cautious (12.5% "a little", 12.5% "not at all"). Most participants (68.75%, of which: 31.25% "a lot", 37.5% "a little") are also convinced that the robot is capable of making a wish; however, participants' opinions are quite homogeneously distributed in this regard, with the remaining 31.25% being convinced that the robot
does not possess this ability at all. Participants are more in agreement as far as the robot's capacity of preferring one thing over another is concerned, with most subjects in the experimental group being positive (81.25%, of which: 50% "a lot", 31.25% "a little") and only 18.75% being negative ("not at all").
As for the _imaginative dimension_, almost half the participants in the experimental group believe that NAO does not have the ability to tell a lie (43.75% "not at all") or pretend (43.75% "not at all"). Conversely, the remaining 56.25% are of the opinion that the robot can tell a lie (25% "a lot", 31.25% "a little"), and as many of them maintain that NAO can pretend (56.25% of which: 31.25% "a lot", 25% "a little"). Differently, only a few participants think that NAO has the ability to dream (25% "a lot", 6.25% "a little"), while most of them (68.75%) believe that the robot cannot dream. The robot's ability to play a joke is, on the contrary, recognised by most participants (68.75%, of which: 50% "a lot", 18.75% "a little").
Finally, as for the _perceptive dimension_, most participants in the experimental group assume that abilities such as smelling (87.5%), tasting (81.25%) and feeling hot or cold (62.5%) are not attributable to the robot. In contrast, capabilities such as looking (93.75%, of which: 75% "a lot" and 18.75% "a little") and feeling (93.75%, of which: 87.5% "a lot" and 6.25% "a little") are most often accepted as abilities possessed by NAO. Only one in sixteen subjects (6.25%) believes that the robot cannot look, and only one (6.25%) that it cannot feel.
#### 5.2.2 Control group
Regarding the _epistemic dimension_, all participants in the control group agreed that NAO can understand at least a little bit: 56.25% of them deemed the robot able to understand "a lot", and the remaining 43.75% "a little". As far as the robot's capability of making decisions is concerned, participants' opinions are more homogeneously distributed among the available options: most participants (43.75%) consider NAO to be "a lot" capable of deciding, 37.5% "a little" and the remaining 18.75% "not at all". NAO is considered capable of learning "a lot" (37.5%) or at least "a little" (37.5%), while only 25% of the participants claimed that the robot does not possess this ability at all. Most participants maintain that NAO is capable of teaching (56.25% "a lot", 6.25% "a little"), while the remaining 25% did not consider NAO to be able to teach at all. Finally, participants' opinions about the robot's capability of thinking are equally distributed between the positive (50%, of which: 31.25% "a lot", 18.75% "a little") and negative sides (50% "not at all").
Regarding the _emotional dimension_, most participants in the control group consider NAO "not at all" capable of experiencing anger (68.75%), fear (68.75%), happiness (56.25%) and sadness (62.5%). Only a few participants believe that NAO can experience anger (12.5% "a lot", 18.75% "a little"), fear (6.25% "a lot", 25% "a little"), happiness (25% "a lot", 18.75% "a little"), surprise (25% "a lot", 31.25% "a little"), sadness (18.75% "a lot", 18.75% "a little").
Concerning the _desires and intentions dimension_, most participants in the control group agree that NAO can intend to do something (56.25%, of which: 43.75% "a lot", 12.5% "a little"), try to do something (87.5%, of which: 62.5% "a lot", 25% "a little"), make a wish (62.5%, of which: 31.25% "a lot", 31.25% "a little"), and prefer one thing rather than another (68.75%, of which: 43.75% "a lot", 25% "a little"). Fewer participants, on the contrary, maintain that NAO is incapable of intending to do something (43.75%), trying to do something (25%), making a wish (37.5%) or preferring one thing over another (31.25%). Differently, most participants (56.25%) are skeptical about NAO's capability of wanting to do something, with 43.75% of them being more optimistic in this respect (31.25% "a lot", 12.5% "a little").
As for the _imaginative dimension,_ most participants in the control group are of the opinion that NAO cannot pretend (56.25%) and dream (68.75%). Fewer participants are convinced that the robot is able to pretend (12.5% "a lot", 31.25% "a little") or dream (31.25% "a little"). On the contrary, most participants believe that NAO can tell a lie (56.25%, of which:
18.75% "a lot", 37.5% "a little") or make a joke (62,5%, of which: 31.25% "a lot", 31.25% "a little").
Regarding the _perceptive dimension,_ the majority of participants in the control group believe that NAO cannot smell (62.5%), taste (75%) and feel hot or cold (62.5%). Only 37.5% of participants believe that the robot can smell (12.5% "a lot", 25% "a little") or feel hot or cold (12.5% "a lot", 25% "a little"), and an even smaller percentage (25% "a little") claims that NAO possesses the ability to taste. In contrast, most users consider the robot to be able to look (75%, of which: 68.75% "a lot" and 6.25% "a little") and to feel (87.5%, of which 81.25% "a lot" and 6.25% "a little"). Only 25% of the participants maintain that NAO is unable to look and 12.5% that it cannot hear.
#### 5.2.3 Comparison of the two groups
Comparing the results, it can be observed that similar scores were obtained for the epistemic (6.9 vs. 6.06), desires and intentions (6.8 vs. 5.3), imagination (4 vs. 2.8) and perceptual (4.1 vs. 4.3) dimensions. In contrast, the T-test highlighted a significant difference in the emotional sphere (t(16) = -2.75, p = 0.02), where the experimental group obtained a higher mean score (M=5.75, SD=12.73) than the control group (M=2.87, SD=12.11). Such a difference confirms the hypothesis that the experimental group would attribute mental states to NAO to a greater extent than the control group.
### Ad hoc questions
#### 5.3.1 Experimental group
Results show that only 12.5% of the participants in the experimental group had already interacted with a humanoid robot, which was NAO in all cases. More interestingly with regard to our research questions, 75% of the participants in this group believed that the NAO robot was capable of feeling emotions, while only 25% of the participants were skeptical in this respect. Furthermore, our results show that the emotions expressed by NAO that particularly impressed participants were related to happiness (e.g. "joy"; "gladness because of the correct answer"; "enthusiasm") and, even more, to fear (e.g. "fear when it was afraid of being killed"; "fear when it was afraid of being reset"; "fear of having to close the interaction with the user"; "fear because it was afraid of being switched off when something didn't work"; "when it was a bit slowed down and it joked that it didn't want me to tell the programmer for fear that they would reset it").
All sixteen participants in the experimental group agree that there was a functional problem during the interaction, namely, the simulated malfunctioning enacted by NAO during the interaction with users. The last question investigates the emotions experienced by participants during such malfunctioning. Notice that only eleven among the answers provided to this question were considered valid, while the remaining five were not taken into account. This choice is due to the fact that two participants stated that they did not feel any emotion during the malfunctioning,
while the other three reported what happened during the interaction rather than what they felt at that particular moment (e.g.: "There was a technical problem that interrupted the interaction"; "don't call the programmer otherwise they will reset me"). Considering the eleven valid responses, participants' reactions to the malfunctioning episode can be divided into four main clusters: fear, sadness, tenderness and helplessness. The emotion that was predominantly recalled by participants was fear (45.45%), i.e. the emotion experienced by NAO itself when it feared losing memory of the conversation. Hence, we can infer that subjects felt empathy for NAO by putting themselves in the robot's shoes and feeling the fear expressed by it. In addition, 36.36% of the participants felt sadness and sorrow, which again demonstrates that they felt empathy for NAO, to the point of being sad for the robot and feeling sorry for what was happening to it (e.g.: "[I was] afraid that there might be some damage to NAO"; "I didn't know how the robot might react and I hoped it wouldn't fall down"). One participant also stated that they felt helpless, while another one felt tenderness for the robot begging them not to tell the programmer what had happened.
#### 5.3.2 Control group
Only 12.5% of the participants in the control group had already interacted with NAO in the past, while 25% of them had some familiarity with other robots, such as Pepper and Sanbot (12.5%), Pepper and the educational robot MBot (6.25%), and Japanese models of domestic interaction robots (6.25%). As for participants' perceptions about NAO's capability of feeling emotions, most users in this group (62.5%) had a negative opinion, with only the remaining 37.5% considering the humanoid robot capable of expressing its emotional state. Most of the subjects who believe that NAO can feel emotions were particularly impressed by its display of happiness (50%). Interestingly, while most participants (62.5%) stated that there was no malfunctioning during the interaction with NAO, the remaining 37.5% believed that some problem had occurred. Although the control group was not exposed to the simulated malfunctioning, this result can be explained by the fact that the experiment with the control group took place in a location that was characterized by a weak Internet connection (necessary to the robot to work properly). This issue caused NAO to be late in its responses to the user's questions, which, in turn, pushed users to repeat or rephrase their sentences several times, since they believed the robot had not understood them. Participants' answers, reporting reasons for the malfunctioning, confirm this idea: "[There was] some difficulty in interacting with the robot", "Probably connection problems", "Understanding", "It did not take the answer immediately, even after several repetitions", "It did not look at me, it did not listen to what I was saying, probably due to the connection".
#### 5.3.3 Comparison of the two groups
Both similarities and differences can be observed between the experimental and control groups. Interestingly, the very same number of participants in both groups (12.5%) had already interacted with NAO in the past. On the contrary, only the control group included a few participants who had also interacted with other robots such as Sanbot, MBot, Pepper and Japanese models of domestic interaction robots.
A significant difference emerges with regard to participants' opinions about NAO's ability to experience emotions. In particular, the T-test revealed that the experimental group (M=0.75, SD=0.2) considered NAO more capable of feeling emotions than the control group (M=0.375, SD=0.25), t(16) = -2.4, p = 0.03. In fact, most participants in the experimental group (75%) agreed that the robot can feel emotions, while only 37.5% of the participants in the control group agreed with this view. Furthermore, the most significant emotions displayed by NAO included happiness and fear for participants in the experimental group, while the only memorable emotion, for participants in the control group, was happiness. Finally, it must be noted that, although the simulated malfunctioning of the robot only
occurred in the experimental condition, part of the participants in the control group (37.5%) also reported having experienced a malfunctioning. However, only participants in the experimental group were able to describe the emotions they felt during the malfunctioning episode, while participants in the control group were unable to provide an emotional account of the event.
## 6. Discussion and Conclusion
The experimental study discussed in this paper showed that the humanoid robot NAO can elicit emotions in human participants, thus confirming previous results obtained by Seo et al. (2019), and that such an empathic reaction influences participants' perception of its mental qualities, in particular as far as its capability of actually feeling emotions is concerned. The obtained results can be ascribed to the emotional behavior displayed by the robot, specifically sadness and fear, which was manifested by NAO during the interaction with the subjects in the experimental group, when a malfunctioning was simulated. We have to acknowledge the limitation that fear was always manipulated together with a simulated malfunctioning, a narrative necessity to contextualize the fear felt by the robot, similarly to Seo et al.'s experiment.
Interestingly, participants in the experimental group had a significantly lower level of empathic stress than the control group, indicating that they had, in general, a lower tendency to empathise with the emotional states of others. Nevertheless, most of the participants in the experimental group who witnessed the malfunctioning of NAO and saw the robot ask for help and display fear were able to feel emotions towards the robot and also considered it capable of having emotions. More specifically, coherently with our hypothesis H1, results show that there is a significant difference between the two groups in perceiving the robot as an emotional being. In fact, the experimental group considered the robot more capable of feeling emotions. This result is probably explained by the fact that participants in the experimental group also empathised more with the robot: 11 out of 16 (69%) experimental subjects said they felt emotions similar to, or otherwise consistent with, those expressed by NAO during the malfunctioning, demonstrating an affective and consistent empathic response, which confirms our hypothesis H2. In the control group, by contrast, participants were unable to provide an emotional account of the event, even though 37.5% reported having experienced a malfunctioning, which was obviously not caused by the experimental manipulation.
Observing NAO's behavior and display of emotions, the experimental group was more prone to attribute mental states to the robot. More specifically, our results show a significant difference in the attribution of emotional intelligence to the robot between the experimental and control groups. However, our hypothesis H3 is only partially confirmed, since we found no significant differences in the other dimensions and, in particular, as far as the robot's capability to have desires and intentions is concerned. In general, we can observe that the experimental group not only was emotionally triggered by NAO's behavior, as shown by participants' answers referring to "fear for NAO" or "helplessness towards NAO", but also perceived the robot as more "human" than the control group, who did not witness the simulation of the functional problem and only considered NAO a good playmate, not able to feel or convey emotions. In conclusion, our results extend previous work by showing that induced situational empathy towards a humanoid robot can result in a stronger perception of the robot itself as a sentient being.
The main finding of our research is not only that NAO can elicit emotions in human participants, but also that such an empathic reaction influences participants' perception of its mental states. Thus, we can conclude that a robot can, through its behavior, elicit certain emotions in humans, and when this happens, humans attribute to the robot mental states related to the emotional dimension. In all situations wherein robots are used to improve the mental state attribution of subjects with deficits in this area (for instance, autistic children), a robot behavior designed to elicit empathy in the target subjects should be preferred to a more detached one. This also suggests a correlation between affective and cognitive empathy: by soliciting the first (linked to an affective reaction toward the other), the second may increase (linked to an understanding of the other's point of view), and this could also be investigated in human-to-human empathic relationships. Of course, these findings should be corroborated by further experiments in order to reach greater external validity. We will also have to consider a different sample of subjects, since ICT and CS students only partially represent the target population, and this is a limitation we intend to overcome in future studies.
|
2307.04792 | Generalized Hall current on a finite lattice | Gapped fermion theories with gapless boundary fermions can exist in any
number of dimensions. When the boundary has even space-time dimensions and
hosts chiral fermions, a quantum Hall current flows from the bulk to the
boundary in a background electric field. This current compensates for the
boundary chiral anomaly. Such a current inflow picture is absent when the
boundary theory is odd dimensional. However, in recent work, the idea of
quantum Hall current has been generalized to describe odd dimensional boundary
theories in continuous Euclidean space-time dimension of infinite volume. In
this paper we extend this idea to a lattice regulated finite volume theory of
1+1 dimensional Wilson-Dirac fermions. This fermion theory with a domain wall
in fermion mass can host gapless modes on the wall. The number of gapless
fermions is equal to the integral of the divergence of the lattice generalized
Hall current. | Srimoyee Sen, Semeon Valgushev | 2023-07-10T18:00:04Z | http://arxiv.org/abs/2307.04792v1 | # Generalized Hall current on a finite lattice
###### Abstract
Gapped fermion theories with gapless boundary fermions can exist in any number of dimensions. When the boundary has even space-time dimensions and hosts chiral fermions, a quantum Hall current flows from the bulk to the boundary in a background electric field. This current compensates for the boundary chiral anomaly. Such a current inflow picture is absent when the boundary theory is odd dimensional. However, in recent work, the idea of quantum Hall current has been generalized to describe odd dimensional boundary theories in continuous Euclidean space-time dimension of infinite volume. In this paper we extend this idea to a lattice regulated finite volume theory of 1+1 dimensional Wilson-Dirac fermions. This fermion theory with a domain wall in fermion mass can host gapless modes on the wall. The number of gapless fermions is equal to the integral of the divergence of the lattice generalized Hall current.
## 1 Introduction
Odd dimensional Dirac fermion field theories are interesting when there is a domain wall in fermion mass. In that case, the domain wall defect is even dimensional and hosts massless chiral fermions [1]. When this theory is coupled to electromagnetic fields, the boundary suffers from chiral anomaly leading to non-conservation of vector current in the presence of background electromagnetic fields. However, as Callan-Harvey showed [1], a vector current flows from the bulk to the boundary restoring current conservation in the higher dimensional theory. In order to compute this current one integrates out the fermion away from the domain wall which leaves behind a Chern-Simons theory for the electromagnetic field. This explains the inflowing current from the bulk to the boundary. As is well known, the odd dimensional gapped bulk theory of free Dirac fermion describes the physics of quantum Hall effect. The inflowing current is analogous to the quantum Hall current whereas the massless chiral fermions on the domain wall are analogs of the quantum Hall edge states.
More generally, gapped fermion field theories can host massless fermions on domain walls irrespective of whether the wall is even or odd dimensional. They describe the physics of topological insulators and superconductors with corresponding edge states in various dimensions [2, 3, 4, 5, 6]. When the boundary is odd dimensional, in contrast to the quantum Hall effect, the boundary theory does not have a chiral anomaly. Therefore, we don't expect an inflowing current from the bulk to the boundary as in the case of the quantum Hall effect, although the boundary theory can have discrete anomalies which connect the existence of the edge states to the gapped bulk theory [7]. In recent work [8, 9] the authors showed that the idea of the Hall current can be generalized to odd dimensional boundaries. The idea was inspired by the index calculation of the fermion vortex system in [10]. This generalization of the Hall current relies on the following step: the Minkowski space domain wall fermion theory with a massless boundary fermion is first connected to another Euclidean fermion theory where the Euclidean fermion operator has a nonzero index. This index equals the number of massless fermions in the original Minkowski theory. From there, it was shown [8, 9] that one can construct a generalized Hall current: the space-time integral of its divergence equals the index of the fermion operator. The construction outlined in [8, 9] holds for non-interacting fermions in infinite volume and continuum space-time. The goal of this paper is to extend that analysis to a discrete space-time lattice of infinite and finite volume. The analysis in [8, 9] included several different fermion
theories in various space-time dimensions. In this paper, we choose to work with the simplest example: \(1+1\) dimensional Dirac fermion with a domain wall in its mass [11]. The domain wall hosts a massless fermion which may suffer from discrete anomalies ([7, 12]), but does not suffer from chiral anomaly. As a result, one doesn't expect a Hall current flowing from bulk to the boundary. However, the generalized Hall current exists for this system in infinite volume and continuum space-time. We explore how the generalized Hall current for this system can be constructed on an infinite and finite lattice.
A crucial observation which makes the continuum construction of the generalized Hall current possible is the following. Whenever the index of a Euclidean elliptic fermion operator is nonzero, there is a current in the system: the space-time integral of the divergence of this current equals the index. We call this current the generalized Hall current. Note that the index of a Euclidean elliptic operator is the difference between the number of zero modes of that operator and that of its Hermitian conjugate [10]. Therefore, no generalized Hall current exists if the index of the operator is zero. This observation is not meant to be self-evident and its proof is outlined in [8, 9]. We will discuss the proof briefly in the next section of this paper. This observation can then be used to construct the generalized Hall current for massless fermion edge states of any Minkowski fermion theory as follows. The first step is to use the Minkowski fermion operator to construct its Euclidean counterpart. Since the Minkowski fermion operator has massless states living on the defect, the corresponding Euclidean operator has unnormalizable zero eigenvalue eigenstates living on the same defect. These states are not zero modes since they are not normalizable. As a result the index of the Euclidean operator at this stage is zero. Ref. [8, 9] then introduces a slight deformation to this Euclidean operator through the introduction of a background diagnostic field in such a way that this unnormalizable zero eigenvalue state becomes localized and normalizable. i.e. the deformed Euclidean operator has a zero mode iff the original Minkowski fermion operator had a massless fermion in its spectrum. The introduction of this diagnostic field also creates an imbalance between the number of zero modes of the fermion operator and its Hermitian conjugate resulting in a nonzero index for the deformed theory. Additionally, the construction carefully ensures that the index survives in the limit of the diagnostic field being taken to zero. We expect a generalized Hall current to flow as long as the index is nonzero. In the continuum analysis, one can obtain this Hall current by simply perturbing in the diagnostic field and integrating out the fermions in a one loop diagram. This is analogous to the Goldstone-Wilczek calculation [13].
As we embark on generalizing the above construction on the lattice, both infinite and finite, we explore which elements of the continuum construction can be carried over to the lattice without significant modification and which elements need to be reformulated. Since we will work with the \(1+1\) dimensional fermion theory, from this point onward we exclusively focus on it. The organization of the paper is as follows. We will begin with a brief overview of the generalized Hall current construction in the continuum specializing to the case in \(1+1\) dimensions. We will then discuss how this construction is generalized to an infinite lattice analytically. The following section will describe the numerical analysis of this construction and demonstrate that a generalized Hall current exists on a finite lattice.
## 2 Infinite volume continuum analysis
The procedure for constructing the generalized Hall current in the continuum in infinite volume is described in detail in [8, 9]. We briefly review this construction here. Consider a Minkowski fermion operator \(D_{M}\) with a mass defect which causes it to have a massless fermion in the spectrum that is stuck to the defect. To construct the generalized Hall current
1. We analytically continue this fermion operator to Euclidean space-time, denoting it by \(\mathcal{D}\).
2. Introduce a background diagnostic field to deform the fermion operator \(\mathcal{D}\) so that it has an index of one (equal to the number of massless fermions in the original Minkowski theory).
3. Obtain the generalized Hall current following a Goldstone-Wilczek [13] inspired calculation using a one loop Feynman diagram.
4. Take the background diagnostic field to zero at the end of the calculation and confirm that the generalized Hall current and the index survive taking this limit.
Before we apply this construction to the \(1+1\) dimensional example, let's first attempt to understand how the index of a fermion operator gives rise to an inflowing current in infinite volume continuous space-time of Euclidean signature. Note that the index of the fermion operator \(\mathcal{D}\) is given by
\[I=\mathrm{Dim}(\ker\mathcal{D})-\mathrm{Dim}(\ker\mathcal{D}^{\dagger}). \tag{1}\]
In the example we will consider, the number of zero modes of either the operator \(\mathcal{D}\) or the operator \(\mathcal{D}^{\dagger}\) is zero. As a result the magnitude of the index ends up being equal to the number of zero modes of one operator or the other. Furthermore, the number of zero modes of the operator \(\mathcal{D}\) coincides with the number of zero modes for the operator \(\mathcal{D}^{\dagger}\mathcal{D}\) and the number of zero modes of \(\mathcal{D}^{\dagger}\) coincides with that of \(\mathcal{D}\mathcal{D}^{\dagger}\). Therefore the formula for the index can be re-expressed by defining
\[\mathcal{I}(M)=\frac{M^{2}}{M^{2}+\mathcal{D}^{\dagger}\mathcal{D}}-\frac{M^{2 }}{\mathcal{D}\mathcal{D}^{\dagger}+M^{2}} \tag{2}\]
and noting that
\[I=\lim_{M\to 0}\mathcal{I}(M). \tag{3}\]
Interestingly, the quantity \(\mathcal{I}(M)\) can now be recast as the matrix element \(\mathcal{I}(M)=-M\int d^{d+1}x\langle\bar{\Psi}\Gamma_{\chi}\Psi\rangle\) in a fermion theory with the following action
\[\mathcal{S}=\int d^{d+1}x\,\bar{\Psi}(K+M)\Psi \tag{4}\]
where
\[K=\begin{pmatrix}0&-\mathcal{D}^{\dagger}\\ \mathcal{D}&0\end{pmatrix} \tag{5}\]
and
\[\Gamma_{\chi}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. \tag{6}\]
Note that, \(d+1\) is the number of space-time dimensions in which the original fermion operator \(\mathcal{D}\) is defined. The spinor \(\Psi\) has twice the dimension of the spinors of the original theory. The gamma matrices for this theory can be easily read off using
\[\Gamma_{\mu}=i\partial\tilde{K}(p)/\partial p_{\mu} \tag{7}\]
where \(\tilde{K}\) is the Fourier transform of \(K\). The theory of Eq. 4 has its own fermion number symmetry which works as \(\Psi\to e^{i\theta}\Psi\). In the \(M\to 0\) limit, it also has an axial symmetry \(\Psi\to e^{i\Gamma_{\chi}\alpha}\Psi\) where this new axial symmetry has nothing to do with the symmetries of the original theory. We can now construct an axial current \(\mathcal{J}_{\mu}^{\chi}=\bar{\Psi}\Gamma_{\mu}\Gamma_{\chi}\Psi\) and write down the Ward identity for it
\[\partial_{\mu}\mathcal{J}_{\mu}^{\chi}=2M\bar{\Psi}\Gamma_{\chi}\Psi- \mathcal{A} \tag{8}\]
where \(\mathcal{A}\) is the "anomaly contribution"
\[\mathcal{A}=-2\lim_{\Lambda\to\infty}\mathrm{Tr}(\Gamma_{\chi}e^{K^{2}/ \Lambda^{2}})=-2\mathcal{I}(\infty). \tag{9}\]
This anomaly contribution can be computed using the methods outlined in Fujikawa [14]. It is found to vanish for the theory under consideration Eq. 4 and was elaborated in [8, 9]. At this point we can take the limit \(M\to 0\) in Eq. 8 to write
\[I=\mathcal{I}(0)=-\lim_{M\to 0}M\int d^{d+1}x\langle\bar{\Psi}\Gamma_{\chi}\Psi \rangle=-\lim_{M\to 0}\frac{1}{2}\int d^{d+1}x\langle\partial_{\mu} \mathcal{J}_{\mu}^{\chi}\rangle. \tag{10}\]
We have now expressed the index of the fermion operator in terms of the "axial" current of the theory in Eq. 4. We call this current the generalized Hall current. This generalized Hall current \(\bar{\Psi}\Gamma_{\mu}\Gamma_{\chi}\Psi\) can now be computed using one loop Feynman diagrams by perturbing in the mass defect as well as the other background fields. We will review how this is done for \(1+1\) dimensional Dirac fermion with a domain wall in its mass.
### \(1+1\) dimensional Dirac fermion in continuum
Let's consider a Dirac fermion in Minkowski space-time with Dirac mass denoted as \(\phi_{1}\). It has the Lagrangian
\[\mathcal{L}=\bar{\psi}(i\gamma^{\mu}\partial_{\mu}-\phi_{1})\psi \tag{11}\]
where \(\mu\) takes values \(0\) and \(1\), \(x_{0}\) is the temporal and \(x_{1}\) is the spatial coordinate. We can take the \(\gamma\) matrices as
\[\gamma^{0}=\sigma_{2},\gamma^{1}=-i\sigma_{1},\gamma^{\chi}=\sigma_{3} \tag{12}\]
where \(\gamma_{\chi}\) is the chirality operator. If we introduce a domain wall in \(\phi_{1}\) along the spatial coordinate \(x_{1}\), \(\phi_{1}=m_{0}\epsilon(x_{1})\) with \(m_{0}>0\) and
\[\epsilon(x)=\begin{cases}+1,&x\geq 0\\ -1,&x<0\end{cases}, \tag{13}\]
then we will have a massless fermion mode living on the domain wall at \(x_{1}=0\) as seen from the Dirac equation in the domain wall background
\[i\gamma^{0}\partial_{0}\psi+i\gamma^{1}\partial_{1}\psi-\phi_{1}\psi=0. \tag{14}\]
To look for a massless state, we can set \(\partial_{0}\psi=0\) and find that the Dirac equation is solved by
\[\psi=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix}e^{-m_{0}|x_{1}|}. \tag{15}\]
In order to construct the generalized Hall current we first have to analytically continue to Euclidean space-time where the Lagrangian is now
\[\mathcal{L}_{\text{E}}=\bar{\psi}(\gamma_{\mu}\partial_{\mu}+\phi_{1})\psi \tag{16}\]
with Euclidean gamma matrices defined as
\[\gamma_{0}=\sigma_{2},\gamma_{1}=-\sigma_{1},\gamma_{\chi}=\sigma_{3}. \tag{17}\]
We also denote the two-dimensional identity matrix as \(\sigma_{0}\). The corresponding fermion operator \(\gamma_{\mu}\partial_{\mu}+\phi_{1}\) has an unnormalizable zero eigenvalue eigenstate. However, this state does not count as a zero mode, which must be normalizable. In order to engineer a zero mode we turn on a background pseudo-scalar field with a domain wall profile in the Euclidean time direction. We also refer to this field as a diagnostic field. The corresponding Lagrangian is of the form
\[\mathcal{L}_{\text{E}}=\bar{\psi}(\gamma_{\mu}\partial_{\mu}+\phi_{1}+i\phi_{2 }\gamma_{\chi})\psi \tag{18}\]
where \(\phi_{2}=\mu_{0}\epsilon(x_{0})\) with \(\mu_{0}>0\). Let's denote this fermion operator as \(\mathcal{D}\) with
\[\mathcal{D}=(\gamma_{\mu}\partial_{\mu}+\phi_{1}+i\phi_{2}\gamma_{\chi}). \tag{19}\]
We find that the operator \(\mathcal{D}\) has one zero mode of the form
\[\psi=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix}e^{-m_{0}|x_{1}|-\mu_{0}|x_{0}|}. \tag{20}\]
We can also look for zero modes for the operator \(\mathcal{D}^{\dagger}\) and find that there are none for this specific choice of domain wall profile \((m_{0}>0,\mu_{0}>0)\). More generally, for other choices of the domain wall profile, e.g. with \(m_{0}>0,\mu_{0}<0\) or \(m_{0}<0,\mu_{0}>0\) we find a zero mode for the operator \(\mathcal{D}^{\dagger}\) and the operator \(\mathcal{D}\) has no zero modes. Similarly, the choice of \(m_{0}<0,\mu_{0}<0\) yields a zeromode for \(\mathcal{D}\) and none for \(\mathcal{D}^{\dagger}\). In other
words, the magnitude of the index of the fermion operator remains 1 as long as there is a domain wall in both \(\phi_{1}\) and \(\phi_{2}\). However, whether the index is positive or negative depends on the profile of choice.
There is a simple way to relate the domain wall profile with the index of the fermion operator. To see this, we can first express \(\phi_{1}+i\phi_{2}\) as \(\phi_{1}+i\phi_{2}=ve^{i\theta}\). It is easy to see that for a crossed domain wall profile in \(\phi_{1}\) and \(\phi_{2}\), if one considers a polar coordinate system centered at \(x_{0}=x_{1}=0\), then the phase variable \(\theta\) completes a winding of \(2\pi\) or \(-2\pi\) as one travels along a contour encircling the center over a polar angle of \(2\pi\). The crossed domain wall defect can therefore be thought of as a vortex in \(\phi_{1}+i\phi_{2}\). We have now constructed the intended fermion operator whose index is equal to the winding in the crossed domain wall configuration. Note that the index and the winding survives in the limit \(\mu_{0}\to 0\).
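A quick numerical illustration of this winding (our own sketch, not taken from [8, 9]): sample \(\phi_{1}+i\phi_{2}\) on a closed loop around the crossing point of the two walls and accumulate the phase differences.

```python
import numpy as np

def winding(m0=1.0, mu0=1.0, radius=3.0, npoints=400):
    """Winding of phi1 + i*phi2 around the crossed domain wall at the origin,
    with phi1 = m0*sign(x1) and phi2 = mu0*sign(x0)."""
    angles = np.linspace(0.0, 2.0 * np.pi, npoints, endpoint=False)
    x0, x1 = radius * np.cos(angles), radius * np.sin(angles)
    phi = m0 * np.sign(x1) + 1j * mu0 * np.sign(x0)
    dtheta = np.angle(phi / np.roll(phi, 1))  # phase change between successive points
    return dtheta.sum() / (2.0 * np.pi)

print(winding())          # -1 for this profile (the sign depends on the orientation convention)
print(winding(mu0=-1.0))  # opposite winding when one of the walls is flipped
```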
### Generalized Hall current (GHC) in the continuum
We now review the one loop Feynman diagram calculation to compute the generalized Hall current and then verify that the space-time integral of its divergence equals the index.
Following the prescription outlined in Eq. 4,5,6 we construct the \(K\) matrix which we can re-express in momentum space as
\[K=\Gamma_{\mu}k_{\mu}+i\phi_{2}\Gamma_{2}+i\phi_{1}\Gamma_{3} \tag{21}\]
where we have defined
\[\Gamma_{i}=\sigma_{1}\otimes\gamma_{i},\Gamma_{2}=\sigma_{1} \otimes\gamma_{\chi},\] \[\Gamma_{3}=-\sigma_{2}\otimes\sigma_{0},\Gamma_{\chi}=\sigma_{3} \otimes\sigma_{0}. \tag{22}\]
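As a quick consistency check (not part of the original derivation), one can verify numerically that these \(\Gamma\)'s obey the Euclidean Clifford algebra and that \(\Gamma_{\chi}\) anticommutes with all of them:

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0 + 0j, -1.0])

g0, g1, gchi = s2, -s1, s3                      # Euclidean gamma matrices of Eq. (17)
G = [np.kron(s1, g0), np.kron(s1, g1),          # Gamma_0, Gamma_1
     np.kron(s1, gchi), -np.kron(s2, s0)]       # Gamma_2, Gamma_3
Gchi = np.kron(s3, s0)

def anticomm(a, b):
    return a @ b + b @ a

for i in range(4):
    for j in range(4):
        expected = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anticomm(G[i], G[j]), expected)
    assert np.allclose(anticomm(G[i], Gchi), np.zeros((4, 4)))
print("Gamma matrices satisfy the Clifford algebra and anticommute with Gamma_chi.")
```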
To compute the "axial" current, we rewrite the mass terms as \(\phi_{1}+i\phi_{2}=(v+\rho(x))e^{i\theta(x)}\) and expand the \(K\) matrix in \(\theta\) with
\[K=K_{0}+\delta K \tag{23}\]
where \(K_{0}=\Gamma_{\mu}k_{\mu}+iv\Gamma_{3}\) and \(\delta K=iv\theta\Gamma_{2}+i\rho\Gamma_{3}\). Up to linear order in \(\theta\) we get
\[{\cal J}_{\mu}^{\chi} = +v\frac{\partial\theta}{\partial x_{\nu}}\int\frac{d^{2}q}{(2\pi)^{2}}\mbox{Tr}\left(\Gamma_{\mu}\Gamma_{\chi}\frac{dK_{0}^{-1}}{dq_{\nu}}\Gamma_{2}K_{0}^{-1}\right) \tag{24}\] \[= \epsilon_{\mu\nu}\partial_{\nu}\theta\int\frac{d^{2}q}{(2\pi)^{2}}\frac{4v^{2}}{(q^{2}+v^{2})^{2}}\] \[= \frac{1}{\pi}\epsilon_{\mu\nu}\partial_{\nu}\theta\]
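For completeness, the momentum integral in the second line is elementary: in polar coordinates,
\[\int\frac{d^{2}q}{(2\pi)^{2}}\,\frac{4v^{2}}{(q^{2}+v^{2})^{2}}=\frac{4v^{2}}{(2\pi)^{2}}\cdot 2\pi\int_{0}^{\infty}\frac{q\,dq}{(q^{2}+v^{2})^{2}}=\frac{2v^{2}}{\pi}\cdot\frac{1}{2v^{2}}=\frac{1}{\pi},\]
so the coefficient \(1/\pi\) is independent of \(v\).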
We can now compute the space-time integral of the divergence of this current and relate it to the index with
\[{\cal I}(0)=-\frac{1}{2}\int d^{2}x\partial_{\mu}{\cal J}_{\mu}^{\chi}=-\nu_{\theta} \tag{25}\]
where \(\nu_{\theta}\) is the winding of the crossed domain wall or vortex configuration. For the specific domain wall profile we have chosen this winding is \(-1\). Therefore we get an index of 1 which is consistent with the index we obtained for the fermion operator in the previous subsection. This demonstrates that whenever the Minkowski theory specified by Eq. 11 has a domain wall in fermion mass hosting massless edge state, one can construct a corresponding Euclidean fermion operator with the following properties:
1. The Euclidean fermion operator has an index of \(\pm 1\) in the presence of a background diagnostic field.
2. In the limit of diagnostic field going to zero this Euclidean operator coincides with the Euclidean analytic continuation of the Minkowski operator in Eq. 11.
3. The index of this Euclidean operator persists in the limit of the diagnostic field being taken to zero and is equal to the space-time integral of the divergence of the GHC.
In the next section, we will extend our Euclidean fermion operator construction to discrete space-time. In order to mimic the continuum construction sufficiently closely we will have to maintain the following
1. The lattice fermion operator or its Hermitian conjugate should not have more than one zeromode.
2. We will exclude regions in parameter space where the number of zeromodes for the fermion operator and its Hermitian conjugate are the same.
The second condition ensures that the index of the fermion operator is nonzero.
## 3 1+1 case on the lattice in infinite volume
We begin with the fermion operator in Eq. 19 and discretize space-time, setting the lattice spacing to 1. If we first set \(\phi_{2}=0\) and naively discretize space-time, we observe an important difference from the spectrum in the continuum: we see fermion doubling. That is to say, in the continuum we had a single solution to the equation \(\mathcal{D}|_{\phi_{2}=0}\psi=0\) with \(\psi\) being localized in the \(x_{1}\) direction and constant in the \(x_{0}\) direction. On the lattice, there is more than one solution of this form. Removing the fermion doubling so as to retain only one solution requires us to introduce higher dimensional operators into the Lagrangian, similar to the Wilson terms used in domain wall fermions [15, 16, 17, 18, 19]. Since our end goal is to construct a Euclidean fermion operator with a single zeromode, we have two simple choices for this higher dimensional term:
1. **Wilson-like operator:** Inspired by the Wilson term in lattice field theory, we introduce into the Lagrangian the following higher-derivative operators, which we call Wilson-like terms [19]: \[\mathcal{D}_{1}=\sum_{\mu}\gamma_{\mu}\nabla_{\mu}+\frac{R}{2}\nabla_{1}^{2}+i\gamma_{\chi}\frac{R}{2}\nabla_{0}^{2}.\] (26) We set the parameter \(R=1\).
2. **Fermion operator with Wilson term:** We introduce into the Lagrangian the standard Wilson term: \[\mathcal{D}_{2}=\sum_{\mu}\gamma_{\mu}\nabla_{\mu}+\frac{R}{2}\sum_{\mu}\nabla_{\mu}^{2}\] (27) We again set the Wilson parameter to \(R=1\).
We now look for the zeromodes of these operators by varying the parameters of our theory.
### Zeromodes
In this subsection we aim to obtain zeromode solutions by varying parameters such as the domain wall heights for the two types of lattice fermion operator introduced in the previous section. We first present an analytic calculation for the zeromode of the Wilson-like operator in infinite and finite volume. The corresponding expressions for the zeromode profile are simple and illuminating. An analogous analytic calculation for the Wilson fermion case is more difficult and not particularly illuminating. Therefore we defer the discussion of the Wilson fermion operator to subsection 3.1.3, where we present numerical analysis of both the Wilson and the Wilson-like cases.
#### 3.1.1 Analytic solution for the zeromode in infinite volume
We begin with the Wilson-like operator given by Eq. 26:
\[\mathcal{D}_{1}=\begin{pmatrix}\tilde{\phi}_{1}+i\tilde{\phi}_{2}&-i\nabla_{ 0}-\nabla_{1}\\ i\nabla_{0}-\nabla_{1}&\tilde{\phi}_{1}-i\tilde{\phi}_{2}\end{pmatrix} \tag{28}\]
where \(\tilde{\phi}_{1}=\phi_{1}+\frac{1}{2}\nabla_{1}^{2}\) and \(\tilde{\phi}_{2}=\phi_{2}+\frac{1}{2}\nabla_{0}^{2}\). With an ansatz \(\psi_{+}=\left(\begin{matrix}1\\ -1\end{matrix}\right)\varphi_{+}\) with \(\gamma_{1}\) eigenvalue \(+1\), we get two equations for \(\varphi_{+}\),
\[\nabla_{1}\varphi_{+}+\tilde{\phi}_{1}\varphi_{+} =0, \tag{29}\] \[\nabla_{0}\varphi_{+}+\tilde{\phi}_{2}\varphi_{+} =0. \tag{30}\]
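Writing out the difference operators explicitly (assuming the symmetric lattice conventions \(\nabla_{\mu}f(x)=\tfrac{1}{2}[f(x+\hat{\mu})-f(x-\hat{\mu})]\) and \(\nabla_{\mu}^{2}f(x)=f(x+\hat{\mu})+f(x-\hat{\mu})-2f(x)\)), Eq. 29 collapses to a one-step recursion along \(x_{1}\),
\[\tfrac{1}{2}\left[\varphi_{+}(x_{1}+1)-\varphi_{+}(x_{1}-1)\right]+\phi_{1}\varphi_{+}(x_{1})+\tfrac{1}{2}\left[\varphi_{+}(x_{1}+1)+\varphi_{+}(x_{1}-1)-2\varphi_{+}(x_{1})\right]=0\;\Longrightarrow\;\varphi_{+}(x_{1}+1)=(1-\phi_{1})\,\varphi_{+}(x_{1}),\]
and Eq. 30 gives the analogous recursion \(\varphi_{+}(x_{0}+1)=(1-\phi_{2})\varphi_{+}(x_{0})\) along \(x_{0}\).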
Then using an ansatz of \(\varphi_{+}=z_{0}^{x_{0}}z_{1}^{x_{1}}\) we see that there exists a normalizable solution with
\[z_{0} =(1-\phi_{2})\] \[z_{1} =(1-\phi_{1}) \tag{31}\]
when \(0<m_{0}<2\) and \(0<\mu_{0}<2\). Let's fix \(m_{0}=1\) and \(\mu_{0}=1\). Now we consider the ansatz of \(\psi_{-}=\left(\begin{matrix}1\\ 1\end{matrix}\right)\varphi_{-}\). The EOMs for \(\varphi_{-}\) are
\[\nabla_{1}\varphi_{-}-\tilde{\phi}_{1}\varphi_{-} =0, \tag{32}\] \[\nabla_{0}\varphi_{-}-\tilde{\phi}_{2}\varphi_{-} =0. \tag{33}\]
These are solved by the ansatz
\[\varphi_{-}=z_{0}^{x_{0}}z_{1}^{x_{1}}\ \ \text{with}\ \ z_{0}=\frac{1}{(1-\phi_{2})},\ \ z_{1}=\frac{1}{(1-\phi_{1})}. \tag{34}\]
The solution is not normalizable for our choice of \(m_{0}=1\) and \(\mu_{0}=1\). Therefore \(\psi_{-}\) is not a zeromode of \(\mathcal{D}_{1}\); thus \(\mathcal{D}_{1}\) has a single zeromode specified by the expression for \(\psi_{+}\) in Eq. 31. Now, let's look at the zero modes for \(\mathcal{D}_{1}^{\dagger}\). With an ansatz of \(\xi_{-}=\left(\begin{matrix}1\\ 1\end{matrix}\right)\chi_{-}\) and \(\xi_{+}=\left(\begin{matrix}1\\ -1\end{matrix}\right)\chi_{+}\) we get the following EOMs for \(\chi_{-}\) and \(\chi_{+}\),
\[\nabla_{1}\chi_{-}+\tilde{\phi}_{1}\chi_{-} =0\] \[\nabla_{0}\chi_{-}-\tilde{\phi}_{2}\chi_{-} =0 \tag{35}\]
and
\[\nabla_{1}\chi_{+}-\tilde{\phi}_{1}\chi_{+} =0\] \[\nabla_{0}\chi_{+}+\tilde{\phi}_{2}\chi_{+} =0 \tag{36}\]
Using an ansatz of the form \(z_{0}^{x_{0}}z_{1}^{x_{1}}\) for \(\chi_{-}\) and \(\chi_{+}\) we see that there are no normalizable solutions for either. Thus we have accomplished what we set out to do, i.e. engineer a Euclidean fermion operator on the lattice with an index of \(+1\) using the Wilson-like terms.
Note that if we vary the parameters, the pattern of zeromodes changes. E.g. for \(-2<m_{0}<0,-2<\mu_{0}<0\) we find a zeromode solution with \(\gamma_{1}\) eigenvalue \(-1\). Similarly, with \(2>m_{0}>0,0>\mu_{0}>-2\) and \(0>m_{0}>-2,2>\mu_{0}>0\) we find no normalizable zeromode for the operator \(\mathcal{D}_{1}\). However, we find a zeromode for the operator \(\mathcal{D}_{1}^{\dagger}\): \(\gamma_{1}\) eigenvalue \(-1\) for \(2>m_{0}>0,0>\mu_{0}>-2\) and \(\gamma_{1}\) eigenvalue \(1\) for \(2>\mu_{0}>0,0>m_{0}>-2\).
#### 3.1.2 Finite volume
Our next goal is to generalize the infinite volume construction to finite volume, i.e. on \(S^{1}\times S^{1}\). At this point we will have to resort to numerical techniques. We will take the lattice size to be \(L\times L\) where the domain wall in \(\phi_{1}\) is located at \(x_{1}=0\) and the anti-wall is located at \(x_{1}=L/2\). Similarly, the domain wall in \(\phi_{2}\) is located at \(x_{0}=0\) with anti-wall at \(x_{0}=L/2\). Therefore in effect we have four vortex-like defects at \((x_{0}=0,x_{1}=0)\), \((x_{0}=0,x_{1}=L/2)\), \((x_{0}=L/2,x_{1}=0)\) and \((x_{0}=L/2,x_{1}=L/2)\). There are several subtleties with this finite volume analysis which we describe below.
Exact zeromode and tuning: The two types of lattice fermion operators, which we call the Wilson-like and Wilson fermion operators, will in general not exhibit exact zeromodes in finite volume for an arbitrary choice of domain wall heights. To understand why this is the case, consider the Wilson-like fermion operator. Since we are considering \(S^{1}\times S^{1}\) with periodic boundary conditions, any solution to the equation of motion, including the zeromode, should satisfy:
\[\phi_{+}(x_{\mu}=-L/2)=\phi_{+}(x_{\mu}=L/2) \tag{37}\]
for \(\mu=0,1\). The solution obtained in Eq. 31 for an infinite lattice, with equal magnitude of the domain wall height on the two sides of the wall, will not satisfy this periodic boundary condition (PBC) in finite volume. In order to obtain an exact zeromode solution which satisfies the PBC, we will need to assume a more general domain wall configuration:
\[\phi_{1}(x_{1}) = \begin{cases}m_{+}&x_{1}\geq 0\\ m_{-}&x_{1}<0\end{cases}, \tag{38}\] \[\phi_{2}(x_{0}) = \begin{cases}\mu_{+}&x_{0}\geq 0\\ \mu_{-}&x_{0}<0\end{cases}.\]
Then we find an exact zeromode for the choice
\[\frac{1}{1-m_{-}} =(1-m_{+}), \tag{39}\] \[\frac{1}{1-\mu_{-}} =(1-\mu_{+}).\]
Note that these equations do not depend on the lattice size, thus if they are satisfied then the exact zeromode of \(\mathcal{D}_{1}\) will exist in any volume. A similar analysis is much more complicated for the Wilson case and is not particularly interesting.
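A small numerical illustration of the tuning condition, under the assumption (suggested by the ansatz above and by Eq. 47 below) that the zeromode factor in each direction obeys the one-step recursion \(\psi(x+1)=(1-\phi(x))\psi(x)\): on a circle with the wall at \(x=0\) and the anti-wall at \(x=L/2\), single-valuedness requires the product of these factors around the circle to equal one, which holds for every \(L\) exactly when Eq. 39 is satisfied. The helper names are illustrative.

```python
import numpy as np

def tuned_partner(m_minus):
    """Given the wall height m_- on one side, return the m_+ satisfying Eq. 39,
    i.e. (1 - m_+)(1 - m_-) = 1."""
    return 1.0 - 1.0 / (1.0 - m_minus)

def periodicity_defect(m_plus, m_minus, L):
    """Product of the one-step factors (1 - phi_1(x)) once around a circle of length L
    with the wall at x = 0 and the anti-wall at x = L/2.  An exact periodic zeromode of
    the assumed product form exists only if this equals 1."""
    phi = np.where(np.arange(L) < L // 2, m_plus, m_minus)
    return np.prod(1.0 - phi)

m_minus = -1.0                        # the tuned choice used later (cf. Fig. 9 caption)
m_plus = tuned_partner(m_minus)       # -> 0.5, matching m_+ = mu_+ = 1/2
for L in (8, 16, 32, 64):
    print(L, periodicity_defect(m_plus, m_minus, L))   # equals 1 for every L when tuned
print(periodicity_defect(0.5, -0.5, 32))               # untuned walls: product != 1
```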
It is important to consider, however, that the Minkowski space domain wall theory in continuous space-time and in infinite volume hosts massless edge states without requiring any tuning of the domain wall height. Therefore, on the finite lattice too, we seek a formulation which does not rely on tuning of the domain wall heights. Since a finite volume lattice fermion operator \(\mathcal{G}\) does not have an exact zeromode in general, we shift our attention to the operator \(\mathcal{G}^{\dagger}\mathcal{G}\). This is also motivated by the observation that the index formula for the fermion operator involves the kernels of the operators \(\mathcal{G}^{\dagger}\mathcal{G}\) and \(\mathcal{GG}^{\dagger}\). However, the operator \(\mathcal{G}^{\dagger}\mathcal{G}\) (or \(\mathcal{GG}^{\dagger}\)) does not have exact zeromodes in finite volume either. In order to recover them one has to take the infinite volume limit. Interestingly, this limit is smooth for \(\mathcal{G}^{\dagger}\mathcal{G}\) (or \(\mathcal{GG}^{\dagger}\)) but not necessarily for \(\mathcal{G}\) (or \(\mathcal{G}^{\dagger}\)) itself. We will use this observation to enable the GHC construction. The index formula in infinite volume is related to the difference in the number of zeromodes of the operators \(\mathcal{D}_{1/2}\mathcal{D}_{1/2}^{\dagger}\) and \(\mathcal{D}_{1/2}^{\dagger}\mathcal{D}_{1/2}\). We will work with the same definition for the “index” in finite volume. As we will see, in finite volume the operators \(\mathcal{D}_{1/2}\mathcal{D}_{1/2}^{\dagger}\) and \(\mathcal{D}_{1/2}^{\dagger}\mathcal{D}_{1/2}\) will exhibit smooth convergence towards the infinite volume zeromodes without any fine tuning of the domain wall heights, whereas \(\mathcal{D}_{1/2}\) will not. This will enable us to construct a tuning-independent lattice GHC. Although we do not need fine tuning of the domain wall heights, they must satisfy the following constraint to host a zeromode in the infinite volume limit: e.g. for a crossed domain wall configuration of the form \(\phi_{1}=m_{0}\epsilon(x_{1})\) and \(\phi_{2}=\mu_{0}\epsilon(x_{0})\) we must have \(0<m_{0},\mu_{0}<2\) in order for there to be a zeromode. Therefore, in the rest of the paper we will choose parameters that satisfy this condition. Finally, even though our goal is to construct a GHC formulation which does not rely on tuning of the domain wall height, we will present the results for the tuned case of the Wilson-like fermion operator to illustrate a GHC in the presence of an exact lattice zeromode.
Index in finite volume: In a finite volume, a domain wall setup will appear accompanied by an anti-wall. As a result, with a domain wall in the mass and in the diagnostic field, we will have four vortex-like defects in finite volume, two vortices and two anti-vortices, as described at the beginning of this subsection. Clearly the net winding of this system is zero. Therefore the net “index” in this finite volume lattice theory is
also zero. However, locally in a region near each of the vortex defects we should be able to define an “index” which we can then attempt to connect to a lattice version of the generalized Hall current. In other words, in finite volume, the operators \({\cal D}_{1/2}{\cal D}_{1/2}^{\dagger}\) and \({\cal D}_{1/2}^{\dagger}{\cal D}_{1/2}\) have the same number of zeromodes. This implies that the difference between the number of zeromodes for the two is zero, or the net “index” is zero. However, the zeromodes for these two operators will be localized on different vortex defects. E.g., \({\cal D}_{1}^{\dagger}{\cal D}_{1}\) will have zeromodes on the defects at \((x_{0}=0,x_{1}=0)\) and \((x_{0}=L/2,x_{1}=L/2)\). Similarly, \({\cal D}_{1}{\cal D}_{1}^{\dagger}\) will have zeromodes at \((x_{0}=L/2,x_{1}=0)\) and \((x_{0}=0,x_{1}=L/2)\). As a result, e.g., near the vortex at \((x_{0}=0,x_{1}=0)\) we expect the index to be 1. Our goal is to show that the integral of the divergence of the lattice GHC in a region around the vortex equals the index.
#### 3.1.3 Zeromode numerics and singular value decomposition (SVD)
In this subsection we study the eigenvalues of the finite volume lattice operators numerically. Our goal is to map the lowest eigenstate of the suitable finite volume lattice operator to the zeromode of the infinite volume continuum fermion operator. As stated earlier, this mapping cannot be performed smoothly in the infinite volume limit by directly considering the eigenvalues of \({\cal D}_{1/2}\) and \({\cal D}_{1/2}^{\dagger}\). Instead, we need to consider the eigenvalues of \({\cal D}_{1/2}^{\dagger}{\cal D}_{1/2}\) and \({\cal D}_{1/2}{\cal D}_{1/2}^{\dagger}\). Our goal therefore, is to find the lowest eigenvalues of \({\cal D}_{1/2}^{\dagger}{\cal D}_{1/2}\) and \({\cal D}_{1/2}{\cal D}_{1/2}^{\dagger}\) and confirm that they go to zero in the infinite volume limit. This discussion is organized as follows: first, we present numerical methods for finding the zeromodes of \({\cal D}_{1/2}{\cal D}_{1/2}^{\dagger}\) and \({\cal D}_{1/2}^{\dagger}{\cal D}_{1/2}\). We first apply this method to study a \(0+1\) dimensional Wilson fermion operator with domain wall. We then apply it to the lattice fermion operators we wish to study in \(1+1\) dimensions, i.e. \({\cal D}_{1}\) and \({\cal D}_{2}\).
To describe the numerical technique, we use a fermion operator \({\cal D}\) which would serve as a proxy for both \({\cal D}_{1/2}\). We can now consider the spectrum of the operators \({\cal D}{\cal D}^{\dagger}\) and \({\cal D}^{\dagger}{\cal D}\) using the eigenvalue equation
\[{\cal D}{\cal D}^{\dagger}u_{i}=\sigma_{i}^{2}u_{i}, \tag{40}\] \[{\cal D}^{\dagger}{\cal D}v_{i}=\sigma_{i}^{2}v_{i}, \tag{41}\]
where \(\sigma_{i}^{2}\) is an eigenvalue. The eigenvectors \(u_{i}\) and \(v_{i}\) are called left and right _singular vectors_ and corresponding \(\sigma_{i}\geq 0\) is called a _singular value_ of \({\cal D}\). Note that the vectors \(u_{i}\) and \(v_{i}\) are linearly independent since the fermion operator is not normal, i.e. \([{\cal D}^{\dagger},{\cal D}]\neq 0\).
Another possible way to arrive at the same result is to look for a vector \(v^{\prime}\) which minimizes the norm \(|{\cal D}v^{\prime}|\). The square of this norm is a positive semi-definite quadratic form given by \({\cal D}^{\dagger}{\cal D}\); therefore the minimum is attained by the eigenvector \(v_{min}\) corresponding to the smallest eigenvalue \(\sigma_{min}^{2}\). Analogously, \(u_{min}\) delivers the minimum of \(|{\cal D}^{\dagger}u^{\prime}|\).
Interestingly, there is a simple relationship between \(u_{i}\) and \(v_{i}\), since together with \(\sigma_{i}\geq 0\) they define a singular value decomposition (SVD) of the operator \({\cal D}\):
\[{\cal D}^{\dagger}u_{i}=\sigma_{i}v_{i}, \tag{42}\] \[{\cal D}v_{i}=\sigma_{i}u_{i}.\]
The SVD can be written in a compact matrix form as follows:
\[{\cal D}=U\Sigma V^{\dagger}, \tag{43}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \multicolumn{2}{c|}{Wilson-like operator \({\cal D}_{1}\)} & \multicolumn{2}{c|}{Wilson operator \({\cal D}_{2}\)} \\ \hline Tuned domain walls? & Yes & No & No \\ \hline Exact zeromode? & & & Fig. 3 and Fig. 6 for \(\phi_{2}(x_{0})\to 0\) \\ \hline Singular values & — & & Fig. 4 and Fig. 5 \\ \hline Generalized Hall Current & Fig. 6 and Fig. 7(a) & Similar to \({\cal D}_{2}\) & Fig. 6 and Fig. 7(b) \\ \hline “Index” & Fig. 6 & & Fig. 6 and Fig. 7(c) \\ \hline \end{tabular}
\end{table}
Table 1: List of figures for different numerical setups presented in the paper.
where the unitary matrix \(U\) is composed of the (column) singular vectors \(u_{i}\), the unitary matrix \(V\) of the singular vectors \(v_{i}\), and \(\Sigma\) is a diagonal matrix of the corresponding singular values \(\sigma_{i}\). It is clear from the SVD that neither \(u_{i}\) nor \(v_{i}\) is straightforwardly related to the eigenvectors of \(\mathcal{D}\) if the operator is not normal. However, the singular values of the operators \(\mathcal{D}\) and \(\mathcal{D}^{\dagger}\) map one-to-one to the eigenvalues of the operators \(\mathcal{D}^{\dagger}\mathcal{D}\) and \(\mathcal{D}\mathcal{D}^{\dagger}\). Therefore, the SVD of \(\mathcal{D}/\mathcal{D}^{\dagger}\) is equivalent to the eigen-decomposition of \(\mathcal{D}^{\dagger}\mathcal{D}/\mathcal{D}\mathcal{D}^{\dagger}\). In the rest of the paper we will refer to the lowest eigenmode of \(\mathcal{D}^{\dagger}\mathcal{D}/\mathcal{D}\mathcal{D}^{\dagger}\) as the near-zeromode of the operator \(\mathcal{D}/\mathcal{D}^{\dagger}\) and to the vectors \(u_{i},v_{i}\) as singular vectors.
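The relations in Eqs. 40-43 are straightforward to verify numerically; the following self-contained sketch does so for a random non-normal matrix standing in for the lattice operator.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))   # a generic non-normal operator

# SVD: D = U Sigma V^dagger  (Eq. 43)
U, sigma, Vh = np.linalg.svd(D)
V = Vh.conj().T

# Eq. 42:  D v_i = sigma_i u_i   and   D^dagger u_i = sigma_i v_i
print(np.allclose(D @ V, U * sigma))               # True
print(np.allclose(D.conj().T @ U, V * sigma))      # True

# sigma_i^2 are the eigenvalues of D^dagger D (and of D D^dagger), Eqs. 40-41
evals = np.sort(np.linalg.eigvalsh(D.conj().T @ D))
print(np.allclose(evals, np.sort(sigma**2)))       # True
```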
**Wilson fermion operator in \(0+1\) dimension:**
We first demonstrate the utility of our approach in the simple case of \(0+1\) dimensional Wilson fermion operator \(\mathcal{D}_{1d}\) in the presence of a domain wall. We use periodic boundary conditions on \(S^{1}\) and a domain wall in the fermion mass:
\[m(x) = \begin{cases}m_{+}&L/2>x\geq 0\\ m_{-}&-L/2\leq x<0\end{cases}. \tag{44}\]
The equation of motion is given by:
\[\mathcal{D}_{1d}\psi(x)=\frac{1}{2}\left(\psi(x+1)-\psi(x-1)\right)+m(x)\psi(x) +\frac{R}{2}\left(\psi(x+1)+\psi(x-1)-2\psi(x)\right)=0, \tag{45}\]
which in the case of \(R=1\) can be simplified to:
\[\mathcal{D}_{1d}\psi(x)=(m(x)-1)\psi(x)+\psi(x+1)=0. \tag{46}\]
We numerically find singular vectors \(u_{i}\) and \(v_{i}\) together with singular values \(\sigma_{i}\) of this operator and study their dependence on the lattice size \(L\).
Let us first consider the singular values \(\sigma(L)\), which are depicted in Fig. 1. We observe that the smallest singular value \(\sigma_{0}\) approaches zero exponentially fast: \(\sigma_{0}\sim O(e^{-L})\), whereas the other singular values remain finite. This indicates that in the infinite volume there exists a zero mode of \(\mathcal{D}_{1d}\), given by the infinite volume limit of the corresponding singular vector \(v_{min}\).
We show the near-zero mode \(v_{0}\) on the Fig. 1(a). We compare it to exact solution of equation \(\mathcal{D}_{1d}\psi_{inf}(x)=0\) in the infinite volume which is given by:
\[\psi_{0}^{inf}(x)=\begin{cases}\left(1-m_{+}\right)^{x},&x\geq 0,\\ \left(1-m_{-}\right)^{x},&x<0,\end{cases} \tag{47}\]
where \(m_{\pm}\) are bulk fermion masses on either sides of the domain wall. Here we work with \(m_{-}=3/4,m_{+}=-1\). We find an excellent agreement between \(v_{0}\) and \(\psi_{inf}\) already for lattice sizes \(L>20\). We also show on the Fig. 1(b) how near-zeromodes of \(\mathcal{D}_{1d}\) and \(\mathcal{D}_{1d}^{\dagger}\) are related by SVD in Eq. 42.
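A minimal sketch of this \(0+1\) dimensional check, assuming the operator of Eq. 46 with the wall at \(x=0\), the anti-wall at \(x=L/2\), and the mass values quoted above (the helper name and printed diagnostics are illustrative): it builds \(\mathcal{D}_{1d}\) as a dense matrix and shows the smallest singular value falling exponentially with \(L\), as in Fig. 1.

```python
import numpy as np

def wilson_1d(L, m_plus=-1.0, m_minus=0.75):
    """D_1d of Eq. 46 on a periodic lattice of L sites, (m(x) - 1) psi(x) + psi(x+1),
    with the domain wall mass of Eq. 44: m_+ on one half of the circle, m_- on the other."""
    m = np.where(np.arange(L) < L // 2, m_plus, m_minus)
    D = np.diag(m - 1.0) + np.roll(np.eye(L), 1, axis=1)   # D[x, (x+1) mod L] = 1
    return D

for L in (8, 16, 24, 32, 40):
    sigma = np.linalg.svd(wilson_1d(L), compute_uv=False)
    print(L, sigma[-1])    # smallest singular value drops roughly like exp(-alpha L)
```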
**Fermion operators in \(1+1\) dimension:**
Let us now consider \(1+1\) dimensional fermion operators we proposed in section 3 and analyze the corresponding zeromodes and near-zeromodes. As mentioned before, in \(1+1\) D, it is possible to obtain an exact zeromode for the Wilson-like operator in finite volume by tuning the domain wall heights. However, we didn't find such a solution for the Wilson fermion operator. Here we will use SVD to instead find near-zeromodes for the Wilson fermion \(\mathcal{D}_{2}\) and Wilson-like fermion operators \(\mathcal{D}_{1}\). The results for the Wilson-like case are very similar to the Wilson fermion case. Therefore, we only present results for the Wilson fermion case here.
In order to study the singular values of the Wilson fermion operator we use a two-dimensional lattice of size \(L\times L\) and impose periodic boundary conditions. We also use the domain wall configuration Eq. 38 with \(2>-m_{-}=m_{+}>0\) and \(2>-\mu_{-}=\mu_{+}>0\).
By performing SVD numerically for different lattice sizes \(L\) we find a complete set of singular values \(\sigma_{i}(L)\) and corresponding singular vectors \(v_{i}(L)\) and \(u_{i}(L)\). Let us first consider few lowest singular values \(\sigma_{i}(L)\) which are presented on the Fig. 3. We observe that the smallest two of them (take them to be \(i=0,1\)) are degenerate and exhibit clear exponential decay as \(L\to\infty\). Thus, we find the first evidence for the emergence of two degenerate zero modes of the Wilson fermion operator in the infinite volume.
Let us now study corresponding singular vectors \(v_{i}(L)\) and \(u_{i}(L)\). Note that there are two degenerate singular vectors \(v_{i=0,1}(L)\) corresponding to the lowest \(\sigma_{0}=\sigma_{1}\). The same is true for \(u_{i}\). These degenerate vectors are some superposition of two near-zero modes localized on appropriate vortex defects, i.e. \(v_{i=0,1}\) are superpositions of near-zeromodes on defects with winding \(-1\). These two defects are localized at \((x_{0}=0,x_{1}=0)\) and \((x_{0}=L/2,x_{1}=L/2)\). Similarly, \(u_{i=0,1}\) are superpositions of near-zeromodes located on defects with winding \(1\), \((x_{0}=0,x_{1}=L/2)\) and \((x_{0}=L/2,x_{1}=0)\).
At this point we can change basis by writing \(v^{\prime}_{i}=\alpha_{i}v_{0}+\beta_{i}v_{1}\) with \(|\alpha_{i}|^{2}+|\beta_{i}|^{2}=1\), where \(i=0,1\), in order to find near-zeromodes which are completely localized on the vortices. One can achieve this by minimizing the Inverse Participation Ratio (IPR), which can serve as a measure of the localization:
\[\text{IPR}=\frac{1}{\sum_{x_{0},x_{1}}|v^{\prime}(x_{0},x_{1})|^{4}}. \tag{48}\]
Intuitively, if a mode is uniformly distributed over entire lattice of volume \(V\) then one would find that \(\text{IPR}=V\). On the other hand, if the mode is localized at a single point then \(\text{IPR}=1\).
Using this method we find two vectors \(v^{\prime}_{i=0,1}(L)\) which are exponentially localized on two vortices of the same winding number \(\nu_{\theta}=-1\), as shown in Fig. 4 and Fig. 5. Thus, we have identified two near-zeromodes of the Wilson fermion operator \(\mathcal{D}_{2}\). For convenience, we will refer to these vectors as \(v_{i=0,1}\) and forgo the superscript prime, as in \(v^{\prime}\to v\). We do the same for the vectors \(u_{0/1}\). The same procedure yields two vectors \(u_{i}(L)\) corresponding to the same two singular values, localized on the other two vortices of winding number \(\nu_{\theta}=+1\) (at \(x_{0}=0,x_{1}=L/2\) and \(x_{0}=L/2,x_{1}=0\)).
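A minimal sketch of this localization step, using the convention of Eq. 48 in which a uniformly spread unit-norm mode has IPR equal to the lattice volume and a point-localized one has IPR equal to 1. The function names and the toy input are illustrative, and for complex-valued modes a relative-phase scan would be added on top of the real rotation shown here.

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio in the convention of Eq. 48: the lattice volume for a
    uniformly spread unit-norm mode, and 1 for a point-localized one."""
    v = v / np.linalg.norm(v)
    return 1.0 / np.sum(np.abs(v) ** 4)

def localize_pair(v0, v1, n_angles=720):
    """Rotate inside the two-fold degenerate subspace, v' = cos(t) v0 + sin(t) v1,
    and return the combination with the smallest IPR together with its orthogonal partner."""
    ts = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    iprs = [ipr(np.cos(t) * v0 + np.sin(t) * v1) for t in ts]
    t_best = ts[int(np.argmin(iprs))]
    a = np.cos(t_best) * v0 + np.sin(t_best) * v1
    b = -np.sin(t_best) * v0 + np.cos(t_best) * v1
    return a, b

# toy check: two localized peaks hidden in symmetric/antisymmetric superpositions
x = np.arange(64)
p0, p1 = np.exp(-np.abs(x - 16)), np.exp(-np.abs(x - 48))
v0, v1 = (p0 + p1), (p0 - p1)
a, _ = localize_pair(v0 / np.linalg.norm(v0), v1 / np.linalg.norm(v1))
print(ipr(v0), ipr(a))   # the rotated mode has roughly half the IPR (a single peak)
```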
Finally, let us describe how the near-zeromodes behave if one switches the diagnostic field off, i.e. \(\phi_{2}\to 0\). If the lattice volume is kept fixed, then at sufficiently small \(\phi_{2}\) the near-zeromodes completely delocalize in the direction \(\mu=0\), and the SVD spectrum becomes consistent with that of the \(\phi_{2}=0\) case. Namely, we find that the near-zeromodes transform into plane wave excitations living on the two remaining domain walls. This can be seen by direct inspection of \(|v_{i}(x_{0},x_{1})|\) and from the behavior of the singular values \(\sigma_{i}(L)\sim 2\pi n/L\), characteristic of the spectrum of plane waves in a finite box. Furthermore, the lowest singular values are \(4\) times degenerate, accounting for the \(2\) remaining domain walls and \(2\) possible spinor polarizations. Additionally, by imposing an anti-periodic boundary condition in the \(\mu=0\) direction we again observe that the flow of singular values \(\sigma_{i}(L)\sim 2\pi(n+1/2)/L\) is characteristic of plane waves in an anti-periodic box, see
Figure 1: The flow of the \(10\) smallest singular values \(\sigma_{i}\) of the \(0+1\)-dimensional Wilson fermion operator \(\mathcal{D}_{1d}\) (Eq. 46) as a function of the lattice size \(L\). One can clearly see that the smallest singular value follows the exponential law \(\sigma_{0}\sim e^{-\alpha L}\). The corresponding singular vector is localized on one of the domain walls. The domain wall profile is given by Eq. 44 with \(m_{+}=-1\) and \(m_{-}=3/4\). Note logarithmic scale.
Fig. 6. The true near-zeromode should not, in general, be sensitive to such a change of boundary conditions. This reorganization happens because for sufficiently small \(\phi_{2}\) the localization width of the near-zero modes becomes comparable to or bigger than the lattice size, so they completely delocalize. If \(\phi_{2}\) is kept fixed then one should recover the near-zeromodes by increasing the volume. Therefore we find that the limits \(\phi_{2}\to 0\) and \(L\rightarrow\infty\) do not commute. In order to correctly define the “index” from the finite volume analysis one has to take the infinite volume limit first and only then switch the diagnostic field off.
## 4 Generalized Hall Current in the finite volume
In this part we will study the realization of the Generalized Hall Current (GHC) for the Wilson-like and the Wilson fermions and corresponding "indices". Before we proceed to the computations, let us outline the plan of this section. First, we will present how we've computed the GHC on the lattice. Next, we will study GHC for the Wilson-like operator \(\mathcal{D}_{1}\), taking the domain wall heights to satisfy the tuning condition (Eq. 39). This will illustrate how the GHC reproduces the index of the fermion operator in finite volume for the case when there is an exact zeromode. This will give us an opportunity to study GHC and its relation to the index without complications of the finite volume effects.
Next, we will proceed to the study of the Wilson fermion operator and see how near-zeromodes and finite volume effects influence the realization of the GHC. Results for the Wilson-like operator in the same setup (when exact zeromodes are absent) are essentially the same; therefore we will not present them.
### Computation of the Generalized Hall Current on the lattice
The lattice generalized Hall current \(J_{\mu}^{H}(x)\) can be defined as follows:
\[J_{\mu}^{H}(x)=\bar{\Psi}\tilde{\Gamma}_{\mu}(x)\Gamma_{\chi}\Psi \tag{49}\]
where \(\tilde{\Gamma}_{\mu}(x)\) is given by:
\[\tilde{\Gamma}_{\mu}(x)=-i\left.\frac{\delta K(A_{\mu}(x))}{\delta A_{\mu}(x) }\right|_{A_{\mu}(x)=0}. \tag{50}\]
Here \(A_{\mu}(x)\) is a \(U(1)\) gauge field and \(K(A_{\mu}(x))\) is a gauged lattice Dirac operator of the double theory obtained via standard Peierls substitution \(\delta_{x+a_{\mu},y}\rightarrow\delta_{x+a_{\mu},y}\)\(\exp(iA_{\mu}(x))\).
The expectation value of \(J_{\mu}^{H}(x)\) is evaluated numerically by straightforward computation of the matrix \((K+M)^{-1}\) and taking a trace. The divergence is computed as usual with the help of lattice backward difference \(\nabla_{\mu}^{B}\):
\[\nabla_{\mu}^{B}J_{\mu}^{H}(x)=\sum_{\mu=0,1}\left(J_{\mu}^{H}(x-a_{\mu})-J_{ \mu}^{H}(x)\right). \tag{51}\]
We posit that the space-time integral of the divergence should produce the "index" of interest. We compute the "index" \(I_{lat}\) according to the lattice version of the Eq. 10:
\[I_{lat}=-\frac{1}{2}\sum_{x\in S}\nabla_{\mu}^{B}J_{\mu}^{H}(x) \tag{52}\]
where \(S\) is the area over which the divergence of the lattice GHC current \(J_{\mu}^{H}(x)\) is integrated. The area \(S\) can be the entire lattice; however, in that case the total index has to vanish. Thus we will integrate only over some portion of the lattice adjacent to the defect (vortex) of interest. To implement this, we divide the lattice into 4 equal squares centered around each of the 4 vortices created by the domain walls and then integrate the divergence of the lattice GHC over these four squares separately to compute the corresponding index.
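The two lattice operations above, Eq. 51 and Eq. 52, amount to a few lines of array code. The sketch below assumes the current has been stored as an array `J[mu, x0, x1]` on a periodic \(L\times L\) lattice; the helper names and the storage convention are illustrative, and the random array only stands in for a numerically computed \(J_{\mu}^{H}(x)\).

```python
import numpy as np

def backward_divergence(J):
    """Lattice backward difference of Eq. 51 applied to a current J[mu, x0, x1]
    on a periodic lattice (same sign convention as in the text)."""
    div = np.zeros_like(J[0])
    for mu in (0, 1):
        div += np.roll(J[mu], 1, axis=mu) - J[mu]
    return div

def quadrant_index(div, corner):
    """Lattice 'index' of Eq. 52 summed over the (L/2) x (L/2) square S whose lower-left
    corner is `corner`, i.e. one of the four squares centered on the four defects."""
    L = div.shape[0]
    x0, x1 = corner
    sl0 = np.arange(x0, x0 + L // 2) % L
    sl1 = np.arange(x1, x1 + L // 2) % L
    return -0.5 * div[np.ix_(sl0, sl1)].sum()

L = 8
J = np.random.default_rng(0).normal(size=(2, L, L))      # stand-in for a computed current
div = backward_divergence(J)
print(quadrant_index(div, corner=(-L // 4, -L // 4)))     # square centered on (0, 0)
```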
### GHC for Wilson-like lattice operator and exact zeromodes
Let us first present results for GHC for Wilson-like operator \(\mathcal{D}_{1}\) when domain wall configuration satisfies the tuning condition Eq. 39. In this case there is an exact zeromode for the fermion operator in finite volume.
Figure 3: The flow of 40 smallest singular values \(\sigma_{i}\) of the Wilson fermion operator \(\mathcal{D}_{2}\) as a function of the lattice size \(L\times L\). One can clearly see two degenerate singular values which follow exponential law \(\sigma_{i=0,1}\sim e^{-\alpha L}\). Corresponding singular vectors are localized on two different vortices with winding numbers \(\nu_{\theta}=-1\). The domain wall profile is given by Eq. 38 with \(m_{+}=\mu_{+}=1/2\) and \(m_{-}=\mu_{-}=-1/2\). Note logarithmic scale.
We've computed the GHC \(J_{\mu}^{H}(M)\) and the “index” \(I_{lat}(M)\) for several values of the regulator mass from \(M=10^{-5}\) to \(2\) on an \(L\times L\) lattice with \(L=32\). We present the current \(J_{\mu}^{H}(x)\) and its divergence in Fig. 7a for the smallest value of \(M\), with \(M=10^{-5}\). We observe that the divergence is localized around the vortices. It has maximal value at the vortex center. The sign is consistent with the winding number of the defect. The current \(J_{\mu}^{H}(M)\) flows preferentially along the edges of the domains from one vortex to another. The divergence exhibits an exponential decay around the vortex, as shown in Fig. 8a.
Now we want to verify that the space-time integral of the divergence of the lattice GHC produces the correct “index”. As discussed previously, we divided the lattice into 4 equal squares centered around each vortex and performed the integration of the divergence of the GHC over them. Due to the exponential decay of the GHC away from the defect, we expect that the integral would approach the infinite volume value quickly. The resulting “index” \(I_{lat}(M)\) is shown in Fig. 9a as a function of \(M\). We observe that it clearly goes towards \(\pm 1\) as \(M\to 0\). The sign of the index depends on the vortex defect in consideration. Also, as expected, for very large \(M\) the “index” approaches zero.
In order to quantify finite volume effects we have computed the deviation:
\[\epsilon(L)=|\pm 1-I_{lat}(M\to 0)| \tag{53}\]
where the plus or minus sign is chosen according to the winding of the vortex and \(I_{lat}\) is the corresponding “index” computed by integrating \(\nabla_{\mu}^{B}J_{\mu}^{H}\). This function is shown in Fig. 9b, where one can see that the error is indeed exponentially small: \(\epsilon(L)\sim e^{-L}\). Therefore, after performing the infinite volume extrapolation, our computations show that the lattice GHC correctly reproduces the index of the Euclidean fermion operator. Finally, we find that the generalized Hall current and its divergence vanish when \(\phi_{2}\to 0\) for
Figure 4: Density plot of absolute values of near-zeromodes \(v_{0}\) and \(v_{1}\) of the Wilson fermion operator \({\cal D}_{2}\) and \(u_{0}\) and \(u_{1}\) of \({\cal D}_{2}^{\dagger}\) corresponding to two smallest singular values \(\sigma_{i=0,1}\) (which are degenerate) for the lattice size \(48\times 48\). Arrows illustrate how \(u_{i=0,1}\) and \(v_{i=0,1}\) are related by the SVD Eq. 42. The domain wall profile is given by Eq. 38 with \(m_{+}=\mu_{+}=1/2\) and \(m_{-}=\mu_{-}=-1/2\). Gray dashed line marks the position of domain walls. Note logarithmic scale.
fixed \(L\) and \(M\). This shows that we have to take the infinite volume limit first and then take \(\phi_{2}\) to zero in order to retain a nonzero index in the limit of \(\phi_{2}\to 0\).
### GHC for Wilson fermion operator and near-zero modes
We now present results for GHC and the index for the Wilson fermion operator \(\mathcal{D}_{2}\). The results for the untuned Wilson-like operator are very similar.
We use the same strategy in order to compute the “index”, which is presented in Fig. 10 for several values of \(M\) and lattice sizes \(L=8\ldots 32\). First of all, we observe that the “index” vanishes when we naively take \(M\to 0\). This is an expected behaviour since the spectrum of \(\mathcal{D}_{2}\) is, strictly speaking, gapped: \(\sigma_{0}\sim\exp(-L)\neq 0\). In order to understand it better one can expand the contribution of \(\mathrm{Dim}(\ker\mathcal{D}_{2})\) in powers of \(M/\sigma_{0}\ll 1\):
\[\frac{M^{2}}{D_{2}^{\dagger}D_{2}+M^{2}}=\frac{M^{2}}{\sigma_{0}^{2}}+O\left( \frac{M^{4}}{\sigma_{0}^{4}}\right). \tag{54}\]
We indeed find this dependence as shown on the Fig. 10. The "index" exhibits a pronounced maximum at some \(M_{0}>\sigma_{0}\) and then decays exponentially fast as \(M\to\infty\). We find that the maximum tends to \(\pm 1\) as lattice size gets bigger, also exponentially fast, as illustrated on the Fig. 10(a). Moreover, the position of the maximum \(M_{0}\) tends to zero as \(L\to\infty\) exponentially as well, see Fig. 10(b). Therefore we find in order to reproduce the index of the fermion operator one has to take infinite volume limit first and only then \(M=M_{0}\to 0\).
## 5 Conclusions
In this paper we extended the idea of generalized Hall current proposed in [8, 9] to discrete space-time in finite volume. Our construction is focused on one of the several examples presented in [8, 9]: \(1+1\)
Figure 5: Absolute value of the near-zeromode \(v_{0}\) of the Wilson fermion operator \(\mathcal{D}_{2}\) along several \(x_{0}=\mathrm{const}\) and \(x_{1}=\mathrm{const}\) slices for the lattice size \(48\times 48\). The domain wall profile is given by Eq. 38 with \(m_{+}=\mu_{+}=1/2\) and \(m_{-}=\mu_{-}=-1/2\). Note logarithmic scale.
-dimensional Dirac fermion with a domain wall in its mass. It is well known that the domain wall hosts massless fermion in the continuum. The continuum GHC construction connects the existence of this massless fermion to a Euclidean fermion operator with an index of \(1\) by turning on some diagnostic field in the theory. We extend this construction to discrete Euclidean space-time in finite volume (\(S^{1}\times S^{1}\)) by introducing higher dimensional operators which we call Wilson-like and Wilson terms. We tackle several nontrivial features associated with a finite volume analysis which includes the net vorticity of the defects on \(S^{1}\times S^{1}\) being zero. We have four defects on the lattice, two vortices and two anti-vortices. In order to mimic the GHC construction of the continuous infinite volume space-time, we focus on the region of space-time around only one of these vortices. We were successful in engineering a nonzero index for the fermion operator on each of these vortices. We then computed the lattice GHC to show that the space-time integral of its divergence computed locally reproduced the "index" correctly.
Future research directions involve extending this lattice finite volume construction to higher dimensional theories. Refs. [8, 9] constructed the continuum GHC for several examples, including the \(1+1\) dimensional example we focus on here. The other examples included domain wall fermions in higher dimensions. The GHC construction in these higher dimensional examples involved diagnostic background gauge fields as well as diagnostic scalar and pseudo-scalar fields. Our plan is to extend these continuum constructions to the lattice. Also, the continuum construction of the GHC in [8, 9] applies to free fermion theories. In particular, the GHC is computed using a one-loop Feynman diagram in perturbation theory. It is however well known that in a multiflavor theory, introducing interactions can sometimes gap out massless fermions through nonperturbative effects. This is even more interesting when the interactions in question do not break any anomalous symmetries of the non-interacting theory. E.g. see symmetric mass generation [20, 21, 22, 23, 24, 25, 26, 27, 28]. The non-perturbative effects of interactions on the GHC may not be captured using a one-loop Feynman diagram as described in [8, 9]. One may need to resort to a numerical analysis to uncover these effects. Even though our lattice GHC construction was formulated for non-interacting \(1+1\) dimensional fermions, it can be easily modified to take into account interactions.
Figure 6: The flow of the 40 smallest singular values \(\sigma_{i}\) of the Wilson fermion operator \(\mathcal{D}_{2}\) as a function of the lattice size \(L\times L\) for a very small diagnostic field \(\phi_{2}(x_{0})\) and anti-periodic boundary conditions in the direction \(\mu=0\). One can clearly see that the lowest singular values follow the law \(\sigma_{i}\sim 2\pi\left(i+1/2\right)/L\), characteristic of plane waves in an anti-periodic box. The domain wall profile is given by Eq. 38 with \(-m_{-}=m_{+}=1/2\) and \(-\mu_{-}=\mu_{+}=5\times 10^{-5}\). Note log-log scale.
This will enable us to compute the generalized Hall current taking into account non-perturbative effects.
## 6 Acknowledgement
We acknowledge support from the U.S. Department of Energy, Nuclear Physics Quantum Horizons program through the Early Career Award DE-SC0021892.
Figure 8: Several slices of the (log of) divergence of \(J_{\mu}^{H}(x)\) for (a) Wilson-like operator \(\mathcal{D}_{1}\) with \(M=10^{-5}\) and (b) Wilson operator \(\mathcal{D}_{2}\) with \(M=0.21\).
Figure 9: Dependence of the lattice “index” \(I_{lat}(M)\) on \(M\) and lattice size \(L\) for the Wilson-like operator \(\mathcal{D}_{1}\) in the presence of exact zeromode. The domain wall configuration is given by Eq. 38 with \(m_{-}=\mu_{-}=-1\) and \(m_{+}=\mu_{+}=1/2\) which satisfies Eq. 39. Compare this figure to Fig. 10.
Figure 11: Dependence of the value and the position of the maximum of the lattice “index” \(I_{lat}(M)\) on lattice size \(L\) for the Wilson fermion operator \(\mathcal{D}_{2}\). The domain wall configuration is given by Eq. 38 with \(m_{-}=\mu_{-}=-1/2\) and \(m_{+}=\mu_{+}=1/2\).
Figure 10: The “index” \(I_{lat}(M)\) of the Wilson fermion operator \(\mathcal{D}_{2}\) for several lattice sizes \(L=8\dots 32\). The domain wall configuration is given by Eq. 38 with \(m_{-}=\mu_{-}=-1/2\) and \(m_{+}=\mu_{+}=1/2\). Here \(\sigma_{0}\) is the smallest singular value and the inset shows the same data but in the linear scale. |
2303.16661 | Remote Reactor Ranging via Antineutrino Oscillations | Antineutrinos from nuclear reactors have the potential to be used for reactor
monitoring in the mid- to far-field under certain conditions. Antineutrinos are
an unshieldable signal and carry information about the reactor core and the
distance they travel. Using gadolinium-doped water Cherenkov detectors for this
purpose has been previously proposed alongside rate-only analyses. As
antineutrinos carry information about their distance of travel in their energy
spectrum, the analyses can be extended to a spectral analysis to gain more
knowledge about the detected core. Two complementary analyses are used to
evaluate the distance between a proposed gadolinium-doped water-based liquid
scintillator detector and a detected nuclear reactor. Example cases are shown
for a detector in Boulby Mine, near the Boulby Underground Laboratory in the
UK, and six reactor sites in the UK and France. The analyses both show strong
potential to range reactors, but are limited by the detector design. | Steve T. Wilson, Chris Cotsford, James Armitage, Niamh Holland, Matthew Malek, John. G. Learned | 2023-03-29T13:11:18Z | http://arxiv.org/abs/2303.16661v5 | # Remote Reactor Ranging via Antineutrino Oscillations
###### Abstract
Antineutrinos from nuclear reactors can be used for monitoring in the mid- to far-field as part of a non-proliferation toolkit. Antineutrinos are an unshieldable signal and carry information about the reactor core and the distance they travel.
Using gadolinium-doped water Cherenkov detectors for this purpose has been previously proposed alongside rate-only analyses. As antineutrinos carry information about their distance of travel in their energy spectrum, the analyses can be extended to a spectral analysis to gain more knowledge about the detected core.
Two complementary analyses are used to evaluate the distance between a proposed gadolinium-doped water-based liquid scintillator detector and a detected nuclear reactor. Example cases are shown for a detector in Boulby Mine, near the Boulby Underground Laboratory in the UK, and six reactor sites in the UK and France. The analyses both show strong potential to range reactors, but are limited by the detector design.
## I Introduction
The National Nuclear Security Administration (NNSA), part of the United States of America's Department of Energy (DoE), stated the importance in its Plan to Reduce Global Nuclear Threats [1] of the development of detection methods for monitoring compliance with the Treaty on the Non-proliferation of Nuclear Weapons (NPT) [2] in line with the International Atomic Energy Agency (IAEA)'s Comprehensive Safeguard Agreements [3]. The potential of antineutrinos for reactor detection is well known with many experiments using nuclear reactors as a source of antineutrinos [4; 5], including the first detection of the neutrino [6; 7]. As such, observation of reactors has been demonstrated in the near-field (\(\mathcal{O}\)(100 m)) via surface-deployed plastic scintillator detectors [8; 9] with investigations into extending this to reactor monitoring [10]. However, reactor monitoring is typically intrusive due to the close proximity of the detector to the reactor.
Reactor monitoring in the mid- to far- field (\(\mathcal{O}\)(10 - 100 km)) via antineutrinos could significantly reduce intrusive monitoring and be used as part of a toolkit of complementary methods, with there being interest in the safeguarding and policy communities in a neutrino detector as a future tool to safeguard advanced reactors and as part of future nuclear deals [11]. Two analyses were presented in [12] to evaluate sensitivity of a prototype detector of this type to the antineutrino flux from real reactor sites.
The Likelihood Event Analysis of Reactor Neutrinos (LEARN) analysis presented in [12] consists of a likelihood analysis followed by machine learning to reject backgrounds and maximize the significance at which reactor signals are observed. This analysis can be extended to use additional information from the detected antineutrino spectrum to determine the distance to the reactor, which can be calculated from the flavor oscillation of these antineutrinos. Two methods of ranging a nuclear reactor are presented here. The first is a chi-squared method, which compares the measured spectrum with the expected spectra for varying reactor ranges and finds the closest match. The second uses Fourier transforms (FTs) to look for the frequency of neutrino oscillations in the detected spectrum to extract a range.
The structure of the paper is as follows: the modeling of the reactor antineutrino signal is discussed in Section II, the detector used and its location are detailed in Section III, the methods are explained in Section IV and the results are presented in Section V. The results are discussed in Section VI, before the paper is concluded in Section VII.
## II Reactor antineutrino signal
The input for the Monte Carlo (MC) simulations in [12] uses reactor data found at [13], which is taken from the IAEA Power Reactor Information System (PRIS) [14]. The load factors used are monthly averages for the year 2020, and the mid-cycle fission fractions are used to estimate the emitted antineutrino spectrum. These simulations are compared to modeled spectra. The models are produced using probability density functions (PDFs) that combine the main contributions to the expected spectra: emitted flux, interaction cross-section and survival probability.
Nuclear fission reactors produce electron antineutrinos via the beta decay of unstable daughter nuclei from fission processes [15]. The antineutrino flux, in units of \(\bar{\nu}\)/MeV/fission, produced by a reactor core is defined by
\[\phi(E_{\bar{\nu}})=\sum_{i}f_{i}\lambda_{i}(E_{\bar{\nu}}), \tag{1}\]
where \(\lambda_{i}(E_{\bar{\nu}})\) is the antineutrino emission spectrum normalized to one fission, and \(f_{i}\) is the fission fraction for the \(i\)-th isotope. \(\lambda_{i}(E_{\bar{\nu}})\) is estimated as
\[\lambda_{i}(E_{\bar{\nu}})=\exp\Big{(}\sum_{j=1}^{6}a_{j}E_{\bar{\nu}}^{j-1} \Big{)}, \tag{2}\]
where \(a_{j}\) are polynomial fit parameters from the Huber-Mueller predictions [16; 17]. The reactor antineutrino flux also depends on reactor power and the average thermal energy emitted per fission. However, for this work scaling factors have been omitted as only the shapes of the modeled spectra are of interest.
The dominant interaction for antineutrinos at the energies produced by reactors is inverse beta decay (IBD), with a cross-section of \(\mathcal{O}(10^{-44})\,E_{e}p_{e}\) cm\({}^{2}\) [18]. The cross-section applied in this work is simplified to
\[\sigma(E_{e})=p_{e}E_{e}, \tag{3}\]
where \(E_{e}\) and \(p_{e}=\sqrt{E_{e}^{2}-m_{e}^{2}}\) are the positron energy and momentum respectively, and \(m_{e}\) is the positron mass. This neglects energy-dependent recoil, weak magnetism, radiative corrections and the energy-independent coefficient as they are all small contributions to the total cross-section. The cross-section used in the MC is from [19], with a more accurate cross-section detailed in [20]. The new cross-section is not expected to impact the results as the difference to the one used is negligible at the energies of reactor antineutrinos.
IBD is sensitive to electron flavor antineutrinos; their survival probability due to neutrino flavor oscillation in a vacuum [21] needs to be accounted for. The survival probability of these electron flavor antineutrinos is parameterized by the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [22], with oscillation parameters from [23] used for this work. In this work, a three-flavor mixing matrix is used.
The complete PDF for a given antineutrino energy \(E_{\bar{\nu}}\), with the distance travelled by the neutrinos, \(L\), as the free parameter, is given by
\[f(E_{\bar{\nu}}\big{|}L)=\phi(E_{\bar{\nu}})\sigma(E_{\bar{\nu}})P(L,E_{\bar{ \nu}}), \tag{4}\]
where \(P(L,E_{\bar{\nu}})\) is the survival probability of electron antineutrinos, and the other terms are as defined in Equation 1 and Equation 3.
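The three factors of Equation 4 combine into a compact numerical model. The sketch below is illustrative rather than the analysis code: the exponential-polynomial coefficients are placeholders standing in for the Huber-Mueller tables of [16; 17] (only a single \({}^{235}\)U-like branch is kept), the oscillation parameters are rounded global-fit values rather than the exact inputs of [23], the relation \(E_{e}\simeq E_{\bar{\nu}}-1.293\) MeV is assumed, and the 1.27 factor is taken with \(L\) in metres and \(E\) in MeV.

```python
import numpy as np

# placeholder inputs (illustrative only; see Refs. [16, 17, 23] for the real tables)
A_COEFFS = {"U235": [3.217, -3.111, 1.395, -0.3690, 0.04445, -0.002053]}
FISSION_FRACTIONS = {"U235": 1.0}                        # single-isotope toy core
SIN2_2TH12, SIN2_2TH13 = 0.85, 0.088                     # approximate mixing
DM2_21, DM2_31, DM2_32 = 7.5e-5, 2.5e-3, 2.4e-3          # eV^2, approximate
M_E, DELTA_NP = 0.511, 1.293                             # MeV

def flux(E):                                             # Eqs. 1-2 (shape only)
    return sum(f * np.exp(np.polyval(A_COEFFS[iso][::-1], E))
               for iso, f in FISSION_FRACTIONS.items())

def xsec(E):                                             # Eq. 3, assuming E_e = E_nu - 1.293 MeV
    Ee = np.clip(E - DELTA_NP, M_E, None)
    return Ee * np.sqrt(Ee**2 - M_E**2 + 1e-12)

def survival(L_km, E):                                    # Eqs. 6-7, L in km, E in MeV
    L = L_km * 1e3                                        # phase uses L[m] / E[MeV]
    s12, s13 = 0.5 * (1 - np.sqrt(1 - SIN2_2TH12)), 0.5 * (1 - np.sqrt(1 - SIN2_2TH13))
    c12, c13 = 1 - s12, 1 - s13
    arg = lambda dm2: np.sin(1.27 * dm2 * L / E) ** 2
    return 1 - (c13**2 * SIN2_2TH12 * arg(DM2_21)
                + c12 * SIN2_2TH13 * arg(DM2_31)
                + s12 * SIN2_2TH13 * arg(DM2_32))

def detected_pdf(E, L_km):                                # Eq. 4 (unnormalized shape)
    return flux(E) * xsec(E) * survival(L_km, E)

E = np.linspace(1.8, 9.0, 500)                            # IBD threshold to ~9 MeV
spectrum = detected_pdf(E, 149.0)                         # e.g. a Heysham-like standoff
```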
## III Detector and location
The detector investigated is a 22 m height and diameter right cylinder water-based Cherenkov detector, which is detailed in [12]. This detector is located 1100 m underground close to the Science & Technology Facilities Council (STFC) Boulby Underground Laboratory in the UK (2800 m.w.e, \(\sim 10^{6}\) muon attenuation versus surface [24]), and contains 4600 photomultiplier tubes (PMTs) for 15% PMT coverage in an inner detector with a 9 m radius. There is a 2 m outer detector which is uninstrumented, acting as a passive buffer, and the fill material is water-based liquid scintillator (WbLS) [25; 26] doped with gadolinium to act as a neutron capture agent [27; 28]. The liquid scintillator is at a concentration of 1%, giving a light yield of 100 photons/MeV. A schematic of the detector used is shown in Fig. 1[12].
The expected reactor landscape around Boulby is used for this study. Table 1 shows the reactor sites considered for this study, along with their type, standoff distance, approximate signal rate after data reduction and decommissioning date. At the time of this study, the UK's advanced gas-cooled reactor (AGR) fleet was due for decommissioning, with
the first generation AGR-1 cores by 2024 followed by the second generation AGR-2 fleet by 2028, and Hinkley Point C (a pressurised water reactor (PWR)) had a planned start date of 2026 [29; 30]. Their locations on a map are shown in Fig. 2. Sizewell B, a PWR, was undergoing review for an extension beyond its initially planned end date of 2035 [31].
## IV Method
Two analyses were employed on the same dataset for this study. The data was MC produced in Geant4 [34; 35] for the study in [12]. Signal MC was produced using real reactor data described in Section II.
Several backgrounds are considered for this study, with their approximate rates given in Table 2[12]. Rates differ slightly depending on the target reactor due to analysis optimizations made during data reduction. Due to the nature of the data reduction performed in analysis, only correlated backgrounds are considered.
Uncertainties on the signal and background rates used in this study are shown in Table 3.
MC for the backgrounds were produced using rates and sources from a combination of literature and previous studies [36; 37; 38; 39; 40; 24]. Data is taken from the output of the LEARN data reduction [12], and backgrounds combined as
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Signal & Number & Type & Standoff & Rate & Decommissioning \\ & of cores & & distance [km] & [per day] & date \\ \hline Hartlepool & 2 & AGR-1 & 26 & 3 & 2024 \\ Heysham 1 & 2 & AGR-1 & 149 & 0.1 & 2024 \\ Heysham 2 & 2 & AGR-2 & 149 & 0.1 & 2028 \\ Torness & 2 & AGR-2 & 187 & 0.08 & 2028 \\ Sizewell-B & 1 & PWR & 306 & 0.02 & after 2035 \\ Hinkley Point C & 2 & PWR & 404 & 0.03 & 2086 \\ Gravelines (France) & 6 & PWR & 441 & 0.03 & 2031 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The reactor type, standoff distance, approximate signal rate after data reduction and decommissioning date for the reactors considered in this study. The decommissioning dates are taken from [29; 30; 32]. Sizewell B was under review for a long term extension beyond 2035 at the time of this study [31].
Figure 1: Schematic of the detector design by Jan Boissevain (University of Pennsylvania), showing the tank supported on a steel truss structure and inner PMT support structure.
appropriate.
The impact of both backgrounds and energy resolution is tested by applying energy reconstruction and/or background uncertainties. The energy reconstruction is applied as in [12], where a fit between simulated particle energy and PMT hits is applied. To remove energy resolution effects, the simulated particle energy is used where appropriate.
To apply background uncertainties, it is assumed the background rates are known at the rates observed in [12] with some Gaussian uncertainty. For each bin in the observed spectrum for the target reactor, a rate is drawn from the uncertainty distributions and combined with the reactor rate for that bin. As the background rates can fluctuate up or down due to their uncertainties, when combined with the signal, it can cause the observed signal rates to fluctuate.
To determine the uncertainty on the range caused by background uncertainties and statistical fluctuations, each "observation" is repeated 100 times.
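A minimal sketch of one such pseudo-observation; the names and toy numbers are illustrative, and whether the Gaussian draw is made per bin or per background component (and whether Poisson counting fluctuations are added on top) follows the original analysis and is only approximated here.

```python
import numpy as np

rng = np.random.default_rng(1)

def fluctuated_observation(signal, backgrounds, fractional_unc):
    """One pseudo-observation: each background histogram is scaled bin-by-bin by a
    Gaussian factor of mean 1 and width equal to its fractional uncertainty, then
    added to the expected reactor signal."""
    total = signal.astype(float).copy()
    for name, hist in backgrounds.items():
        total += rng.normal(1.0, fractional_unc[name], size=hist.shape) * hist
    return total

# toy per-bin expectations for a target reactor and two background components (cf. Table 3)
signal = np.array([4.0, 9.0, 12.0, 8.0, 3.0])
backgrounds = {"world": np.array([2.0, 1.5, 1.0, 0.6, 0.3]),
               "geo":   np.array([1.0, 0.4, 0.1, 0.0, 0.0])}
fractional_unc = {"world": 0.06, "geo": 0.25}

# repeat the "observation" many times to propagate the rate uncertainties into the range fit
pseudo = np.array([fluctuated_observation(signal, backgrounds, fractional_unc)
                   for _ in range(100)])
print(pseudo.mean(axis=0))
print(pseudo.std(axis=0))
```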
The impact of both energy resolution and background uncertainties is assessed for the nearer AGRs, but only the energy resolution is included for the more distant PWRs due to the small signal, which is obscured by background uncertainties, as seen in Fig. 3. In the case of the Hartlepool cores, all backgrounds are applied including their uncertainties, and full energy reconstruction is used.
\begin{table}
\begin{tabular}{c c} \hline \hline Component & Rate per day \\ \hline World & 0.2 \\ Geo & 0.06 \\ \({}^{9}\)Li & 0.02 \\ \({}^{17}\)N & 0.2 \\ Fast Neutrons & 0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Backgrounds considered for this study and their approximate rates per day [12].
Figure 2: Map showing the location of the detector at Boulby and the reactor sites studied. [33]
### Chi-squared
The chi-squared method minimizes the difference between the positron spectrum from analyzed data from [12] and models produced using Equation 4 with the antineutrino energy converted to detected positron kinetic energy. The chi-squared used is given by
\[\chi^{2}=\sum_{i}\frac{(\Phi_{i}-f_{i})^{2}}{f_{i}}, \tag{5}\]
where \(f_{i}\) is the value of Equation 4 for a given distance and the energy corresponding to the \(i\)-th bin, and \(\Phi_{i}\) is the data content in the \(i\)-th energy bin. Both the data and Equation 4 are normalized to a maximum of 1 to mitigate the effects of reactor power. The distance to the reactor is incremented between 0 and 500 km at 0.1 km intervals and the value of Equation 5 is minimized to yield an "observed" range for a reactor.
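A minimal sketch of this scan follows; the function name and the stand-in model are illustrative (the stand-in keeps only a schematic spectral shape and the dominant \(\theta_{12}\) oscillation, whereas the real analysis uses the full model of Equation 4 and the reconstructed data).

```python
import numpy as np

def chi2_range_scan(E_bins, data, model_fn, L_grid=np.arange(0.1, 500.0, 0.1)):
    """Minimum chi-squared range estimate of Eq. 5: the measured spectrum and each model
    (evaluated at the bin energies) are normalized to a maximum of 1, and the distance
    giving the smallest chi-squared is returned."""
    d = data / data.max()
    best_L, best_chi2 = None, np.inf
    for L in L_grid:
        f = model_fn(E_bins, L)
        f = f / f.max()
        chi2 = np.sum((d - f) ** 2 / f)
        if chi2 < best_chi2:
            best_L, best_chi2 = L, chi2
    return best_L, best_chi2

# stand-in model: schematic flux x cross-section shape times the theta_12 survival factor
def toy_model(E, L_km):
    shape = (E - 1.8) ** 2 * np.exp(-E)
    return shape * (1 - 0.85 * np.sin(1.27 * 7.5e-5 * L_km * 1e3 / E) ** 2)

E = np.linspace(2.0, 8.0, 60)
fake_data = toy_model(E, 149.0)                       # pretend observation at 149 km
print(chi2_range_scan(E, fake_data, toy_model))       # recovers ~149 km
```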
### Fourier transform
As shown in Equation 7, the oscillation probability of one neutrino flavor state to another is proportional to \(\sin^{2}(\frac{1.27\Delta m^{2}_{ij}L}{E_{\nu}})\), where \(\Delta m^{2}_{ij}\) is the mass-squared difference between mass eigenstates \(i\) and \(j\), \(L\) is the distance the antineutrino travels and \(E_{\nu}\) is the energy of the antineutrino. As such, the oscillation of neutrino flavor is dependent on the distance and energy domains. As the kinetic energy of the positrons from IBD can be measured and the
Figure 3: The reconstructed energy spectrum for a single observation of Hinkley Point C with (dashed black) and without (solid red) background uncertainties included after data reduction. A description of the application of background uncertainties is given in the text.
\begin{table}
\begin{tabular}{c c} \hline \hline Component & Uncertainty (\%) \\ \hline Hartlepool & 2.5 \\ Heysham & 2.0 \\ Torness & 2.6 \\ Sizewell B & 2.75 \\ Hinkley Point C & 3.0 \\ Gravelines & 3.4 \\ World & 6.0 \\ Geo & 25 \\ \({}^{9}\)Li & 0.2 \\ \({}^{17}\)N & 0.2 \\ Fast Neutrons & 27 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Uncertainties on signals and backgrounds. Data taken from [12; 13; 36].
antineutrino energy determined from this, an FT can be used to switch from antineutrino energy to the distance of travel.
The survival probability of an electron flavor neutrino is given in Equation 6.
\[P(L,E_{\bar{\nu}})=1-P_{ex}, \tag{6}\]
where
\[\begin{split} P_{ex}&=\cos^{4}(\theta_{13})\sin^{2}(2 \theta_{12})\sin^{2}\Big{(}\frac{1.27\Delta m_{21}^{2}L}{E_{\bar{\nu}}}\Big{)}+ \\ &\cos^{2}(\theta_{12})\sin^{2}(2\theta_{13})\sin^{2}\Big{(}\frac{1. 27\Delta m_{31}^{2}L}{E_{\bar{\nu}}}\Big{)}+\\ &\sin^{2}(\theta_{12})\sin^{2}(2\theta_{13})\sin^{2}\Big{(}\frac{ 1.27\Delta m_{32}^{2}L}{E_{\bar{\nu}}}\Big{)},\end{split} \tag{7}\]
assuming charge-parity-time invariance [13]. Here, \(\theta_{ij}\) is the mixing angle between flavors i and j.
As the oscillation probability depends on \(\sin^{2}(\frac{1.27\Delta m_{ii}^{2}L}{E_{\bar{\nu}}})\), the identity \(\sin^{2}(\theta)=\frac{1-\cos(2\theta)}{2}\) can be used to express the FT as
\[\text{FCT}(L)\propto\int_{\frac{1}{E_{min}}}^{\frac{1}{E_{max}}}f(L,E_{\bar{ \nu}})\,\cos\left(2\times\frac{1.27\Delta m_{ij}^{2}L}{E_{\bar{\nu}}}\right)\, d\frac{1}{E_{\bar{\nu}}}. \tag{8}\]
Here, Equation 8 is defined as a Fourier cosine transform (FCT). A phase shift can be applied for a Fourier sine transform (FST), shown in Equation 9.
\[\text{FST}(L)\propto\int_{\frac{1}{E_{min}}}^{\frac{1}{E_{max}}}f(L,E_{\bar{ \nu}})\sin\left(2\times\frac{1.27\Delta m_{ij}^{2}L}{E_{\bar{\nu}}}\right)\, d\frac{1}{E_{\bar{\nu}}}. \tag{9}\]
Both Equation 8 and Equation 9 include terms not associated with neutrino oscillations within the term \(f(L,E_{\bar{\nu}})\). To isolate the oscillation terms, a spectrum where no oscillation is assumed is simulated, i.e. the model in Equation 4 including only the terms from Equation 1 and Equation 3. An FT is performed on this spectrum and is then subtracted from the one performed on the original data. The effect of this can be seen clearly in Fig. 4, where the peak associated with factors not related to neutrino oscillation is removed.
Figure 4: Comparison of the Fourier transform for oscillations (black solid) and no oscillations (red dashed) in the reactor spectrum modeling for a 200 km standoff distance (a), and the subtraction of the no oscillation situation from the original reactor model for the same reactor standoff (b). The reactor model has peaks for 100 km and 200 km before the subtraction, and only the expected peak at 200 km after subtraction.
As the distance is varied, the peak amplitude of the FCT and the zero amplitude values of the FST are the points of interest that correspond to the "observed" range. Fig. 5 shows how the FCT and FST can be used in combination to reduce the possible ranges responsible for the detected spectrum by only considering the regions in which they match.
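A minimal sketch of the transform and range extraction, assuming a rounded \(\Delta m^{2}_{21}\), keeping only the \(\theta_{12}\) term (consistent with the normalization choice described below), and using a toy spectrum in place of the analyzed data; the function and variable names are illustrative.

```python
import numpy as np

DM2_21 = 7.5e-5                                        # eV^2, rounded

def fourier_transforms(E, spectrum, no_osc, L_grid_km):
    """FCT and FST of Eqs. 8-9 over a grid of trial ranges, after subtracting the transform
    of the no-oscillation spectrum as described in the text.  The 1/E integral uses the
    trapezoidal rule; L is converted to metres so the phase is 2 * 1.27 * dm2 * L[m] / E[MeV]."""
    inv_E = 1.0 / E[::-1]                              # increasing in 1/E
    osc_part = (spectrum - no_osc)[::-1]
    fct, fst = [], []
    for L in np.asarray(L_grid_km) * 1e3:
        phase = 2.0 * 1.27 * DM2_21 * L * inv_E
        fct.append(np.trapz(osc_part * np.cos(phase), inv_E))
        fst.append(np.trapz(osc_part * np.sin(phase), inv_E))
    return np.array(fct), np.array(fst)

# toy "data": schematic unoscillated shape times the theta_12 survival factor at 200 km
E = np.linspace(2.0, 8.0, 400)
shape = (E - 1.8) ** 2 * np.exp(-E)
spectrum = shape * (1 - 0.85 * np.sin(1.27 * DM2_21 * 200.0e3 / E) ** 2)
L_grid = np.arange(80.0, 500.0, 1.0)
fct, fst = fourier_transforms(E, spectrum, shape, L_grid)
print(L_grid[np.argmax(fct)])   # the FCT maximum sits near the 200 km standoff;
# FST zero crossings narrow the estimate further (cf. Fig. 5)
```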
Due to the detector resolution, only the \(\theta_{12}\) oscillation pattern can be resolved. As such, the FTs are normalized to the \(\theta_{12}\) term, and \(\theta_{13}\) and \(\theta_{23}\) are neglected. This creates a lower limit to the range that can be observed with this method, as at least one full wavelength of the oscillation pattern must be visible in the spectrum for an FT to work.
Although detailed analysis has been performed on specific reactors, this is illustrative only due to the expected decommissioning of many of the observed cores. As such, a scan over generic scenarios has been carried out up to a range of 500 km to show the potential of this analysis, with the results in Fig. 6. A lower limit of approximately 80 km can be seen due to the requirement of a full wavelength of the oscillation pattern.
Figure 5: The combination of an FCT (blue dashed) and FST (gray solid) allows the area of interest (red solid) to be narrowed down to reduce uncertainties by comparing where the maxima of the FCT and zeroes of the FST occur at matching distances.
Figure 6: The analytical range of reactors with distance using the Fourier transform analysis. The Fourier transform relies on resolving the \(\theta_{12}\) oscillations, which are not obviously present at ranges below 100 km, as the \(\theta_{13}\) oscillations are smaller than the detector’s energy resolution.
## V Results
### Chi-squared
For the minimum chi-squared method, a single analysis was performed. This was the ranging of the EDF Hartlepool reactor with all limitations, such as complete backgrounds including uncertainties and detector effects, considered as part of the LEARN analysis chain in [12]. The obtained range, shown in comparison to the true range in Table 4, is 50% from the expected value.
Hartlepool is the dominant signal after data reduction, so assuming all backgrounds are known only improves the observed range slightly to \(\approx\) 35 km. The biggest cause of the discrepancy between the observed and true range is the depletion of the low energy events caused by the data reduction in [12]. To remove the numerous radioactive background events, low energy cuts are applied. This, alongside the detector's increasing efficiency with energy, causes the spectrum to shift to higher energy and impacts the observed range. The impact on the observed spectrum in comparison to the models for the true range at 26 km and observed range at 39 km can be seen in Fig. 7(a) and Fig. 7(b) respectively.
The observation time needed to range Hartlepool to this accuracy is 40 months, with the uncertainty dropping to the level in Table 4 by around 50 months, as demonstrated by Fig. 8.
Due to the simplicity of the method, and the need for a signal-dominated spectrum, this analysis is not appropriate for higher-background situations such as more distant reactors. Further analysis would be required to isolate a complete reactor spectrum for this method to be effective at larger distances.
### Fourier transform
The Fourier transform (FT) method is applied in four possible scenarios on five reactor complexes. The scenarios are combinations of including background uncertainties and detector energy resolution.
The results of the FT method, shown in Table 5, show that for reactors at large distances the range can be determined when the detector's energy resolution is accounted for. However, reactors beyond 300 km do not have a large enough signal to be ranged effectively when background uncertainties are included.
\begin{table}
\begin{tabular}{c c c} \hline \hline & True & Chi-squared \\ \hline Distance (km) & 26 & 39 \(\pm\) 1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Observed distance in km for the Chi-squared method for Hartlepool.
Figure 7: The comparison of the analyzed Hartlepool reactor complex data (red points) and models (black solid line) for the (a) true range and (b) observed range. The shift to higher energy in the data can be seen. This shift limits the ability to range the reactor as the shape of the spectrum is what is used to determine the range.
The two nearer reactors, AGRs at Heysham and Torness, can be ranged close to the true value when background uncertainties are included.
As shown in Fig. 6, reactors within approximately 80 km of the detector cannot be accurately ranged by the FT method. As such, the EDF Hartlepool cores were not ranged as part of this analysis.
The FT for Heysham 2 with background uncertainties and with energy reconstruction applied is shown in Fig. 9. The maximum of the FCT yields an accurate range, but with an uncertainty of \(\pm\) 15 km. The FST is able to reduce this uncertainty significantly to \(\pm\) 6 km, as shown in Table 5.
Due to the low event rates for the distant reactors, it takes over 50 years of observation time to be able to range the Heysham complex, and significantly longer for the more distant reactors.
## VI Discussion
The results of both methods show the potential of using neutrino oscillation to determine the distance to an observed reactor, as well as the use of extending the analysis in [12] to include a spectral analysis. The two analyses presented complement each other well, with the minimum chi-squared analysis allowing nearby reactors with a large signal contribution to be ranged, and the Fourier transform (FT) method allowing the ranging of more distant reactors.
Despite showing potential, there are strong limitations to both methods. While the chi-squared method can handle lower energy resolutions for mid-field reactors, the energy threshold and detector efficiency strongly limit the utilization of lower energy events. This causes the discrepancy between the true and observed range for the Hartlepool reactors.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Situation & \multicolumn{5}{c}{Range [km]} \\ & Heysham 2 & Torness & Sizewell B & Hinkley Point C & Gravielines \\ \hline True Range & 149 & 187 & 304 & 404 & 441 \\ \hline No Background, True Energy & 148 \(\pm\) 4 & 188 \(\pm\) 5 & 306 \(\pm\) 8 & 403 \(\pm\) 11 & 440 \(\pm\) 11 \\ No Background, Reconstructed Energy & 157 \(\pm\) 4 & 195 \(\pm\) 5 & 307 \(\pm\) 8 & 397 \(\pm\) 11 & 432 \(\pm\) 11 \\ Background Uncertainty, True Energy & 156 \(\pm\) 6 & 177 \(\pm\) 10 & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \\ Background Uncertainty, Reconstructed Energy & 155 \(\pm\) 5 & 171 \(\pm\) 9 & \multicolumn{1}{c}{} & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Observed distance in km for the Fourier transform method for the situations where true energy and reconstructed energy are used. Inclusion of background uncertainties is compared to the situation of zero background uncertainty. Situations with a slash are deemed impossible to range due to background uncertainties dominating.
Figure 8: The observed range of the Hartlepool reactor complex with observation time.
The FT method is able to range reactors in more complex scenarios. However, the event rate for these distant reactors results in this taking a long time, in excess of 50 years, and therefore being impractical.
This detector configuration is not able to make the best use of spectral analysis. A potential solution is using gadolinium-loaded liquid scintillator. This would lower the energy threshold, boost detector efficiency at low energies and improve energy resolution. In principle, this could allow the FT method to work at much shorter ranges by resolving the \(\theta_{13}\) oscillations or allow the chi-squared method to range Hartlepool with much more accuracy.
## VII Conclusions
An attempt at extending the rate-only analysis of nuclear fission reactor antineutrinos in [12] has been made by using two methods of spectral analysis with the aim of determining the distance between a reactor and a detector. The simulated detector used is a 22 m height and diameter right cylinder with a 9 m inner PMT support structure and a 15% PMT coverage. The detector is filled with gadolinium-doped water-based liquid scintillator, and is located at the Science & Technology Facilities Council (STFC) Boulby Underground Laboratory.
The analyses show potential to range real reactor signals, but are significantly limited by the detector design. A minimum chi-squared analysis is able to range nearby reactors which produce a dominant signal well within the lifetime of this kind of detector, with the EDF Hartlepool reactor ranged to 50% of its true distance. A Fourier transform analysis is able to handle reactors at much larger standoff distances, up to 180 km when background uncertainties are included. However, this would take a very large amount of time due to the low event rate.
With both analyses, the fundamental issue is the detector's performance. An order of magnitude increase in signal rate is needed to range the Heysham 2 cores at 149 km within a detector lifetime, and the energy thresholds and detector efficiency limit the ranging of more local reactors. Using gadolinium-doped liquid scintillator could offer a solution as it would improve energy resolution, lower the threshold and improve low energy efficiency.
Due to performance issues, the gadolinium-doped WbLS detection medium is not appropriate to use in determining the distance to operating reactors. However, the analyses developed could be used with a more sensitive detector for this purpose.
###### Acknowledgements.
The authors would like to thank L. Kneale for her input on the simulation and analysis of data (see [12]) as well as her review of the work. Thanks also go to T. Appleyard for her early work on the LEARN analysis, and A. Scarff for his regular review of the work.
Figure 9: The Fourier transform, in sine (black solid) and cosine (red dotted), for Heysham 2 with background uncertainties and energy resolution effects.
## Author contributions
The initial proposal of both reactor ranging in general and the application of a Fourier transform came from J. G. Learned.
Monte Carlo simulations produced by S. Wilson with input from L. Kneale.
The LEARN analysis for data reduction developed by S. Wilson with input from J. Armitage, N. Holland and T. Appleyard. Sections of the data reduction, such as the analytic post-muon veto and radionuclide calculations were developed by L. Kneale.
The chi-squared analysis was initially developed by J. Armitage, later being taken on by S. Wilson. The Fourier transform was developed by C. Cotsford.
C. Cotsford drafted Section IV.2, with S. Wilson drafting the remainder of the paper.
|
2308.14304 | Solving Attention Kernel Regression Problem via Pre-conditioner | The attention mechanism is the key to large language models, and the
attention matrix serves as an algorithmic and computational bottleneck for such
a scheme. In this paper, we define two problems, motivated by designing fast
algorithms for proxy of attention matrix and solving regressions against them.
Given an input matrix $A\in \mathbb{R}^{n\times d}$ with $n\gg d$ and a
response vector $b$, we first consider the matrix exponential of the matrix
$A^\top A$ as a proxy, and we in turn design algorithms for two types of
regression problems: $\min_{x\in \mathbb{R}^d}\|(A^\top A)^jx-b\|_2$ and
$\min_{x\in \mathbb{R}^d}\|A(A^\top A)^jx-b\|_2$ for any positive integer $j$.
Studying algorithms for these regressions is essential, as matrix exponential
can be approximated term-by-term via these smaller problems. The second proxy
is applying exponential entrywise to the Gram matrix, denoted by
$\exp(AA^\top)$ and solving the regression $\min_{x\in
\mathbb{R}^n}\|\exp(AA^\top)x-b \|_2$. We call this problem the attention
kernel regression problem, as the matrix $\exp(AA^\top)$ could be viewed as a
kernel function with respect to $A$. We design fast algorithms for these
regression problems, based on sketching and preconditioning. We hope these
efforts will provide an alternative perspective of studying efficient
approximation of attention matrices. | Zhao Song, Junze Yin, Lichen Zhang | 2023-08-28T04:37:38Z | http://arxiv.org/abs/2308.14304v2 | # Solving Attention Kernel Regression Problem via Pre-conditioner
###### Abstract
Large language models have shown impressive performance in many tasks. One of the major features from the computation perspective is computing the attention matrix. Previous works [14, 15, 16] have formally studied the possibility and impossibility of approximating the attention matrix. In this work, we define and study a new problem which is called the attention kernel regression problem. We show how to solve the attention kernel regression in the input sparsity time of the data matrix.
Introduction
Analyzing and developing quick randomized algorithms for numerical linear algebra tasks has attracted much attention [22, 23, 24, 25]. These problems include the approximate calculation of leverage scores, least squares regression, and low rank approximation, and they have numerous applications in areas such as recommendation systems [1], data mining [1], web search [1, 23], information retrieval [20], learning mixtures of distributions [1, 21], and clustering [13, 22]. The application of randomized and approximation techniques enables these problems to be solved much more rapidly than their deterministic and exact counterpart.
The increasing size of the dataset also results in significant growth in the size and number of matrices. This also imposes challenges on efficiently performing computations on matrices, such as matrix multiplication, inversion, and various factorizations. A prominent research direction to speed up these computations is the _sketching_ approach [23, 22]. Roughly speaking, given a tall and skinny matrix \(A\in\mathbb{R}^{n\times d}\) with \(n\gg d\), one draws a random matrix \(S\in\mathbb{R}^{m\times n}\) from a structured family and computes the product \(SA\). Based on the choice of \(m\), the random matrix \(S\) can either preserve the column norms of \(A\)[15] or the entire subspace spanned by \(A\)[23]. Moreover, the structure of \(S\) oftentimes enables the matrix product \(SA\) to be computed very efficiently [1, 2, 22, 24]. Such an approach has found many applications including linear regression [22], low rank approximation [22, 23] and kernel approximation [1, 1, 23, 24].
In this paper, we have two main objectives. First, we study the efficient computation and approximation of _attention matrices_. These matrices are fundamental objects of deep learning models utilized in a wide range of domains, including natural language processing [17], computer vision [14, 15], speech recognition [18, 20], and robotics [19]. Attention mechanisms enable models to focus selectively on specific portions of input data and dynamically weight distinct features and context information. The attention matrix is a crucial component of attention mechanisms, capturing the relationships between input elements and the query vector. By computing the attention matrix, models can learn to attend to pertinent information and disregard irrelevant information, leading to improved performance and interpretability. Furthermore, the attention matrix provides valuable insights into how models make decisions and reason about input data, aiding in debugging and enhancing models. Therefore, understanding the attention matrix is crucial for comprehending the behavior and limitations of deep learning models and developing more potent attention mechanisms. Recent studies have shown that attention mechanisms can be applied to other problems beyond traditional deep learning, such as graph neural networks [25, 19], reinforcement learning [1], and meta-learning [26]. Consequently, we anticipate that research on attention mechanisms and the attention matrix will remain a productive field of study in the future.
Second, we want to address the increasing difficulty of the more complicated matrix operations introduced above by utilizing sketching technology on products of attention matrices. To be more specific, given a product of attention matrices, we develop efficient algorithms to solve regressions against it. We note that, compared to the standard least squares regression problem involving a single design matrix, the product magnifies the issue of large datasets.
We introduce the problems studied in this paper as follows.
**Definition 1.1**.: _Given \(A\in\mathbb{R}^{n\times d}\), \(y\in\mathbb{R}^{n}\). The goal is to solve_
\[\min_{x\in\mathbb{R}^{d}}\|AA^{\top}Ax-y\|_{2}^{2}\]
**Definition 1.2**.: _Given \(A\in\mathbb{R}^{n\times d}\), \(y\in\mathbb{R}^{d}\). The goal is to solve_
\[\min_{x\in\mathbb{R}^{d}}\|A^{\top}AA^{\top}Ax-y\|_{2}^{2}\]
We first consider the regression problems defined above. We create and analyze the algorithms for these problems in Section H and Section I.
Then, by using induction, we propose two new algorithms to solve the regression problem
\[\min_{x\in\mathbb{R}^{d}}\|(A^{\top}A)^{j}x-b\|_{2} \tag{1}\]
and
\[\min_{x\in\mathbb{R}^{d}}\|A(A^{\top}A)^{j}x-b\|_{2} \tag{2}\]
respectively, where \(j\) is an arbitrary positive integer. Inspired by the recent attention computation study [1] (see the detailed discussion in Section E), we study the so-called exponential regression:
**Definition 1.3** (Exponential regression).: _Given \(A\in\mathbb{R}^{n\times d}\), \(b\in\mathbb{R}^{n}\), the goal is to solve_
\[\min_{x\in\mathbb{R}^{d}}\|\exp(AA^{\top})x-b\|_{2}^{2}\]
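For concreteness, the following minimal numpy sketch sets up small instances of the three problems above and solves them with a dense baseline (`np.linalg.lstsq`); the sizes are illustrative assumptions, and the fast algorithms developed in this paper replace these dense solves.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20                       # illustrative sizes with n >> d
A = rng.standard_normal((n, d))

# Definition 1.1: min_x || A A^T A x - y ||_2 with y in R^n.
y1 = rng.standard_normal(n)
x1 = np.linalg.lstsq(A @ A.T @ A, y1, rcond=None)[0]

# Definition 1.2: min_x || A^T A A^T A x - y ||_2 with y in R^d.
y2 = rng.standard_normal(d)
x2 = np.linalg.lstsq(A.T @ A @ A.T @ A, y2, rcond=None)[0]

# Definition 1.3: min_x || exp(A A^T) x - b ||_2 with b in R^n,
# where exp(.) is applied entrywise to the Gram matrix A A^T.
b = rng.standard_normal(n)
x3 = np.linalg.lstsq(np.exp(A @ A.T), b, rcond=None)[0]
```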
### Our Result
We present the informal versions of our main results below.
**Theorem 1.4** (Informal version of Theorem J.2).: _Let \(A\in\mathbb{R}^{n\times d}\), \(b\in\mathbb{R}^{d}\), and \(\kappa\) denote the condition number of \(A\). Let \(\epsilon_{\mathrm{final}},\delta_{\mathrm{final}}\in(0,0.1)\)._
_For the regression problem shown in Eq. (1), there exists an algorithm (Algorithm 5) that runs in time_
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\mathrm{final}}) \cdot\log^{2}(jn/\delta_{\mathrm{final}}))\]
_and outputs a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that \(\|(A^{\top}A)^{j}x^{\prime}-b\|_{2}\leq\epsilon_{\mathrm{final}}\|b\|_{2}\) holds with probability \(1-\delta_{\mathrm{final}}\)._
**Theorem 1.5** (Informal version of Theorem K.1).: _Let \(A\in\mathbb{R}^{n\times d}\), \(b\in\mathbb{R}^{n}\), and \(\kappa\) denote the condition number of \(A\). Let \(\epsilon_{\mathrm{final}},\delta_{\mathrm{final}}\in(0,0.1)\)._
_For the regression problem shown in Eq. (2), there exists an algorithm (Algorithm 6) that runs in time_
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\mathrm{final}}) \cdot\log^{2}(jn/\delta_{\mathrm{final}}))\]
_and outputs a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that \(\|A(A^{\top}A)^{j}x^{\prime}-b\|_{2}\leq\epsilon_{\mathrm{final}}\|b\|_{2}\) holds with probability \(1-\delta_{\mathrm{final}}\)._
Before proceeding, we highlight the significant speedup obtained by our results. Note that if we try to compute the product \((A^{\top}A)^{j}\) directly, it will take \(O(j\cdot nd^{2})\) time. One could also utilize the squaring trick to compute this product in \(O(\log j\cdot nd^{2})\) time. In contrast, our algorithm runs in time \(O(j\cdot nd)\) for \(n\gg d\). This is a significant improvement as long as \(j\leq\log j\cdot d\), which is often the case when \(j\) is orders of magnitude smaller than \(d\).
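As a quick illustration of this cost gap, the toy snippet below (an assumption-laden example, not Algorithm 5 itself) applies \((A^{\top}A)^{j}\) to a vector using only \(j\) matrix–vector products, which costs \(O(j\cdot nd)\), and checks the answer against explicitly forming the \(d\times d\) power.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, j = 2000, 30, 5
A = rng.standard_normal((n, d))
x = rng.standard_normal(d)

# O(j * n * d): never form A^T A, only do matrix-vector products.
v = x.copy()
for _ in range(j):
    v = A.T @ (A @ v)

# Baseline: form A^T A (O(n d^2)) and raise it to the j-th power.
M = np.linalg.matrix_power(A.T @ A, j)
assert np.linalg.norm(v - M @ x) <= 1e-8 * np.linalg.norm(M @ x)
```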
Our result regarding the exponential regression requires some extra ingredients. We define the notion of stable rank as \(\mathrm{srank}(A):=\|A\|_{F}^{2}/\|A\|_{2}^{2}\). Now we are ready to state our result.
**Theorem 1.6** (Informal version of Theorem L.3).: _Let \(A\in\mathbb{R}^{n\times d}\), \(b\in\mathbb{R}^{n}\), and \(\kappa\) denote the condition number of \(A\). Let \(\epsilon_{\mathrm{final}},\delta_{\mathrm{final}}\in(0,0.1)\). We can find \(x^{\prime}\) such that_
\[\|\exp(AA^{\top})x^{\prime}-b\|_{2}\leq\epsilon_{\mathrm{final}} \cdot\|b\|_{2}\]
_Moreover, let_
\[m=O(\epsilon^{-2}\beta\log^{3}(\kappa nd/\epsilon_{\mathrm{ final}}\delta_{\mathrm{final}})),\]
_where \(\beta\) is an upper bound of \(\mathrm{srank}(\exp(AA^{\top}))\), the vector \(x^{\prime}\in\mathbb{R}^{n}\) can be computed in time_
\[O(mn+\epsilon^{-2}nd+m^{\omega}),\]
_where \(\omega\) is the matrix multiplication exponent. Currently \(\omega\approx 2.373\)._
### Related work
Subspace embedding.Sarlos [14] was the first to introduce subspace embedding, which has been widely employed in the field of numerical linear algebra for the past ten years. Many studies have been conducted on this topic, including those by [13, 15, 16, 17]. For a more comprehensive overview, interested readers can refer to [18]. The definition of subspace embedding is shown in Definition C.3.
Least squares regression.The fitting method referred to as "total least squares" has only recently been named as such in literature [14]. However, it is not a new method and has been extensively studied in the statistical literature for a long time under various names, such as "orthogonal regression," "errors-in-variables," and "measurement errors." In fact, the problem of univariate fitting, \(n=1,d=1\), was first discussed in 1877 by Adcock [1], and subsequent contributions were made by Pearson [10], Koopmans [11], and York [12]. The method has been rediscovered several times, often independently, and around 50 years ago, it was extended by Gleser [13] to multivariate problems of dimension \(n>1\) and \(d>1\).
In more recent times, the total least-squares method has gained attention beyond the field of statistics. Golub and Van Loan [14] were the first to study this problem in the field of numerical analysis, and they developed an algorithm based on the singular value decomposition. Staar [15] independently arrived at the same concept through geometrical insight into the properties of the singular value decomposition. Van Huffel and Vandewalle [13] extended Golub and Van Loan's algorithm to cover all cases where their algorithm fails to produce a solution. They described the properties of these non-generic total least-squares problems and proved that their proposed generalization still satisfies the total least-squares criteria if additional constraints are imposed on the solution space. This approach, which appears to be different from the multivariate EIV regression analysis method studied by Gleser, is actually equivalent to it. Gleser's method is based on an eigenvalue-eigenvector analysis, while the total least-squares method uses the singular value decomposition, which is more robust numerically in terms of algorithmic implementation. Moreover, the total least-squares algorithm can compute the minimum norm solution whenever the total least-squares solution is not unique.
Attention matrix.The attention matrix is a square matrix that represents correlations between words or tokens in natural language text. It has rows and columns that correspond to each token, and its entries denote the degree of correlation between them. This matrix is employed to determine the significance of each input token in a sequence when generating an output. In an attention
mechanism, each input token is assigned a weight or score, which indicates its relevance or importance to the current output being produced. These scores are calculated based on a similarity function that compares the current output state to the input states.
There are several methods that attempt to estimate the heavy entries of the attention matrix by constraining the attention to local neighbors of queries using techniques such as Locality Sensitive Hashing (LSH) [10, 14, 15] or k-means clustering [13]. Another approach is to use random feature maps of Gaussian or exponential kernels to approximate the attention matrix [16]. Recently, Chen et al. [17] demonstrated that combining LSH-based and random feature-based methods is more effective at approximating the attention matrix.
The computation of inner product attention [11, 12, 13, 14, 15, 16, 17, 18] is also a crucial task in contemporary machine learning. It is necessary for training large language models (LLMs) such as Transformer [19], GPT-1 [15], BERT [16], GPT-2 [20], GPT-3 [21], and ChatGPT, which are capable to handle natural language more effectively than conventional algorithms or smaller models. [1] provides both algorithm and hardness for static attention computation. [14] provides both algorithm and hardness for dynamically maintaining the attention matrix. [14] shows how to compute the attention matrix differently privately. [14] studies the exponential regression. [13, 15] considers softmax regression. [15] provides an algorithm for rescaled softmax regression, which has a different formulation than exponential regression [14] and softmax regression [15].
Sketching.Sketching techniques are powerful tools used to speed up machine learning algorithms and optimization. The central idea is to compress a large input matrix into a much smaller sketched matrix while preserving the important characteristics of the large matrix. This enables the algorithm to process the smaller matrix instead of the original large one, so the running time may be greatly reduced. Many previous works have developed sketching algorithms with strong theoretical guarantees. For example, the Johnson-Lindenstrauss lemma in [13] shows that projecting a set of points in a high-dimensional space onto a suitable lower-dimensional subspace approximately preserves the pairwise distances between the points. This property supports the development of faster algorithms for problems such as nearest neighbor search. Moreover, as shown in [1], the Fast Johnson-Lindenstrauss Transform (FJLT) gives a certain family of structured random projections which can be applied to a matrix in nearly linear time.
Typically, there are two methods for employing sketching matrices. The first is called sketch-and-solve, which applies sketching a fixed number of times up front. The second is called iterate-and-sketch: sketching is employed during each iteration of the optimization algorithm, together with a robust analysis framework.
Sketch-and-solve has led to faster algorithms in several domains: in the low-rank approximation and linear regression [18, 19], by using FJLT, one can compress the feature matrix down to a short and wide sketch, so it is much easier to solve the smaller regression problem to get an approximated solution to the original problem, as in [11, 19], which gives nearly input sparsity time algorithms; in kernel methods, in [10], the sketching methods, like Random Kitchen Sinks, can be used to approximate the large kernel matrices; in tensor method, the works like [1, 12, 19, 16] present a technique, called TensorSketch which can compress tensors down to much smaller core tensors, enabling faster algorithms for problems like tensor regression [14, 15, 16, 16], CP decomposition [17]; in column subset selection, sketching the data matrix speeds up column selection with provable approximation guarantees [13, 15, 16, 13]. Moreover, it can be used for finding optimal bound [15], designing an efficient neural network training method [16]
Beyond the classic sketch-and-solve paradigm, sketching has been adapted to many iterative optimization algorithms. Notable examples include but not limited to non-convex optimization [15, 16, 17, 18, 19], discrepancy problem [20, 21], John Ellipsoid computation [20], the Frank-Wolfe algorithm [20, 21], linear programming [22, 23, 24, 25, 26, 27, 28, 29, 30], reinforcement learning [21], dynamic kernel estimation [33], empirical risk minimization [21, 22], federated learning [20], semi-definite programming [21], regression inspired by softmax [22, 21, 22, 23], rational database [33, 34], matrix sensing [21], submodular maximization [22], trace estimation [23], and projection maintenance [20].
Overall, sketching is now an indispensable tool for handling large-scale machine learning tasks. Carefully designed sketches enable dramatic speedups while bringing little approximation error.
Roadmap.In Section 2, we introduce the basic mathematical notations. In Section 3, we give an overview of the techniques that we use in this paper. In Section 4, we give a conclusion of this paper.
## 2 Preliminary
In this section, we introduce notations used throughout the paper.
We use \(\mathbb{R}\) to denote the set of all real numbers. We use \(\mathbb{Z}\) to denote the set of all integers and use \(\mathbb{Z}_{+}\) to denote the set containing all positive integers. For any \(n\in\mathbb{Z}_{+}\), we define \([n]:=\{1,2,3,\ldots,n\}\).
For all \(d\in\mathbb{Z}_{+}\), we use \(\mathbb{R}^{d}\) to denote the set of all vectors with length \(d\) and real entries and use \(\mathbb{Z}_{+}^{d}\) to denote the set containing all vectors with length \(d\) and entries of positive integers. For a vector \(x\in\mathbb{R}^{n}\), we use \(\|x\|_{1}\) to denote the \(\ell_{1}\) norm, use \(\|x\|_{2}\) to denote the \(\ell_{2}\) norm, i.e., \(\|x\|_{2}=\left(\sum_{i=1}^{n}x_{i}^{2}\right)^{\frac{1}{2}}\), and use \(\|x\|_{\infty}\) to denote the \(\ell_{\infty}\) norm.
For all \(m,n\in\mathbb{Z}_{+}\), we use \(\mathbb{R}^{m\times n}\) to denote the set containing all matrices with \(m\) rows, \(n\) columns, and real entries. For a matrix \(A\in\mathbb{R}^{m\times n}\), we use \(\|A\|\) to denote the spectral norm of \(A\), i.e., \(\|A\|=\max_{\|x\|_{2}=1}\|Ax\|_{2}\). We use \(A^{\top}\) to denote the transpose of \(A\). We use \(\sigma_{\min}(A)\) to denote the minimum singular value of \(A\), i.e., \(\sigma_{\min}(A)=\min_{\|x\|_{2}=1}\|Ax\|_{2}\). Accordingly, we use \(\sigma_{\max}(A)\) to denote the maximum singular value of \(A\), so \(\sigma_{\max}(A)=\|A\|\). Furthermore, we use \(A^{\dagger}\) to denote the pseudo inverse of \(A\) and use \(A^{-1}\) to denote the true inverse of \(A\). The true inverse exists if \(m=n\) and \(\operatorname{rank}\left(A\right)=m\). For all \(m_{1},m_{2},n\in\mathbb{Z}_{+}\), for all the matrices \(A\in\mathbb{R}^{m_{1}\times n}\) and \(B\in\mathbb{R}^{m_{2}\times n}\), we use \(A\oplus B\) to denote the matrix \(\begin{bmatrix}A\\ B\end{bmatrix}\). Correspondingly, for all \(m_{1},m_{2},\ldots,m_{p},p,n\in\mathbb{Z}_{+}\), for all the matrices \(A_{1}\in\mathbb{R}^{m_{1}\times n}\), \(A_{2}\in\mathbb{R}^{m_{2}\times n}\), \(\cdots\), \(A_{p}\in\mathbb{R}^{m_{p}\times n}\), we use \(\oplus_{i=1}^{p}A_{i}\) to denote \(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{p}\).
We write \(x=y\pm\epsilon\) if \(x\in[y-\epsilon,y+\epsilon]\). \(\mathbf{1}_{n}\) represents a \(n\)-dimensional vector, whose entries are all \(1\), and \(\mathbf{0}_{n}\) represents a \(n\)-dimensional vector, whose entries are all \(0\). A matrix \(P\) is a projection matrix if \(P=P^{2}\). Usually \(\|P\|\leq 1\). For a symmetric matrix \(B\) belonging to \(\mathbb{R}^{n\times n}\), we define \(B\) as positive semidefinite (denoted as \(B\succeq 0\)) when, for any vectors \(x\) in \(\mathbb{R}^{n}\), the inequality \(x^{\top}Bx\geq 0\) holds true. We also call \(B\) a PSD matrix for simplicity.
## 3 Technique Overview
In Section 3.1, we present the techniques we use to show the properties of a particular case of the odd power algorithm. In Section 3.2, we show the techniques for proving the correctness and the runtime of a particular case of the even power algorithm, and we also introduce a way to bound the forward error of PSD regression, which is used to support that bound. In Section 3.3, we offer the methods to generalize the particular case to all the even power cases. In Section 3.4, on the other hand, we elucidate the methods of generalizing the particular case to all the odd power cases. In Section 3.5, we introduce the techniques which are used for analyzing the exponential regression.
Our main purpose is to prove the formal versions of Theorem 1.4 and Theorem 1.5. These results are nontrivial to prove. Therefore, to achieve this goal, we start by analyzing the correctness and running time of relatively simple cases, namely
\[\min_{x\in\mathbb{R}^{d}}\|AA^{\top}Ax-b_{3}\|_{2} \tag{3}\]
and
\[\min_{x\in\mathbb{R}^{d}}\|A^{\top}AA^{\top}Ax-b_{4}\|_{2}. \tag{4}\]
Each of the techniques of these problems is introduced in Section 3.1 and Section 3.2, respectively.
To prove the formal version of Theorem 1.4, we regard the regression problem of Eq. (4) as our base case. We prove all we need for the inductive case in the induction hypothesis (see Lemma J.1). Then, by induction, the formal version of Theorem 1.4 can be proved.
Furthermore, the formal version of Theorem 1.5 can be proved by combining the even case (the formal version of Theorem 1.4) and the linear case (Lemma F.1).
### A particular case for odd power algorithm
In this section, we analyze the technique for the algorithm (see Algorithm 3) that solves Eq. (3), which corresponds to the simplified case of the formal version of Theorem 1.5. For its correctness part, the main challenge is to bound
\[\|AA^{\top}Ax-b_{3}\|_{2}. \tag{5}\]
Using the mathematical properties, including but not limited to the triangle inequality, properties of the norm and \(O(\cdot)\), and the condition number, we can bound Eq. (5) by the sum of
\[\|A\|\cdot\epsilon_{2}\|y^{\prime}\|_{2} \tag{6}\]
and
\[(1+\epsilon_{1})\cdot\operatorname{OPT}. \tag{7}\]
Eq. (7) is already bounded by the definition of \(\epsilon_{1}\) and \(\operatorname{OPT}\), but it is not trivial to bound Eq. (6). \(y^{\prime}\) is defined to be the output of the fast linear regression Algorithm (see Algorithm 1) and \(y_{*}\) is defined to be the exact solution to this regression problem. Therefore, we can use the triangle inequality to bound \(\|y^{\prime}\|_{2}\) by the sum of \(\|y_{*}\|_{2}\) and \(\|y^{\prime}-y_{*}\|_{2}\).
Bounding \(\|y^{\prime}-y_{*}\|_{2}\), we can use the forward error of the simple matrix, namely Lemma D.4. We get
\[\|y^{\prime}-y_{*}\|_{2}\leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{ \min}(A)^{-1}\operatorname{OPT}.\]
Furthermore, for \(\|y_{*}\|_{2}\), it suffices to bound \(\|A^{\dagger}b_{3}\|_{2}\) because it is the exact solution to the regression problem. Then, by using mathematical properties of the normed vector space, we show
\[\|y_{*}\|_{2}\leq\sigma_{\min}(A)^{-1}\cdot\|b_{3}\|_{2}.\]
Combining everything together, we can bound Eq. (6). Together with Eq. (7), it can be used to show the bound of Eq. (5). Therefore, we finish showing the correctness part of this regression problem.
For the running time of the algorithm that solves Eq. (3), note that this algorithm consists of running Algorithm 1 and Algorithm 2. We set the accuracy parameter \(\epsilon_{1}\) of Algorithm 1 so that the accuracy parameter \(\epsilon_{3}\) of the algorithm solving Eq. (3) satisfies \(\epsilon_{3}=10\cdot\epsilon_{1}\), and the accuracy parameter \(\epsilon_{2}\) of Algorithm 2 so that \(\epsilon_{3}=\epsilon_{2}\cdot\kappa(A)\); similarly, we set the failure probabilities \(\delta_{1}\) of Algorithm 1 and \(\delta_{2}\) of Algorithm 2 so that the failure probability \(\delta_{3}\) of the algorithm solving Eq. (3) satisfies \(\delta_{3}=2\cdot\delta_{1}=2\cdot\delta_{2}\). Putting everything together, we can show the running time is
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{3})\cdot\log^{2}(n/ \delta_{3})).\]
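The following numpy sketch mirrors this two-stage strategy on a toy instance, with exact least-squares solves standing in for the sketched Algorithms 1 and 2 (which are the actual source of the speedup); all sizes and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 15
A = rng.standard_normal((n, d))
b3 = rng.standard_normal(n)

# Stage 1 (stand-in for Algorithm 1): y' ~ argmin_y || A y - b3 ||_2.
y_prime = np.linalg.lstsq(A, b3, rcond=None)[0]

# Stage 2 (stand-in for Algorithm 2): x' solves (A^T A) x = y'.
x_prime = np.linalg.solve(A.T @ A, y_prime)

# With exact sub-solves the composed residual || A A^T A x' - b3 ||_2 equals
# OPT = || A y' - b3 ||_2, because the column span of A A^T A is that of A.
residual = np.linalg.norm(A @ (A.T @ (A @ x_prime)) - b3)
opt = np.linalg.norm(A @ y_prime - b3)
assert abs(residual - opt) <= 1e-6 * max(opt, 1.0)
```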
### A particular case for even power algorithm
In this section, we examine the methods used to analyze Algorithm 4 to solve Eq. (4), which is the simplified form of the problem in Theorem 1.4. It is more complicated to analyze than the particular case of the odd power algorithm because, to the best of our knowledge, there was no past literature proving the forward error for PSD matrices.
Therefore, before introducing the methods of analyzing the properties of Algorithm 4, we first focus on the techniques for deriving the forward error for PSD matrices. The purpose of the forward error is to bound \(\|x^{\prime}-x_{*}\|_{2}\), where \(x_{*}\in\mathbb{R}^{d}\) denotes the exact solution to the regression problem
\[\min_{x\in\mathbb{R}^{d}}\|A^{\top}Ax-b_{2}\|_{2}, \tag{8}\]
and \(x^{\prime}\in\mathbb{R}^{d}\) denotes the output of Algorithm 2, which, based on Lemma D.5, is the vector satisfying
\[\|A^{\top}Ax^{\prime}-b\|_{2}\leq\epsilon_{2}\|b\|_{2}. \tag{9}\]
**Remark 3.1**.: _Solving Eq. (4) is equivalent to applying Algorithm 2 twice, so it suffices to find the forwarded error for PSD matrices._
By using the property of spectral norm, we show
\[\|x^{\prime}-x_{*}\|_{2}\leq\|A^{\top}A(x^{\prime}-x_{*})\|_{2} \cdot\|(A^{\top}A)^{\dagger}\|.\]
We can directly apply Fact A.4 to bound \(\|(A^{\top}A)^{\dagger}\|\), but for \(\|A^{\top}A(x^{\prime}-x_{*})\|_{2}\), we have to use the Pythagorean theorem to split that into the difference of
\[\|A^{\top}Ax^{\prime}-b\|_{2}^{2}\]
and
\[\|A^{\top}Ax_{*}-b\|_{2}^{2}.\]
As \(x_{*}\) is the exact solution, we get \(\|A^{\top}Ax_{*}-b\|_{2}^{2}=0\), and by Eq. (9), we get our desired result
\[\|x^{\prime}-x_{*}\|_{2}\leq\epsilon_{2}\cdot\frac{1}{\sigma_{\min}(A)^{2}}\cdot \|b\|_{2}.\]
Now, we are ready to introduce the technique for analyzing Algorithm 4. For its correctness part, the goal is to bound \(\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2}\). We use the triangle inequality to split it into the sum of
\[\|A^{\top}AA^{\top}Ax^{\prime}-A^{\top}Ay^{\prime}\|_{2}\]
and
\[\|A^{\top}Ay^{\prime}-b_{4}\|_{2}.\]
By using Eq. (9), we can bound \(\|A^{\top}Ay^{\prime}-b_{4}\|_{2}\) by \(\epsilon_{2}\|b\|_{2}\), but for \(\|A^{\top}AA^{\top}Ax^{\prime}-A^{\top}Ay^{\prime}\|_{2}\), we can apply Eq. (9) (only by replacing "\(b\)" in Eq. (9) by "\(y^{\prime}\)") again and Fact A.4, we can show it is bounded by
\[\sigma_{\max}(A)^{2}\cdot\epsilon_{2}\|y^{\prime}\|_{2}.\]
At this moment, we are only left with bounding \(\|y^{\prime}\|_{2}\). By the triangle inequality, we know it can be bounded by
\[\|y^{\prime}-y_{*}\|_{2}+\|y_{*}\|_{2}.\]
For the first term, it can be bounded by the forward error that we explained above. For the second term, since it is the exact solution of Eq. (8), we can bound it by \(\sigma_{\min}(A)^{-2}\cdot\|b_{4}\|_{2}\) by using the property of the spectral norm and Fact A.4.
Therefore, since we bound all the terms of the split of \(\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2}\), we finish showing the techniques for the correctness part.
Next, we consider the running time. To determine the running time of the algorithm that solves Eq. (4), note that we need to run Algorithm 2 twice. To ensure accuracy, we set the accuracy parameter of the algorithm solving Eq. (4), \(\epsilon_{4}\), to be \(10\cdot\epsilon_{2}\cdot\kappa(A)^{2}\) (where \(\epsilon_{2}\) is the accuracy parameter of Algorithm 2). We also set the failure probability of the algorithm solving Eq. (4), \(\delta_{4}\), to be \(2\cdot\delta_{2}\) (where \(\delta_{2}\) is the failure probability of Algorithm 2). By combining all of these factors, we can determine that the running time is given by the following expression:
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{4})\cdot\log^{2}(n/\delta_{4})).\]
### The general case for even power algorithm
In this section, we introduce the method of how we generalize the algorithm (see Algorithm 4) solving the particular case of the regression problem, namely Eq. (8), to the regression problems containing any even number of the matrices. To show the property of such an algorithm (see Algorithm 5), we use mathematical induction.
First, to make the induction more organized and verifiable, we introduce the induction hypothesis.
We assume for all \(i\in[k]\), we have
1. \(\|(A^{\top}A)^{i}b_{i}-b_{0}\|_{2}\leq\epsilon_{i}\|b_{0}\|_{2}\)
2. \(\|b_{i}\|_{2}\leq 2\sigma_{\min}(A)^{-2i}\|b_{0}\|_{2}\)
3. \(\epsilon_{i}\leq 0.5\epsilon_{i-1}\)
4. The running time is \(C\cdot((nd+d^{3})\cdot k\cdot\log(\kappa(A)/\epsilon_{k})\cdot\log(1/\delta_{k}))\)
5. The failure probability is \(\delta_{1}+\delta_{2}+\cdots+\delta_{k}\)
We want to show that these five statements also hold for \(i=k+1\). To prove the first statement, we need to analyze \(\|(A^{\top}A)^{k+1}b_{k+1}-b_{0}\|_{2}\).
We use the triangle inequality and spectral norm properties to bound this equation, namely
\[\|(A^{\top}A)^{k+1}b_{k+1}-b_{0}\|_{2}\leq\|(A^{\top}A)^{k}\|\cdot\|A^{\top}Ab_ {k+1}-b_{k}\|_{2}+\|(A^{\top}A)^{k}b_{k}-b_{0}\|_{2}. \tag{10}\]
1. \(\|(A^{\top}A)^{k}b_{k}-b_{0}\|_{2}\) can be bounded by \(\epsilon_{k}\|b_{0}\|_{2}\) based on our assumption, or base case, when \(i\in[k]\).
2. \(\|(A^{\top}A)^{k}\|\) can be bounded by \(\sigma_{\max}(A)^{2k}\) based on Fact A.4.
3. \(\|A^{\top}Ab_{k+1}-b_{k}\|_{2}\) can be bounded by \(0.1\epsilon_{k+1}\kappa(A)^{-2k}\|b_{k}\|_{2}\) based on our two matrices version PSD regression (see Lemma G.1).
Therefore, by using these techniques, we can bound Eq. (10), which completes the proof of the first statement.
For the second statement, the goal is to bound \(\|b_{k+1}\|_{2}\). By using the property of the spectral norm, we can get
\[\|b_{k+1}\|_{2}\leq\|((A^{\top}A)^{k+1})^{-1}\|\cdot\|(A^{\top}A)^{k+1}b_{k+1} \|_{2}. \tag{11}\]
1. \(\|((A^{\top}A)^{k+1})^{-1}\|\) can be bounded by \(2\sigma_{\min}(A)^{-2(k+1)}\) based on Fact A.4.
2. \(\|(A^{\top}A)^{k+1}b_{k+1}\|_{2}\) can be bounded by \(2\|b_{0}\|_{2}\) based on the first statement and the triangle inequality.
Using these techniques, we can bound Eq. (11), which completes the proof of the second statement.
The third statement can be proved by choosing \(\epsilon\) to satisfy certain conditions; the fourth statement can be proved by adding the time from the previous step and the time from this step; the fifth statement can be proved by the union bound.
By all of these techniques, we can prove the induction hypothesis. Now, by regarding Lemma G.1 as the base case and the induction hypothesis as the inductive case, we can prove the formal version of Theorem 1.4.
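The induction above corresponds to a simple alternating scheme, sketched below with an exact PSD solve standing in for the sketched Algorithm 2 (which is what makes each step cheap); the sizes are illustrative assumptions.

```python
import numpy as np

def solve_even_power(A, b0, j):
    """Approximately solve (A^T A)^j x = b0 by j successive PSD solves,
    mirroring Algorithm 5 with an exact solve in place of Algorithm 2."""
    G = A.T @ A                    # the fast algorithm never forms this matrix
    b = b0.copy()
    for _ in range(j):
        b = np.linalg.solve(G, b)  # b_{i+1} ~ argmin_x || A^T A x - b_i ||_2
    return b

rng = np.random.default_rng(3)
n, d, j = 800, 10, 3
A = rng.standard_normal((n, d))
b0 = rng.standard_normal(d)

x = solve_even_power(A, b0, j)
residual = np.linalg.norm(np.linalg.matrix_power(A.T @ A, j) @ x - b0)
print(residual / np.linalg.norm(b0))   # small, up to conditioning effects
```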
### The general case for odd power algorithm
In this section, we introduce the technique of how we analyze the algorithm which solves the regression problem containing an arbitrary odd number \(2j+1\) of matrices, namely
\[\min_{x\in\mathbb{R}^{d}}\|A(A^{\top}A)^{j}x-b\|_{2}.\]
Our strategy is to apply the fast linear regression algorithm first, namely Algorithm 1. Then, we apply the even power algorithm to solve the regression problem containing \(2j\) matrices, namely
\[\min_{x\in\mathbb{R}^{d}}\|(A^{\top}A)^{j}x-b\|_{2}.\]
The primary difficulty for ensuring correctness is to establish an upper bound for the expression
\[\|A(A^{\top}A)^{j}x^{\prime}-b_{\text{odd}}\|_{2}. \tag{12}\]
This can be accomplished using mathematical techniques such as the triangle inequality, norm properties, the condition number, and the \(O(\cdot)\) notation. Specifically, we can bound Eq. (12) by the sum of two terms: the first term is given by
\[\|A(A^{\top}A)^{j}x^{\prime}-Ay^{\prime}\|_{2} \tag{13}\]
and the second term is given by Eq. (7), which involves a constant factor and the optimal solution OPT.
While Eq. (7) can be bounded trivially using the definitions of \(\epsilon_{1}\) and \(\operatorname{OPT}\), bounding Eq. (13) is more challenging. Let \(y^{\prime}\) denote the output of the fast linear regression algorithm and \(y_{*}\) the exact solution. The \(\ell_{2}\) norm of \(y^{\prime}\) can be bounded, using the triangle inequality, by the sum of the \(\ell_{2}\) norm of \(y_{*}\) and the \(\ell_{2}\) norm of the difference between \(y^{\prime}\) and \(y_{*}\). The latter can be bounded using the forward error of the matrix, which is obtained using Lemma D.4. We can use this to bound \(\|y^{\prime}-y_{*}\|_{2}\) by
\[O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1}\operatorname{OPT}.\]
To bound \(\|y_{*}\|_{2}\), it suffices to bound \(\|A^{\dagger}b_{\text{odd}}\|_{2}\) because \(y_{*}\) is the exact solution to the linear regression problem. By using mathematical properties of normed vector spaces, we can show that
\[\|y_{*}\|_{2}\leq\sigma_{\min}(A)^{-1}\cdot\|b_{\text{odd}}\|_{2}.\]
Combining all the aforementioned bounds allows us to bound (13). Together with (7), this can be used to show the bound of (12), thereby demonstrating the correctness of the regression problem.
For the running time, we get the same as the even power algorithm, namely
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\text{final}})\cdot\log^{2}(n/ \delta_{\text{final}})).\]
### Exp Kernel
Our purpose is to find a \(\widehat{x}\in\mathbb{R}^{n}\) such that
\[\|G\widehat{x}-y\|_{2}\leq\epsilon\|y\|_{2},\]
where \(G\in\mathbb{R}^{n\times n}\) and \(\epsilon\) is an arbitrary small real number.
We let \(\widehat{\epsilon}=\epsilon/4\), and based on Theorem L.8, we can get \(W_{g}(X)\), an \(\widehat{\epsilon}\)-approximation to \(Z\).
We use Algorithm 7 to get our desired result. By Lemma L.2 (an important property which supports Algorithm 7) and the SVD defined in this algorithm, we have
\[SW_{g}(X)^{\top}=U\Sigma V^{\top}\]
and
\[R=U\Sigma^{-2}.\]
On the other hand, by using Lemma L.2 and the above equations, we can show
\[\kappa(R^{\top}SW_{g}(X)^{\top})\leq 2\kappa(W_{g}(X));\]
by combining with Fact A.4, we can get
\[\kappa(R^{\top}S)\leq 2\kappa(W_{g}(X))^{2}; \tag{14}\]
by combining all previous results with \(V^{\top}V=I\) and \(U^{\top}U=I\) (as they are orthogonal), we have
\[\|SW_{g}(X)^{\top}W_{g}(X)S^{\top}Rx\|_{2}=1.\]
Then, we implement Lemma D.3. After \(t=\log(1/\widehat{\epsilon})\) iterations, we have
\[\|\Phi\cdot(z_{t}-z^{*})\|_{2}\leq\widehat{\epsilon}\cdot\|\Phi \cdot(z_{0}-z^{*})\|_{2}.\]
By using this important property, we can show the following two equations.
* \(\|R^{\top}SW_{g}(X)^{\top}W_{g}(X)x_{t}-R^{\top}Sy\|_{2}\leq\widehat{\epsilon }\cdot\sigma_{\max}(R^{\top}S)\cdot\|y\|_{2}\), and
* \(\|R^{\top}SW_{g}(X)^{\top}W_{g}(X)x_{t}-R^{\top}Sy\|_{2}\geq\sigma_{\min}(R^{ \top}S)\cdot\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y\|_{2}\).
Combining these together, we get
\[\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y\|_{2}\leq 2\kappa(W_{g}(X))^{2} \widehat{\epsilon}\|y\|_{2}.\]
Finally, by using
\[\|W_{g}(X)^{\top}W_{g}(X)x-y\|_{2}\leq(1+\widehat{\epsilon})\|Z^ {\top}Zx-y\|_{2},\]
we can get
\[\|Z^{\top}Zx_{t}-y\|_{2}\leq\epsilon\|y\|_{2}.\]
To compute the running time, we need to combine three parts together, namely computing \(W_{g}(X)\) (see Theorem L.8), applying \(S\) to \(W_{g}(X)\) (by using the FFT algorithm), and computing the SVD of \(SW_{g}(X)^{\top}\). Therefore, we can get
\[\epsilon^{-2}n\beta\cdot\mathrm{poly}(\log(nd/\epsilon\delta)) \cdot\log(\kappa/\epsilon)+(nd+(\epsilon^{-2}\beta)^{\omega})\cdot\log(nd/ \epsilon\delta).\]
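The following is a minimal numpy sketch of the sketch-then-precondition idea behind this analysis: a Gaussian sketch of a stand-in feature matrix \(W\) (playing the role of \(W_{g}(X)\)) builds a preconditioner, and a preconditioned fixed-point iteration drives the residual of \(W^{\top}Wx=y\) down geometrically. It is not Algorithm 7 itself, which uses a structured sketch and the SVD-based preconditioner \(R=U\Sigma^{-2}\); the sizes, the Gaussian sketch, and the iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m_feat, m_sketch = 10, 5000, 1500     # illustrative sizes
W = rng.standard_normal((m_feat, n))     # stand-in for the feature matrix W_g(X)
y = rng.standard_normal(n)

# Sketch the feature dimension and invert the small sketched Gram matrix.
# If (1 - eps) W^T W <= (SW)^T (SW) <= (1 + eps) W^T W, each iteration below
# contracts the error by roughly a factor of eps.
S = rng.standard_normal((m_sketch, m_feat)) / np.sqrt(m_sketch)
SW = S @ W
M = np.linalg.inv(SW.T @ SW)             # preconditioner ~ (W^T W)^{-1}

x = np.zeros(n)
for _ in range(40):
    x = x - M @ (W.T @ (W @ x) - y)      # preconditioned Richardson step

print(np.linalg.norm(W.T @ (W @ x) - y) / np.linalg.norm(y))  # tiny residual
```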
## 4 Conclusion
Large language models have demonstrated remarkable performance in various tasks. One significant aspect, from a computational standpoint, is the computation of the attention matrix. Earlier research had thoroughly examined the feasibility and limitations of approximating the attention matrix. In this study, we introduce and analyze a novel problem known as the attention kernel regression problem. We provide a novel way to solve this problem, demonstrating how to effectively address attention kernel regression in the input sparsity time of the data matrix.
We note that while our algorithm for regression against a product of matrices runs in nearly linear time, the runtime dependence on the number of matrices \(j\) is still linear. In contrast, the squaring method only depends logarithmically on \(j\). Unfortunately, our algorithm has fundamental limits on improving the dependence on \(j\) due to its alternating-solve nature. It would be interesting to devise an algorithm that both runs in nearly linear time and has better dependence on \(j\). As our work is theoretical in nature, it does not have an explicit negative societal impact.
## Acknowledgement
Lichen Zhang is supported by NSF grant No. 1955217 and NSF grant No. 2022448.
## Appendix
Roadmap.In Section A, we introduce the notations, basic definitions, and facts that we use. In Section B, we present the background of the \(\exp\) of the inner product kernel. In Section C, we discuss the background of standard sketching. In Section D, we introduce the background of the high precision sketching. In Section E, we analyze the properties of the attention regression and the multiple matrices regression. In Section F, we show the fast linear regression algorithm (see Algorithm 1), solving the regression problem containing one matrix, and analyze its properties, including its correctness and running time. In Section G, we present the fast PSD regression algorithm (see Algorithm 2), solving the regression problem containing two matrices, and analyze its properties; this is also the base case of one of our main results (see Theorem J.2). In Section H, we offer a new algorithm (see Algorithm 3), which solves the regression problem containing three matrices, and analyze its correctness and running time. In Section I, we propose an algorithm (see Algorithm 4), which solves the regression problem containing four matrices, and analyze its properties. In Section J, we summarize and utilize the patterns of the previous sections, provide the algorithm (see Algorithm 5) which can solve the regression problem containing any even number of matrices, and use mathematical induction to prove one of our main results (namely the formal version of Theorem 1.4). Correspondingly, in Section K, we formulate the algorithm (see Algorithm 6) which can solve the regression problem containing any odd number of matrices and prove the other main result (namely the formal version of Theorem 1.5). In Section L, we analyze the attention kernel.
## Appendix A Preliminary
In Section A.1, we introduce the basic notations. In Section A.2, we introduce some basic definitions and useful facts. In Section A.3, we present some background about attention computation.
### Notations.
Here, we start to introduce the notations. We use \(\mathbb{R}\) to denote the set containing all real numbers. We use \(\mathbb{Z}\) to denote the set containing all integers and use \(\mathbb{Z}_{+}\) to denote the set containing all positive integers. For any \(n\in\mathbb{Z}_{+}\), we define \([n]:=\{1,2,3,\ldots,n\}\). For all \(d\in\mathbb{Z}_{+}\), we use \(\mathbb{R}^{d}\) to denote the set containing all vectors with length \(d\) and real entries and use \(\mathbb{Z}_{+}^{d}\) to denote the set containing all vectors with length \(d\) and entries of positive integers.
For a vector \(x\in\mathbb{R}^{n}\), we use \(\left\|x\right\|_{1}\) to denote the \(\ell_{1}\) norm, use \(\left\|x\right\|_{2}\) to denote the \(\ell_{2}\) norm, i.e., \(\left\|x\right\|_{2}=\left(\sum_{i=1}^{n}x_{i}^{2}\right)^{\frac{1}{2}}\), and use \(\left\|x\right\|_{\infty}\) to denote the \(\ell_{\infty}\) norm.
For all \(m,n\in\mathbb{Z}_{+}\), we use \(\mathbb{R}^{m\times n}\) to denote the set containing all matrices with \(m\) rows, \(n\) columns, and real entries. For a matrix \(A\in\mathbb{R}^{m\times n}\), we use \(\left\|A\right\|\) to denote the spectral norm of \(A\), i.e., \(\left\|A\right\|=\max_{\left\|x\right\|_{2}=1}\left\|Ax\right\|_{2}\). We use \(A^{\top}\) to denote the transpose of \(A\). We use \(\sigma_{\min}(A)\) to denote the minimum singular value of \(A\), i.e., \(\sigma_{\min}(A)=\min_{\left\|x\right\|_{2}=1}\left\|Ax\right\|_{2}\). Accordingly, we use \(\sigma_{\max}(A)\) to denote the maximum singular value of \(A\), so \(\sigma_{\max}(A)=\left\|A\right\|\). Furthermore, we use \(A^{\dagger}\) to denote the pseudo inverse of \(A\) and use \(A^{-1}\) to denote the true inverse of \(A\). The true inverse exists if \(m=n\) and \(\operatorname{rank}\left(A\right)=m\). For all \(m_{1},m_{2},n\in\mathbb{Z}_{+}\), for all the matrices \(A\in\mathbb{R}^{m_{1}\times n}\) and \(B\in\mathbb{R}^{m_{2}\times n}\), we use \(A\oplus B\) to denote the matrix \(\begin{bmatrix}A\\ B\end{bmatrix}\). Correspondingly, for all \(m_{1},m_{2},\ldots,m_{p},p,n\in\mathbb{Z}_{+}\), for all the matrices \(A_{1}\in\mathbb{R}^{m_{1}\times n}\), \(A_{2}\in\mathbb{R}^{m_{2}\times n}\), \(\cdots\), \(A_{p}\in\mathbb{R}^{m_{p}\times n}\), we use \(\oplus_{i=1}^{p}A_{i}\) to denote \(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{p}\).
We write \(x=y\pm\epsilon\) if \(x\in[y-\epsilon,y+\epsilon]\).
\(\mathbf{1}_{n}\) represents a \(n\)-dimensional vector, whose entries are all \(1\), and \(\mathbf{0}_{n}\) represents a \(n\)-dimensional vector, whose entries are all \(0\). We say a matrix \(P\) is a projection matrix if \(P=P^{2}\). Usually \(\|P\|\leq 1\).
### Definitions and Facts
**Fact A.1**.: _Let \(P=A(A^{\top}A)^{-1}A^{\top}\), then we have \(P=P^{2}\)._
Proof.: We have
\[P^{2} = A(A^{\top}A)^{-1}A^{\top}A(A^{\top}A)^{-1}A^{\top}\] \[= A(A^{\top}A)^{-1}(A^{\top}A)(A^{\top}A)^{-1}A^{\top}\] \[= A(A^{\top}A)^{-1}A^{\top}\] \[= P,\]
where the first step follows from the definition of \(P\) (see from the fact statement), the second step follows from simple algebra (the associative law of matrix multiplication), the third step follows from simple algebra, and the last step follows from the definition of \(P\) (see from the fact statement).
Thus, we complete the proof.
In this section, we introduce the basic definitions and facts.
**Definition A.2**.: _We use \(\kappa(A)\) to denote the condition number of \(A\), i.e.,_
\[\kappa(A):=\sigma_{\max}(A)/\sigma_{\min}(A).\]
**Definition A.3** (Hadamard matrix).: _A Hadamard matrix is a square matrix of size \(n\) with entries of either \(1\) or \(-1\), where each row of the matrix is orthogonal to every other row._
**Fact A.4**.: _We have_
* _For any matrix_ \(A\)_,_ \(\|(A^{\top}A)^{\dagger}\|=\sigma_{\min}(A)^{-2}\)_._
* _For any matrix_ \(A\)_,_ \(\|A^{\top}A\|=\sigma_{\max}(A)^{2}\)_._
* _For any matrix_ \(A\)_, for any positive integer_ \(k\)_,_ \(\|((A^{\top}A)^{k})^{\dagger}\|\leq\sigma_{\min}(A)^{-2k}\)_._
* _For any orthonormal column basis_ \(U\in\mathbb{R}^{n\times d}\) _(_\(n\geq d\)_), we have_ \(\|Ux\|_{2}=\|x\|_{2}\)_._
* _For any matrix_ \(A\)_, we have_ \(\|Ax\|_{2}\leq\|A\|\cdot\|x\|_{2}=\sigma_{\max}(A)\cdot\|x\|_{2}\)_._
* _For any matrix_ \(A\)_, we have_ \(\|Ax\|_{2}\geq\sigma_{\min}(A)\cdot\|x\|_{2}\)_._
* _For any matrix_ \(A\)_,_ \(\kappa(A)=\kappa(A^{\dagger})\)_._
* _For any matrix_ \(A,B\)_,_ \(\kappa(A)\leq\kappa(AB)\cdot\kappa(B)\)_._
* _For any matrix_ \(A,B\)_,_ \(\kappa(AB)\leq\kappa(A)\cdot\kappa(B)\)_._
**Definition A.5** (Stable Rank).: _Given a matrix \(K\in\mathbb{R}^{n\times n}\), we define its stable rank, denoted by \(\operatorname{srank}(K)\) as_
\[\operatorname{srank}(K):=\frac{\|K\|_{F}^{2}}{\|K\|^{2}}.\]
_Note that \(\operatorname{srank}(K)\leq\operatorname{rank}(K)\)._
**Definition A.6** (Vector tensor product, Definition 2.2 in [21]).: _Let \(x\in\mathbb{R}^{n}\) and \(y\in\mathbb{R}^{m}\)._
_We use \(x\otimes y\) to denote the tensor product of \(x\) and \(y\), and it is defined as_
\[x\otimes y:=\operatorname{vec}(xy^{\top}).\]
_We use \(x^{\otimes p}\) to represent \(x\) tensoring with itself for \(p\) times._
### Attention Backgrounds
In this section, we introduce the important background of attention.
In general, given weights \(Q,K,V\in\mathbb{R}^{d\times d}\), the input \(X\in\mathbb{R}^{n\times d}\) can be viewed as the embedding of a length-\(n\) sentence, where each length-\(d\) row vector corresponds to a word.
The attention computation is
\[D(X)^{-1}\exp(XQK^{\top}X^{\top})XV \tag{15}\]
where \(D(X)=\operatorname{diag}(\exp(XQK^{\top}X^{\top})\mathbf{1}_{n})\).
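As a concrete reference point, the small numpy snippet below evaluates Eq. (15) directly (entrywise \(\exp\) followed by the diagonal normalization); the dimensions are illustrative assumptions, and each row of the normalized matrix sums to one.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 6, 3
X = rng.standard_normal((n, d))
Q, K, V = (rng.standard_normal((d, d)) for _ in range(3))

A = np.exp(X @ Q @ K.T @ X.T)               # entrywise exp of X Q K^T X^T, n x n
D = np.diag(A @ np.ones(n))                 # D(X) = diag(exp(X Q K^T X^T) 1_n)
attention = np.linalg.solve(D, A @ X @ V)   # D(X)^{-1} exp(X Q K^T X^T) X V, n x d

assert np.allclose(np.linalg.solve(D, A).sum(axis=1), 1.0)  # rows are normalized
```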
In [1, 21], they simplify the computation by treating \(XQ\), \(XK\), and \(XV\) as \(Q,K,V\in\mathbb{R}^{n\times d}\) so that they get
\[D^{-1}\exp(QK^{\top})V \tag{16}\]
where \(D=\text{diag}(\exp(QK^{\top})\mathbf{1}_{n})\).
Figure 1: The visualization of the matrix \(D(X)\in\mathbb{R}^{n\times n}\). Given \(Q,K,V\in\mathbb{R}^{d\times d}\) and \(X\in\mathbb{R}^{n\times d}\), we first compute \(XQK^{\top}X^{\top}\in\mathbb{R}^{n\times n}\). Then, we find \(\exp(XQK^{\top}X^{\top})\in\mathbb{R}^{n\times n}\). After that, we multiply \(\exp(XQK^{\top}X^{\top})\in\mathbb{R}^{n\times n}\) with the vector \(\mathbf{1}_{n}\in\mathbb{R}^{n}\). Finally, we use \(\operatorname{diag}(\cdot)\) to transform \(\exp(XQK^{\top}X^{\top})\mathbf{1}_{n}\in\mathbb{R}^{n}\) into a diagonal matrix, which is \(D(X)\in\mathbb{R}^{n\times n}\). In this figure, green matrices/vectors represent the terms that are given; the purple matrix represents the term after one operation; the red vector represents the term after two operations; the blue matrix represents the term after three operations.
Furthermore, in [10], they simplify the attention by another strategy, namely
\[D^{-1}\exp(QK^{\top}), \tag{17}\]
where \(D,Q,K\) are defined same as above.
In addition, in [14], the attention is simplified into the form of
\[D^{-1}\exp(KK^{\top}), \tag{18}\]
where \(D,Q,K\) (with \(K=Q\)) are defined the same as above.
In this work, we provide a simplification of Eq. (15) from a different perspective, by ignoring the factor of \(D^{-1}\) and \(\exp\), so that we can get
\[XQK^{\top}X^{\top}XV\]
Further, we merge \(QK^{\top}\) into one matrix \(W\) and consider one column of \(V\) at a time,
\[XWX^{\top}Xv \tag{19}\]
where \(v\) is a column of \(V\).
Thus, we can obtain the following definition of attention computation.
**Definition A.7**.: _Given \(X\in\mathbb{R}^{n\times d}\), \(W\in\mathbb{R}^{d\times d}\) and \(y\in\mathbb{R}^{n}\), the goal is to solve the following regression problem_
\[\min_{v\in\mathbb{R}^{d}}\|XWX^{\top}X\cdot v-y\|_{2}^{2}\]
Figure 2: The visualization of the attention computation (see Eq. (15)). Since we present the visualization of how we get \(D(X)\in\mathbb{R}^{n\times n}\) and \(\exp(XQK^{\top}X^{\top})\in\mathbb{R}^{n\times n}\) in Figure 1, we regard them as given. Moreover, we are also given \(V\in\mathbb{R}^{d\times d}\) and \(X\in\mathbb{R}^{n\times d}\). We compute their product, namely \(D(X)^{-1}\exp(XQK^{\top}X^{\top})XV\). In this figure, green matrices represent the terms that are given, and the purple matrix represents the term after one operation.
**Lemma A.8**.: _Solving the problem in Definition A.7 is equivalent to solving the problem in Definition A.9._
Proof.: Let \(W=UU^{\top}\).
We define \(\widetilde{X}\) and \(\widetilde{v}\) as follows
\[\widetilde{X} :=XU\] \[\widetilde{v} :=U^{-1}v\]
Then we have \(X=\widetilde{X}U^{-1}\).
\[\min_{v\in\mathbb{R}^{d}}\|XWX^{\top}X\cdot v-y\|_{2}^{2}\]
is equivalent to
\[\min_{v\in\mathbb{R}^{d}}\|\widetilde{X}\widetilde{X}^{\top}\widetilde{X}U^{- 1}v-y\|_{2}^{2}\]
is equivalent to
\[\min_{\widetilde{v}\in\mathbb{R}^{d}}\|\widetilde{X}\widetilde{X}^{\top} \widetilde{X}\widetilde{v}-y\|_{2}^{2}\]
Here we use that \(U\) is full rank.
The above problem is equivalent to solving the following problem
**Definition A.9**.: _Given \(X\in\mathbb{R}^{n\times d}\), \(y\in\mathbb{R}^{n}\). The goal is to solve_
\[\min_{v\in\mathbb{R}^{d}}\|XX^{\top}Xv-y\|_{2}^{2}\]
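The change of variables in the proof of Lemma A.8 can be checked numerically. Below is a small sketch under the implicit assumption that \(W=UU^{\top}\) with \(U\) invertible, taking \(U\) to be a Cholesky factor of a random positive definite \(W\); the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 50, 4
X = rng.standard_normal((n, d))
M = rng.standard_normal((d, d))
W = M @ M.T                       # positive definite, so W = U U^T with U invertible
U = np.linalg.cholesky(W)

v = rng.standard_normal(d)
X_tilde = X @ U                   # the substitution from the proof of Lemma A.8
v_tilde = np.linalg.solve(U, v)   # v_tilde = U^{-1} v

lhs = X @ W @ X.T @ X @ v                        # X W X^T X v as in Definition A.7
rhs = X_tilde @ X_tilde.T @ X_tilde @ v_tilde    # X~ X~^T X~ v~ as in Definition A.9
assert np.allclose(lhs, rhs)
```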
## Appendix B Preliminary about Exp of Inner Product Kernel
In Section B.1, we provide a formal definition of the attention kernel. In Section B.2, we analyze the properties of the attention kernel.
### Definition of Attention Kernel
Here, we start to present the definitions of the Gaussian Kernel and the Attention Kernel.
In this work, we will focus on the \(\exp(\langle\cdot,\cdot\rangle)\) inner product kernel. Since it is similar to the Gaussian kernel, let us first review the definition of the Gaussian kernel.
**Definition B.1** (Gaussian Kernel ).: _Let \(x,y\in\mathbb{R}^{d}\) be two data points._
_Let \(X\in\mathbb{R}^{d\times n}\)._
_Let \(x_{i}\) be the \(i\)-th column of \(X\)._
_Let \(x_{j}\) be the \(j\)-th column of \(X\)._
_We say \(G\) is the Gaussian kernel between \(x\) and \(y\) if_
\[G(x,y)=\exp(-\|x-y\|_{2}^{2}/2).\]
_We say \(G\) is the Gaussian kernel on \(X\) if_
\[G_{i,j}=\exp(-\|x_{i}-x_{j}\|_{2}^{2}/2).\]
Figure 4: The visualization of the simplified version of attention computation in [1, 2] (see Eq. (16)). Since we present the visualization of how we get \(D\in\mathbb{R}^{n\times n}\) and \(\exp(QK^{\top})\in\mathbb{R}^{n\times n}\) in Figure 3, we regard them as given. Moreover, we are also given \(V\in\mathbb{R}^{n\times d}\). We compute their product, namely \(D^{-1}\exp(QK^{\top})V\in\mathbb{R}^{n\times d}\). In this figure, green matrices represent the terms that are given, and the purple matrix represents the term after one operation.
We define the Attention kernel as follows
**Definition B.2** (Attention Kernel).: _Let \(x,y\in\mathbb{R}^{d}\) be two data points._
_Let \(X\in\mathbb{R}^{d\times n}\)._
_Let \(x_{i}\) be the \(i\)-th column of \(X\)._
_Let \(x_{j}\) be the \(j\)-th column of \(X\)._
_We say \(G\) is the Attention kernel between \(x\) and \(y\) if_
\[G(x,y)=\exp(\langle x,y\rangle).\]
_We say \(G\) is the Attention kernel on \(X\) if_
\[G_{i,j}=\exp(\langle x_{i},x_{j}\rangle).\]
Figure 5: The visualization of the simplified version of attention computation in [11] (see Eq. (17)). Since we present the visualization of how we get \(D\in\mathbb{R}^{n\times n}\) and \(\exp(QK^{\top})\in\mathbb{R}^{n\times n}\) in Figure 3, we regard them as given. We compute their product, namely \(D^{-1}\exp(QK^{\top})\in\mathbb{R}^{n\times n}\). In this figure, green matrices represent the terms that are given, and the purple matrix represents the term after one operation.
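The two kernels are related by a diagonal rescaling, since \(\exp(-\|x_{i}-x_{j}\|_{2}^{2}/2)=\exp(-\|x_{i}\|_{2}^{2}/2)\cdot\exp(\langle x_{i},x_{j}\rangle)\cdot\exp(-\|x_{j}\|_{2}^{2}/2)\). The short numpy check below instantiates Definition B.1 and Definition B.2 on a small random \(X\) and verifies this relation; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 4, 6
X = rng.standard_normal((d, n))   # columns x_1, ..., x_n, as in Definitions B.1 and B.2

gram = X.T @ X                    # entries <x_i, x_j>
sq_norms = np.diag(gram)
attn = np.exp(gram)                                                      # Definition B.2
gauss = np.exp(-(sq_norms[:, None] + sq_norms[None, :] - 2 * gram) / 2)  # Definition B.1

# The Gaussian kernel is the attention kernel conjugated by a diagonal matrix.
D = np.diag(np.exp(-sq_norms / 2))
assert np.allclose(gauss, D @ attn @ D)
```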
### Property of Attention Kernel
After we define the attention kernel, we start to analyze its properties.
**Fact B.3**.: _Let \(B\) be a PSD matrix in \(\mathbb{R}^{n\times n}\)._
_Then we have_
\[B_{i,i}B_{j,j}\geq B_{i,j}^{2},\quad\forall i,j\in[n]\times[n]\]
Proof.: Let \(x=a\cdot e_{i}+b\cdot e_{j}\).
For all arbitrary \(a,b\in\mathbb{R}\), we can get
\[0 \leq x^{\top}Bx\] \[=\begin{bmatrix}a&b\end{bmatrix}\begin{bmatrix}B_{i,i}&B_{i,j}\\ B_{j,i}&B_{j,j}\end{bmatrix}\begin{bmatrix}a\\ b\end{bmatrix}, \tag{20}\]
where the first step follows from the fact that \(B\) is a PSD matrix and the second step follows from expanding the equation.
Note that for all arbitrary \(a,b\in\mathbb{R}\), Eq. (20) holds.
Therefore,
\[\det(\begin{bmatrix}B_{i,i}&B_{i,j}\\ B_{j,i}&B_{j,j}\end{bmatrix})\geq 0\]
which is equivalent to saying that \(B_{i,i}B_{j,j}\geq B_{i,j}^{2}\).
**Lemma B.4**.: _Let \(X\in\mathbb{R}^{n\times d}\)._
_We define Attention kernel_
\[A:=\exp(XX^{\top})\in\mathbb{R}^{n\times n}\]
_where \(\exp(\cdot)\) is applied entrywise._
_Let \(\epsilon\in(0,0.1)\) and \(r>0\) satisfying the following conditions_
* _Condition 1._ \(\epsilon\leq\frac{1}{4}\exp(-4r)\)_._
* _Condition 2._ \(A_{i,j}\in[\exp(-r),\exp(r)]\) _for_ \(i,j\in[n]\times[n]\)_._
* _Condition 3._ \((1-\epsilon)\cdot A\preceq B\preceq(1+\epsilon)\cdot A\)_._
_Then, we have_
\[B_{i,j}\in[(1-\sqrt{\epsilon})\exp(-r),(1+\epsilon)\exp(r)].\]
Proof.: \(B-(1-\epsilon)\cdot A\) is a PSD matrix.
Thus, we have
\[|B_{i,j}-(1-\epsilon)\cdot A_{i,j}| \leq\sqrt{(B_{i,i}-(1-\epsilon)\cdot A_{i,i})(B_{j,j}-(1-\epsilon)\cdot A_{j,j})}\] \[\leq 2\epsilon\sqrt{A_{i,i}A_{j,j}}\] \[\leq 2\epsilon\exp(r) \tag{21}\]
Figure 7: The visualization of the simplified version of attention computation that we analyze in this paper (see Eq. (19)). We are given that \(X\in\mathbb{R}^{n\times d}\), \(W\in\mathbb{R}^{d\times d}\), and \(v\in\mathbb{R}^{d}\). First, we compute the product of the matrices, namely \(XWX^{\top}X\in\mathbb{R}^{n\times d}\). Then, we multiply \(XWX^{\top}X\in\mathbb{R}^{n\times d}\) with the vector \(v\in\mathbb{R}^{d}\), which gives us \(XWX^{\top}Xv\in\mathbb{R}^{n}\). In this figure, green matrices represent the terms that are given; the purple matrix represents the term after one operation; the red matrix represents the term after two operations.
where the 1st step is by Fact B.3, the 2nd step is due to \(B_{i,i}\leq(1+\epsilon)A_{i,i}\) (by condition 3 in the lemma statement), and the 3rd step is because of condition 2 in the lemma statement.
Based on the second condition from the Lemma statement, we have
\[(1-\epsilon)\cdot A_{i,j}\in[(1-\epsilon)\cdot\exp(-r),(1-\epsilon)\cdot\exp( r)], \tag{22}\]
Combining Eq. (21) and Eq. (22), we have
\[B_{i,j}\in[(1-\epsilon)\exp(-r)-2\epsilon\exp(r),(1+\epsilon)\exp(r)] \tag{23}\]
By using the fact that \((1+\epsilon)\cdot A-B\) is a PSD matrix, we may apply a symmetric argument to obtain:
\[B_{i,j}\in[(1+\epsilon)\exp(-r)-2\epsilon\exp(r),(1+3\epsilon)\exp(r)]. \tag{24}\]
The intersection of Eq. (23) and Eq. (24) gives us:
\[B_{i,j}\in[(1+\epsilon)\exp(-r)-2\epsilon\exp(r),(1+\epsilon)\exp(r)]\]
We know that
\[(1+\epsilon)\exp(-r)-2\epsilon\exp(r) \geq \exp(-r)-2\epsilon\exp(r)\] \[= \exp(-r)-2\sqrt{\epsilon}\cdot\sqrt{\epsilon}\cdot\exp(r)\] \[\geq \exp(-r)-2\cdot\sqrt{\epsilon}\cdot\frac{1}{2}\exp(-2r)\cdot\exp(r)\] \[= \exp(-r)-\sqrt{\epsilon}\exp(-r)\] \[= (1-\sqrt{\epsilon})\cdot\exp(-r)\]
where the first step follows from \(\epsilon\geq 0\), the second step follows from simple algebra, the third step follows from \(\sqrt{\epsilon}\leq\frac{1}{2}\exp(-2r)\) (by condition 1 in Lemma statement), the fourth step follows from simple algebra, and the last step follows from simple algebra.
Thus, we have
\[B_{i,j}\in[(1-\sqrt{\epsilon})\exp(-r),(1+\epsilon)\exp(r)].\]
This completes the proof.
## Appendix C Preliminary about Standard Sketching
In Section C.1, we provide two formal definitions of sketching matrices. In Section C.2, we introduce the formal definition of "subspace embedding". In Section C.3, we analyze the subspace embedding property achieved by different sketching matrices. In Section C.4, we provide the definition of the Frobenius norm approximate matrix product.
### Sketching Matrices
Now, we define subsampled randomized Hadamard transform (SRHT) as follows.
**Definition C.1** (Subsampled Randomized Hadamard Transform (SRHT), see [13, 14]).: _Let \(H\) be the Hadamard matrix in \(\mathbb{R}^{d\times d}\) (see Definition A.3)._
_Let \(D\) be a diagonal matrix in \(\mathbb{R}^{d\times d}\), which satisfies that each diagonal entry is either \(-1\) or \(1\) with the same probability._
_Let \(P\) be a matrix in \(\{0,1\}^{m\times d}\), where each row of \(P\) contains only one \(1\) at a random entry._
_We define the matrix \(S\in\mathbb{R}^{m\times d}\) as_
\[S:=\frac{1}{\sqrt{m}}PHD\]
_and call \(S\) the_ SRHT _matrix._
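As a concrete illustration (our own; it assumes \(d\) is a power of two, that `scipy` is available, and that the Hadamard matrix of Definition A.3 is the unnormalized \(\pm 1\) matrix), here is a minimal numpy construction of an SRHT matrix, together with a sanity check that it roughly preserves the norm of a fixed vector.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(m: int, d: int, rng) -> np.ndarray:
    """S = (1/sqrt(m)) P H D as in Definition C.1 (d must be a power of two)."""
    H = hadamard(d).astype(float)                    # +/-1 Hadamard matrix, H H^T = d I
    D = np.diag(rng.choice([-1.0, 1.0], size=d))     # random signs on the diagonal
    P = np.zeros((m, d))
    P[np.arange(m), rng.integers(0, d, size=m)] = 1  # one 1 per row at a random position
    return P @ H @ D / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 256, 64
S = srht(m, d, rng)
x = rng.standard_normal(d)
print(np.linalg.norm(S @ x) / np.linalg.norm(x))     # concentrates around 1
```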
We define TensorSRHT as follows:
**Definition C.2** (Tensor Subsampled Randomized Hadamard Transform (TensorSRHT), Definition 2.9 in [13]).: _Let \(P\) be a matrix in \(\{0,1\}^{m\times d}\), where each row of \(P\) contains only one \(1\) at a random entry. \(P\) can be regarded as the sampling matrix._
_Let \(H\) be the Hadamard matrix in \(\mathbb{R}^{d\times d}\) (see Definition A.3)._
_Let \(D_{1}\) and \(D_{2}\) be diagonal matrices in \(\mathbb{R}^{d\times d}\), which satisfy that each diagonal entry is either \(-1\) or \(1\) with the same probability. \(D_{1}\) and \(D_{2}\) are independent._
_Then, we define_ TensorSRHT _as a function \(S:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), which is defined as_
\[S:=\frac{1}{\sqrt{m}}P\cdot(HD_{1}\otimes HD_{2}).\]
### Subspace embedding
We define subspace embedding as follows.
**Definition C.3** (Subspace embedding \(\mathsf{SE}(n,d,\epsilon,\delta)\)).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\), we say \(S\) is an \((n,d,\epsilon,\delta)\) subspace embedding for \(A\), if_
\[\Pr[\forall x\in\mathbb{R}^{d}:\ (1-\epsilon)\cdot\|Ax\|_{2}\leq\|SAx\|_{2}\leq(1+\epsilon)\cdot\|Ax\|_{2}]\geq 1-\delta.\]
We define a more general version of subspace embedding; it can be viewed as a variation of Definition 2 in [1].
**Definition C.4** (Stable Rank Subspace Embedding \(\mathsf{SSE}(n,d,\epsilon,\delta,\mu)\)).: _Given \(\epsilon\), \(\delta\), \(\mu>0\) and integers \(d\), \(n\geq 1\), an \((n,d,\epsilon,\delta,\mu)\)-Stable Rank Subspace Embedding (\(\mathsf{SSE}\)) is a distribution \(\mathcal{D}\) over \(m\times n\) matrices (for arbitrary \(m\)) such that, for \(A\in\mathbb{R}^{n\times d}\) with \(\operatorname{srank}(A^{\top}A)\leq\mu\), the following holds:_
\[\Pr_{\Pi\sim D}[(1-\epsilon)\cdot(A^{\top}A)\preceq A^{\top}\Pi ^{\top}\Pi A\preceq(1+\epsilon)\cdot(A^{\top}A)]\geq 1-\delta.\]
### Subspace embedding by different matrices
In this section, we analyze which sketching matrices yield subspace embeddings and at what cost.
**Lemma C.5**.: _Given a matrix \(A\in\mathbb{R}^{n\times d}\), the matrix \(S\in\mathbb{R}^{m\times n}\) is an \(\mathsf{SE}(n,d,\epsilon,\delta)\) subspace embedding if either of the following conditions holds_
* _Let_ \(S\in\mathbb{R}^{m\times n}\) _denote a_ \(\mathsf{SRHT}\) _matrix with_ \(m=O(\epsilon^{-2}d\log(n/\delta))\) _rows. In addition,_ \(SA\) _can be computed in_ \(O(nd\log(n/\delta))\) _time._
* _Let_ \(S\in\mathbb{R}^{m\times n}\) _denote_ \(\mathsf{OSNAP}\) _matrix with_ \(m=O(\epsilon^{-2}d\log(d/\delta))\) _rows and column sparsity_ \(s=O(\epsilon^{-1}\log(d/\delta))\)_. In addition,_ \(SA\) _can be computed in_ \(O(s\cdot\mathrm{nnz}(A))\) _time._
### Approximate Matrix Product
Here, we define the Frobenius Norm Approximate Matrix Product as follows.
**Definition C.6** (Frobenius Norm Approximate Matrix Product).: _Let \(A\in\mathbb{R}^{n\times d_{1}},B\in\mathbb{R}^{d_{2}\times n}\) be two matrices. We say \(S\) is Frobenius Norm Approximate Matrix Product \(\mathsf{FAMP}(n,\epsilon,\delta)\) with respect to \(A,B\) if_
\[\Pr[\|A^{\top}B-A^{\top}S^{\top}SB\|_{F}\leq\epsilon\|A\|_{F}\|B\|_{F}]\geq 1-\delta\]
## Appendix D Preliminary about High Precision Sketching
In Section D.1, we collect additional useful facts. In Section D.2, we present a small sanity-check lemma. In Section D.3, we analyze well-conditioned PSD regression. In Section D.4, we analyze the forward error for simple matrices. In Section D.5, we study the forward error for PSD matrices.
### Facts
In this section, we introduce more facts.
**Fact D.1**.: _The following two conditions are equivalent_
* _for all unit vector_ \(x\)_,_ \((1-\epsilon)\leq\|Ax\|_{2}^{2}\leq(1+\epsilon)\)__
* \(\|A^{\top}A-I\|\leq\epsilon\)__
Proof.: We know that,
\[\|A^{\top}A-I\|\leq\epsilon\]
is equivalent to
\[-\epsilon\leq x^{\top}(A^{\top}A-I)x\leq\epsilon,\quad\forall\|x\|_{2}=1\]
which is equivalent to
\[1-\epsilon\leq x^{\top}A^{\top}Ax\leq 1+\epsilon,\quad\forall\|x\|_{2}=1\]
which is equivalent to
\[1-\epsilon\leq\|Ax\|_{2}^{2}\leq 1+\epsilon,\quad\forall\|x\|_{2}=1\]
### Small Sanity Check Lemma
In this section, we present the small sanity check lemma.
**Lemma D.2**.: _Given a matrix \(A\in\mathbb{R}^{n\times d}\) and \(b\in\mathbb{R}^{n}\), let \(S\in\mathbb{R}^{n\times n}\) denote a sampling and rescaling diagonal matrix. Let \(\epsilon\in(0,1)\) denote an accuracy parameter._
_Suppose that_
\[(1-\epsilon)\cdot A^{\top}A\preceq A^{\top}S^{\top}SA\preceq(1+ \epsilon)\cdot A^{\top}A\]
_Let \(x^{*}\in\mathbb{R}^{d}\) denote \(\arg\min_{x}\|A^{\top}Ax-b\|_{2}^{2}\) and \(x^{\prime}\in\mathbb{R}^{d}\) denote \(\arg\min_{x}\|A^{\top}S^{\top}SAx-b\|_{2}^{2}\)._
_Then we have_
\[\|A^{\top}Ax^{\prime}-b\|_{2}\leq\epsilon\|b\|_{2}.\]
Proof.: We define
\[\Delta_{A}:=(A^{\top}A)^{1/2}(A^{\top}S^{\top}SA)^{-1}(A^{\top}A)^{1/2}-I \tag{25}\]
From the assumption in Lemma statement, we have
\[-\epsilon I\preceq\Delta_{A}\preceq\epsilon I.\]
We have
\[\|A^{\top}Ax^{\prime}-b\|_{2} =\|A^{\top}A(A^{\top}S^{\top}SA)^{-1}b-b\|_{2}\] \[=\|(A^{\top}A(A^{\top}S^{\top}SA)^{-1}-I)b\|_{2}\] \[=\|(A^{\top}A)^{1/2}((A^{\top}A)^{1/2}(A^{\top}S^{\top}SA)^{-1}(A ^{\top}A)^{1/2}-I)(A^{\top}A)^{-1/2}b\|_{2}\] \[=\|(A^{\top}A)^{1/2}\Delta_{A}(A^{\top}A)^{-1/2}b\|_{2},\]
where the first step comes from the definition of \(x^{\prime}\) (see the Lemma statement), the second and third steps are due to simple algebra, and the last step is by the definition of \(\Delta_{A}\) (see Eq. (25)).
Then we have
\[\|(A^{\top}A)^{1/2}\Delta_{A}(A^{\top}A)^{-1/2}b\|_{2}^{2}\] \[= b^{\top}(A^{\top}A)^{-1/2}\Delta_{A}(A^{\top}A)^{1/2}\cdot(A^{ \top}A)^{1/2}\Delta_{A}(A^{\top}A)^{-1/2}b\] \[= b^{\top}(A^{\top}A)^{-1/2}\Delta_{A}A^{\top}A\Delta_{A}(A^{\top }A)^{-1/2}b\] \[\leq \epsilon^{2}b^{\top}(A^{\top}A)^{-1/2}A^{\top}A(A^{\top}A)^{-1/2}b\] \[= \epsilon^{2}\|b\|_{2}^{2}\]
where the first step follows from the definition of the \(\ell_{2}\) norm, the second step follows from simple algebra, the third step follows from \(\Delta_{A}A^{\top}A\Delta_{A}\preceq\epsilon^{2}A^{\top}A\), and the last step follows from simple algebra (associative property of matrix multiplication).
Finally, we obtain,
\[\|A^{\top}Ax^{\prime}-b\|_{2}\leq\epsilon\|b\|_{2}.\]
Thus, we complete the proof.
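A quick numerical illustration of Lemma D.2 (our own sketch): a dense Gaussian sketch stands in for the sampling-and-rescaling matrix, since the lemma only uses the spectral sandwich in its hypothesis; solving the sketched normal equations then yields a small residual against \(b\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 3000, 12
A = rng.standard_normal((n, d))
b = rng.standard_normal(d)

# Gaussian sketch; what matters for the lemma is only that
# (1 - eps) A^T A <= (SA)^T (SA) <= (1 + eps) A^T A holds.
m = 100 * d
S = rng.standard_normal((m, n)) / np.sqrt(m)

SA = S @ A
x_prime = np.linalg.solve(SA.T @ SA, b)   # minimizer of ||A^T S^T S A x - b||_2
rel = np.linalg.norm(A.T @ (A @ x_prime) - b) / np.linalg.norm(b)
print(rel)                                # small, on the order of the sketch distortion
```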
### Well-conditioned PSD Regression
In this section, we consider the property of positive semidefinite (PSD) matrix.
**Lemma D.3** (Well-conditioned PSD regression, Lemma B.2 in [1]).: _Consider the regression problem_
\[\min_{x\in\mathbb{R}^{d}}\|Bx-y\|_{2}^{2}.\]
_Suppose \(B\in\mathbb{R}^{d\times d}\) is a PSD matrix with_
\[\frac{3}{4}\leq\|Bx\|_{2}\leq\frac{5}{4},\ \ \ \ \forall x\text{ such that }\|x\|_{2}=1.\]
_Using gradient descent update_
\[x_{t+1}=x_{t}-B^{\top}(Bx_{t}-y).\]
_Then, after \(t\) iterations, we obtain_
\[\|B(x_{t}-x^{*})\|_{2}\leq c^{t}\|B(x_{0}-x^{*})\|_{2}\]
_for some constant \(c\in(0,0.9]\)._
Proof.: The gradient at time \(t\) is \(B^{\top}(Bx_{t}-y)\) and
\[x_{t+1}=x_{t}-B^{\top}(Bx_{t}-y), \tag{26}\]
so we have
\[\|Bx_{t+1}-Bx^{*}\|_{2} =\|B(x_{t}-B^{\top}(Bx_{t}-y))-Bx^{*}\|_{2}\] \[=\|B(x_{t}-x^{*})-BB^{\top}Bx_{t}+BB^{\top}Bx^{*}\|_{2}\] \[=\|B(x_{t}-x^{*})-BB^{\top}B(x_{t}-x^{*})\|_{2}\] \[=\|(I-BB^{\top})B(x_{t}-x^{*})\|_{2}\] \[\leq\|(I-BB^{\top})\|\cdot\|B(x_{t}-x^{*})\|_{2}\] \[\leq\frac{9}{16}\|B(x_{t}-x^{*})\|_{2},\]
where the first step follows from Eq. (26), the second step follows from \(B^{\top}Bx^{*}=B^{\top}y\), the third step follows from simple algebra, the fourth step follows from simple algebra, the fifth step follows from \(\|Ax\|_{2}\leq\|A\|\cdot\|x\|_{2}\), and the last step follows from the fact that the eigenvalue of \(BB^{\top}\) belongs to \([\frac{9}{16},\frac{25}{16}]\) by our assumption.
Thus we complete the proof.
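The contraction of Lemma D.3 is easy to observe numerically; the following minimal numpy sketch (illustration only) builds a symmetric \(B\) with eigenvalues in \([3/4,5/4]\) and runs the stated update.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
# Symmetric B with eigenvalues in [3/4, 5/4], so ||Bx||_2 in [3/4, 5/4] for unit x.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
B = Q @ np.diag(rng.uniform(0.75, 1.25, size=d)) @ Q.T

y = rng.standard_normal(d)
x_star = np.linalg.solve(B, y)           # exact minimizer of ||Bx - y||_2^2

x = np.zeros(d)
for t in range(30):
    x = x - B.T @ (B @ x - y)            # gradient-descent update of Lemma D.3
print(np.linalg.norm(B @ (x - x_star)))  # shrinks roughly like (9/16)^t
```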
### Forward Error for Simple Matrices
In this section, we analyze the forward error for simple matrices.
**Lemma D.4** (Lemma 5.5 in [11]).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\), a vector \(b\in\mathbb{R}^{n}\). Suppose there is a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|Ax^{\prime}-b\|_{2}\leq(1+\epsilon_{1})\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{2}\]
_Let \(x_{*}\) denote the exact solution to the regression problem, then it holds that_
\[\|x^{\prime}-x_{*}\|_{2}\leq O(\sqrt{\epsilon_{1}})\cdot\frac{1}{\sigma_{\min }(A)}\cdot\|Ax_{*}-b\|_{2}.\]
For completeness, we provide a proof.
Proof.: Note that
\[\|Ax^{\prime}-Ax_{*}\|_{2}=\|Ax^{\prime}-b-(Ax_{*}-b)\|_{2},\]
so we can perform the following decomposition:
\[\|A(x^{\prime}-x_{*})\|_{2}^{2} =\|Ax^{\prime}-b-(Ax_{*}-b)\|_{2}^{2}\] \[=\|Ax^{\prime}-b\|_{2}^{2}-\|Ax_{*}-b\|_{2}^{2}\] \[\leq(1+\epsilon_{1})^{2}\|Ax_{*}-b\|_{2}^{2}-\|Ax_{*}-b\|_{2}^{2}\] \[\leq 4\epsilon_{1}\cdot\|Ax_{*}-b\|_{2}^{2}, \tag{27}\]
where the first step follows from simple algebra, the second step follows from the Pythagorean theorem, the third step follows from the assumption in Lemma statement, and the fourth step follows from simple algebra.
Assuming \(A\) has full column rank, then \(A^{\dagger}A=I\).
Therefore, we have
\[\|x^{\prime}-x_{*}\|_{2} =\|A^{\dagger}A(x^{\prime}-x_{*})\|_{2}\] \[\leq\|A(x^{\prime}-x_{*})\|_{2}\cdot\|A^{\dagger}\|\] \[\leq 2\sqrt{\epsilon_{1}}\cdot\|Ax_{*}-b\|_{2}\cdot\|A^{\dagger}\|\] \[=\frac{2\sqrt{\epsilon_{1}}}{\sigma_{\min}(A)}\cdot\|Ax_{*}-b\|_ {2},\]
where the first step follows from \(A^{\dagger}A=I\), the second step follows from \(\|A^{\dagger}y\|_{2}\leq\|A^{\dagger}\|\cdot\|y\|_{2}\), the third step follows from Eq. (27), and the last step follows from \(\|A^{\dagger}\|=\frac{1}{\sigma_{\min}(A)}\).
### Forward Error for PSD Matrices
In this section, we study the forward error for PSD matrices.
**Lemma D.5** (PSD version of Lemma D.4).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\), a vector \(b\in\mathbb{R}^{d}\). Suppose there is a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|A^{\top}Ax^{\prime}-b\|_{2}\leq\epsilon_{2}\|b\|_{2}\]
_Let \(x_{*}\) denote the exact solution to the regression problem, then it holds that_
\[\|x^{\prime}-x_{*}\|_{2}\leq\epsilon_{2}\cdot\frac{1}{\sigma_{\min}(A)^{2}} \cdot\|b\|_{2}.\]
For completeness, we provide a proof.
Proof.: Note that
\[\|A^{\top}Ax^{\prime}-A^{\top}Ax_{*}\|_{2}=\|(A^{\top}Ax^{\prime}-b)-(A^{\top }Ax_{*}-b)\|_{2},\]
so we can perform the following decomposition:
\[\|A^{\top}A(x^{\prime}-x_{*})\|_{2}^{2}=\|(A^{\top}Ax^{\prime}-b)-(A^{\top}Ax _{*}-b)\|_{2}^{2}\]
\[= \|A^{\top}Ax^{\prime}-b\|_{2}^{2}-\|A^{\top}Ax_{*}-b\|_{2}^{2}\] \[\leq \epsilon_{2}^{2}\|b\|_{2}^{2} \tag{28}\]
where the first step follows from simple algebra, the second step follows from the Pythagorean theorem, the third step follows from the assumption in Lemma statement.
Assuming \(A\) has full column rank, then \((A^{\top}A)^{\dagger}(A^{\top}A)=I\).
Therefore, we have
\[\|x^{\prime}-x_{*}\|_{2} = \|(A^{\top}A)^{\dagger}A^{\top}A(x^{\prime}-x_{*})\|_{2}\] \[\leq \|A^{\top}A(x^{\prime}-x_{*})\|_{2}\cdot\|(A^{\top}A)^{\dagger}\|\] \[\leq \epsilon_{2}\cdot\|b\|_{2}\cdot\|(A^{\top}A)^{\dagger}\|\] \[= \epsilon_{2}\cdot\sigma_{\min}(A)^{-2}\|b\|_{2}\]
where the first step follows from \((A^{\top}A)^{\dagger}(A^{\top}A)=I\), the second step follows from \(\|(A^{\top}A)^{\dagger}y\|_{2}\leq\|(A^{\top}A)^{\dagger}\|\cdot\|y\|_{2}\), the third step follows from Eq. (28), and the last step follows from Fact A.4.
## Appendix E From Attention Regression to Multiple Matrices Regression
In Section E.1, we introduce the background of the attention matrices. In Section E.2, we analyze the equivalence between \(\mathsf{mid}\) version linear attention and \(\mathsf{right}\) version linear attention.
### Background on Attention Matrix
In this section, we review the standard attention computation model, e.g., see [1] as an example. We define attention matrix and attention computation as follows,
**Definition E.1** (Attention Computation).: _Given three matrices \(Q,K,V\in\mathbb{R}^{n\times d}\), the attention computation outputs the following matrix_
\[\mathrm{Att}(Q,K,V):=D^{-1}AV\]
_where \(A\in\mathbb{R}^{n\times n}\)_
\[A:=\exp(QK^{\top})\]
_and \(D\in\mathbb{R}^{n\times n}\) is a diagonal matrix_
\[D:=\mathrm{diag}(A\mathbf{1}_{n}).\]
In actual Large Language Models (LLMs), we instead consider the following computation problem.
**Definition E.2** (An alternative Attention Computation).: _Given three matrices \(Q,K,V\in\mathbb{R}^{d\times d}\)._
_For any input \(X\in\mathbb{R}^{n\times d}\), we can define function \(\mathrm{Att}:\mathbb{R}^{n\times d}\rightarrow\mathbb{R}^{n\times d}\)_
\[\mathrm{Att}(X):=D^{-1}AXV\]
_where \(A\in\mathbb{R}^{n\times n}\)_
\[A:=\exp(XQK^{\top}X^{\top})\]
_and \(D\in\mathbb{R}^{n\times n}\) is a diagonal matrix_
\[D:=\mathrm{diag}(A\mathbf{1}_{n}).\]
Mathematically, we can merge \(QK^{\top}\in\mathbb{R}^{d\times d}\) into a single unknown \(d\times d\) matrix, which we call \(U\in\mathbb{R}^{d\times d}\).
**Definition E.3** (Soft-max Attention Regression).: _Let \(\exp(\cdot)\) denote the entry-wise exponential function. Given data points \(\{x_{1},\cdots,x_{n}\}\subset\mathbb{R}^{d}\), let \(X=\begin{bmatrix}x_{1}&x_{2}&\cdots&x_{n}\end{bmatrix}^{\top}\in\mathbb{R}^{n\times d}\) denote the data matrix, and let \(Y\in\mathbb{R}^{n\times d}\) denote the labels corresponding to \(x_{1},\cdots,x_{n}\). Let \(V\in\mathbb{R}^{d\times d}\) denote the value weight matrix._
_For \(U\in\mathbb{R}^{d\times d}\), we define the loss function_
\[L(U,V):=\|Y-D^{-1}AXV\|_{F}^{2}.\]
_where matrix \(A\in\mathbb{R}^{n\times n}\) and diagonal matrix \(D\in\mathbb{R}^{n\times n}\) is defined as follows:_
\[A:=\exp(XUX^{\top}),\quad D=\operatorname{diag}(A\mathbf{1}_{n}).\]
If we drop the nonlinear units in the regression problem (i.e., \(D^{-1}\exp(\cdot)\), the softmax operator), then we obtain
\[L(U,V)=\|Y-XUX^{\top}XV\|_{F}^{2}.\]
Usually, the standard way to solve this minimization is alternating minimization [13, 12]: in one step we fix \(U\) and solve for \(V\), and in the other step we fix \(V\) and solve for \(U\). This motivates us to define Definition E.4 and Definition E.5.
### Equivalence Between \(\mathsf{mid}\) Version Linear Attention and \(\mathsf{right}\) Version Linear Attention
We define a simplified version of the attention regression problem, which is essentially a linear model. We call it \(\mathsf{mid}\) because the unknown variable we are trying to recover lies in the middle.
**Definition E.4** (Linear Attention Regression \(\mathsf{mid}\) version).: _Given data points \(\{x_{1},\cdots,x_{n}\}\subset\mathbb{R}^{d}\), we let \(X=\begin{bmatrix}x_{1}&x_{2}&\cdots&x_{n}\end{bmatrix}^{\top}\in\mathbb{R}^{n \times d}\) denote that data matrix._
_Let \(Y\in\mathbb{R}^{n\times d}\) denote the labels corresponding to \(x_{1},\cdots,x_{n}\)._
_For \(U\in\mathbb{R}^{d\times d}\), we define the loss function_
\[L_{\mathsf{mid}}(U):=\|\underbrace{Y}_{n\times d}-\underbrace{X}_{n\times d} U\underbrace{X^{\top}}_{d\times n}\underbrace{X}_{n\times d}\|_{F}^{2}.\]
Similarly, we also define the \(\mathsf{right}\) version of linear attention regression.
**Definition E.5** (Linear Attention Regression \(\mathsf{right}\) version).: _Given data points \(\{x_{1},\cdots,x_{n}\}\subset\mathbb{R}^{d}\), we let \(X=\begin{bmatrix}x_{1}&x_{2}&\cdots&x_{n}\end{bmatrix}^{\top}\in\mathbb{R}^{n\times d}\) denote the data matrix._
_Let \(Y\in\mathbb{R}^{n\times d}\) denote the labels corresponding to \(x_{1},\cdots,x_{n}\)._
_For \(V\in\mathbb{R}^{d\times d}\), we define the loss function_
\[L_{\mathsf{right}}(V):=\|\underbrace{Y}_{n\times d}-\underbrace{X}_{n\times d }\underbrace{X^{\top}}_{d\times n}\underbrace{X}_{n\times d}\underbrace{V}_{d \times d}\|_{F}^{2}.\]
**Lemma E.6**.: _If we restrict the unknown matrix to be a PD matrix in Definition E.4 and Definition E.5, then the \(\mathsf{mid}\) version of linear attention regression and the \(\mathsf{right}\) version of linear attention regression are equivalent._
Proof.: The proof is straightforward, since the \(XU^{1/2}\) in \(L_{\mathsf{mid}}\) is the \(X\) in \(L_{\mathsf{right}}\). The \(V\) in \(L_{\mathsf{right}}\) is \(U^{-1/2}\) (where \(U\) is in \(L_{\mathsf{mid}}\)).
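For concreteness, here is a minimal numpy sketch (our own illustration) of the two objectives: it evaluates \(L_{\mathsf{mid}}(U)=\|Y-XUX^{\top}X\|_{F}^{2}\) from Definition E.4, and solves the \(\mathsf{right}\) version of Definition E.5 exactly, since for the fixed design \(M=XX^{\top}X\) the minimizer of \(\|Y-MV\|_{F}^{2}\) is an ordinary least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 8
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, d))

def loss_mid(U):
    """L_mid(U) = ||Y - X U X^T X||_F^2 (Definition E.4)."""
    return np.linalg.norm(Y - X @ U @ X.T @ X, ord="fro") ** 2

# right version: with design M = X X^T X, the minimizer of ||Y - M V||_F^2
# is the least-squares solution, computed column by column.
M = X @ X.T @ X
V_opt, *_ = np.linalg.lstsq(M, Y, rcond=None)
print("L_right at its optimum:", np.linalg.norm(Y - M @ V_opt, ord="fro") ** 2)
print("L_mid at a random U  :", loss_mid(rng.standard_normal((d, d))))
```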
## Appendix F Linear Regression
In this section, we present the fast linear regression algorithm and analyze the property of it.
**Lemma F.1** (Dense and high accuracy regression, Lemma 5.4 in [11]).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\) and a vector \(b_{1}\in\mathbb{R}^{n}\), let \(\epsilon_{1}\in(0,0.1)\) and \(\delta_{1}\in(0,0.1)\), there exists an algorithm that takes time_
\[O((nd+d^{3})\cdot\log(1/\epsilon_{1})\cdot\log^{2}(n/\delta_{1}))\]
_and outputs \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|Ax^{\prime}-b_{1}\|_{2}\leq(1+\epsilon_{1})\min_{x\in\mathbb{R}^{d}}\|Ax-b_ {1}\|_{2}\]
_holds with probability \(1-\delta_{1}\)._
Proof.: Let us analyze Algorithm 1, first on its convergence then on its runtime.
Note that the \(S\) we choose is an \((\epsilon_{\text{ose}},\delta_{\text{ose}})\)-oblivious subspace embedding. Since \(SA=QR^{-1}\) where \(Q\) is orthonormal, we know the singular values of \(AR\) are between \([1-\epsilon_{\text{ose}},1+\epsilon_{\text{ose}}]\).
Let \(AR=U\Sigma V^{\top}\) be the SVD of \(AR\) and \(z^{*}\) denote the optimal solution to the regression \(\min_{x\in\mathbb{R}^{d}}\|ARx-b_{1}\|_{2}\).
Let us consider
\[AR(z_{t+1}-z^{*}) = AR(z_{t}+R^{\top}A^{\top}(b_{1}-ARz_{t})-z^{*})\] \[= AR(z_{t}-z^{*})+ARR^{\top}A^{\top}b_{1}-ARR^{\top}A^{\top}ARz_{t}\] \[= AR(z_{t}-z^{*})+ARR^{\top}A^{\top}ARz^{*}-ARR^{\top}A^{\top}ARz_{t}\] \[= (AR-ARR^{\top}A^{\top}AR)(z_{t}-z^{*})\] \[= (U\Sigma V^{\top}-U\Sigma^{3}V^{\top})(z_{t}-z^{*}), \tag{29}\]
where the first step follows from the definition of \(z_{t+1}\) from Algorithm 1, the second step follows from simple algebra, the third step follows from the normal equation \(R^{\top}A^{\top}ARz^{*}=R^{\top}A^{\top}b_{1}\) (optimality of \(z^{*}\)), the fourth step follows from simple algebra, and the last step follows from the SVD, \(AR=U\Sigma V^{\top}\).
Therefore,
\[\|AR(z_{t+1}-z^{*})\|_{2} = \|(U\Sigma V^{\top}-U\Sigma^{3}V^{\top})(z_{t}-z^{*})\|_{2}\] \[= \|(\Sigma-\Sigma^{3})V^{\top}(z_{t}-z^{*})\|_{2}\] \[\leq O(\epsilon_{\mathrm{ose}})\cdot\|V^{\top}(z_{t}-z^{*})\|_{2}\] \[\leq \frac{O(\epsilon_{\mathrm{ose}})}{1-\epsilon_{\mathrm{ose}}}\| \Sigma V^{\top}(z_{t}-z^{*})\|_{2}\] \[= O(\epsilon_{\mathrm{ose}})\cdot\|\Sigma V^{\top}(z_{t}-z^{*})\|_{2}\] \[= O(\epsilon_{\mathrm{ose}})\cdot\|U\Sigma V^{\top}(z_{t}-z^{*})\|_ {2}\] \[= O(\epsilon_{\mathrm{ose}})\cdot\|AR(z_{t}-z^{*})\|_{2},\]
where the first step follows from Eq. (29), the second step follows from \(U^{\top}U=I\), the third step follows from \(\|AB\|\leq\|A\|\cdot\|B\|\) together with \(\|\Sigma-\Sigma^{3}\|\leq O(\epsilon_{\mathrm{ose}})\) (the singular values of \(AR\) lie in \([1-\epsilon_{\mathrm{ose}},1+\epsilon_{\mathrm{ose}}]\)), the fourth step follows from \(\sigma_{\min}(\Sigma)\geq 1-\epsilon_{\mathrm{ose}}\), the fifth step follows from \(\epsilon_{\mathrm{ose}}\in(0,0.1)\), the sixth step follows from \(U^{\top}U=I\), and the last step follows from the SVD, \(AR=U\Sigma V^{\top}\).
This means the error shrinks by a factor of \(O(\epsilon_{\mathrm{ose}})\) per iteration. After \(T=O(\log(1/\epsilon_{1}))\) iterations, we have
\[\|AR(z_{T}-z^{*})\|_{2}\leq O(\epsilon_{1})\cdot\|AR(z_{0}-z^{*})\|_{2}, \tag{30}\]
and recall for initial solution \(z_{0}\), we have
\[\|ARz_{0}-b_{1}\|_{2}\leq(1+\epsilon_{\mathrm{ose}})\cdot\|ARz^{*}-b_{1}\|_{2}.\]
The above equation implies that
\[\|ARz_{0}-b_{1}\|_{2}^{2}-\|ARz^{*}-b_{1}\|_{2}^{2}\leq O(\epsilon_{\mathrm{ose }})\|ARz^{*}-b_{1}\|_{2}^{2}. \tag{31}\]
We can wrap up the proof as follows:
\[\|ARz_{T}-b_{1}\|_{2}^{2} = \|AR(z_{T}-z^{*})\|_{2}^{2}+\|ARz^{*}-b_{1}\|_{2}^{2}\] \[\leq O(\epsilon_{1}^{2})\cdot\|AR(z_{0}-z^{*})\|_{2}^{2}+\|ARz^{* }-b_{1}\|_{2}^{2}\] \[= O(\epsilon_{1}^{2})\cdot(\|ARz_{0}-b_{1}\|_{2}^{2}-\|ARz^{*}-b_{ 1}\|_{2}^{2})+\|ARz^{*}-b_{1}\|_{2}^{2}\] \[\leq O(\epsilon_{1}^{2})\cdot(O(\epsilon_{\mathrm{ose}})\|ARz^{*}-b _{1}\|_{2}^{2})+\|ARz^{*}-b_{1}\|_{2}^{2}\] \[= (1+O(\epsilon_{1}^{2}))\cdot\|ARz^{*}-b_{1}\|_{2}^{2},\]
where the first step follows from the Pythagorean theorem, the second step follows from Eq. (30), the third step follows from the Pythagorean theorem again, the fourth step follows from Eq. (31), and the fifth step follows from \(\epsilon_{\mathrm{ose}}\leq 1\).
It remains to show the runtime. Applying \(S\) to \(A\) takes \(O(nd\log n)\) time, the QR decomposition takes \(O(m_{\mathrm{sk}}d^{2})=O(d^{3}\log^{2}(n/\delta_{\mathrm{ose}}))\) time.
Inverting the \(d\times d\) triangular factor \(R^{-1}\) takes \(O(d^{3})\) time. To solve for \(z_{0}\), we need to multiply \(SA\) with \(R\) in \(O(m_{\mathrm{sk}}d^{2})\) time and the solve takes \(O(m_{\mathrm{sk}}d^{2})\) time as well. To implement each iteration, we multiply from right to left which takes \(O(nd)\) time. Putting things together gives the desired runtime.
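A minimal numpy sketch of the preconditioned iteration behind Lemma F.1 (illustration only): for simplicity a dense Gaussian sketch replaces the SRHT of Algorithm 1, \(R\) is obtained from a QR factorization of \(SA\) so that \(SAR\) has orthonormal columns, and the iteration \(z_{t+1}=z_{t}+R^{\top}A^{\top}(b_{1}-ARz_{t})\) converges geometrically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 20
A = rng.standard_normal((n, d))
b1 = rng.standard_normal(n)

# Oblivious sketch; a dense Gaussian matrix stands in for the SRHT of Algorithm 1.
m = 50 * d
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Preconditioner: QR of SA gives SA = Q T, and we set R = T^{-1} so SAR = Q.
Q, T = np.linalg.qr(S @ A)
R = np.linalg.inv(T)

z = np.linalg.lstsq(S @ A @ R, S @ b1, rcond=None)[0]   # initial solution z_0
for _ in range(25):
    z = z + R.T @ (A.T @ (b1 - A @ (R @ z)))            # z_{t+1} = z_t + R^T A^T (b1 - A R z_t)
x = R @ z

x_star = np.linalg.lstsq(A, b1, rcond=None)[0]
print(np.linalg.norm(x - x_star) / np.linalg.norm(x_star))   # tiny after a few iterations
```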
## Appendix G Fast PSD regression solver
In this section, we offer the fast PSD regression algorithm and analyze the property of it.
**Lemma G.1** (Formal version of Lemma 4.2, Lemma B.1 in [1]).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\), let \(\kappa\) denote the condition number of \(A\), i.e. \(\kappa=\sigma_{\max}(A)/\sigma_{\min}(A)\), consider the following regression problem_
\[\min_{x\in\mathbb{R}^{d}}\|A^{\top}Ax-b_{2}\|_{2}. \tag{32}\]
_There is an algorithm that runs in time_
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{2})\cdot\log^{2}(n/\delta_{2})).\]
_and outputs \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|A^{\top}Ax^{\prime}-b_{2}\|_{2}\leq\epsilon_{2}\|b_{2}\|_{2}\]
_holds with probability \(1-\delta_{2}\)._
Proof.: Let \(S_{2}\in\mathbb{R}^{s_{2}\times n}\) be a \(\mathsf{SE}(n,d,\epsilon_{\text{\rm{ose}}}=0.1,\delta)\) (Definition C.3) for \(A\), then with probability \(1-\delta\), the following holds for any \(x\in\mathbb{R}^{d}\)
\[\|S_{2}Ax\|_{2}=(1\pm\epsilon_{\text{\rm{ose}}})\|Ax\|_{2}. \tag{33}\]
Suppose \(R\in\mathbb{R}^{d\times d}\) is computed so that \(S_{2}AR\) has orthonormal columns, e.g., via QR decomposition.
We use \(R\) as a preconditioner for matrix \(A\).
Formally, for any \(x\in\mathbb{R}^{d}\) satisfying \(\|x\|_{2}=1\), we have
\[\|ARx\|_{2} = (1\pm\epsilon_{\text{\rm{ose}}})\|S_{2}ARx\|_{2} \tag{34}\] \[= (1\pm\epsilon_{\text{\rm{ose}}}),\]
where the first step follows from Eq. (33) and the second step follows from the fact that \(S_{2}AR\) has orthonormal columns.
Taking the squares on both sides, we have
\[\|ARx\|_{2}^{2}=(1\pm 3\epsilon_{\rm ose}).\]
By Fact D.1, the above equation implies the following
\[\|R^{\top}A^{\top}AR-I\|\leq 3\epsilon_{\rm ose}\]
Hence, using the definition of spectral norm, we know for any \(\|x\|_{2}=1\),
\[\|R^{\top}A^{\top}ARx\|_{2}\leq(1+3\epsilon_{\rm ose}),\]
Similarly, we can prove the other direction
\[\|R^{\top}A^{\top}ARx\|_{2}\geq(1-3\epsilon_{\rm ose})\]
We choose \(\epsilon_{\rm ose}=0.1\), and consider the regression problem
\[\min_{z\in\mathbb{R}^{d}}\|R^{\top}A^{\top}ARz-R^{\top}b_{2}\|_{2}. \tag{35}\]
By lemma D.3, using gradient descent, after \(T_{2}=\log{(1/\epsilon_{2})}\) iterations, we can find \(z_{t}\) satisfying
\[\|R^{\top}A^{\top}AR(z_{t}-z^{*})\|_{2}\leq\epsilon_{2}\|R^{\top}A^{\top}AR(z_{0}-z^{*})\|_{2}, \tag{36}\]
where
\[z^{*}=(R^{\top}A^{\top}AR)^{-1}R^{\top}b_{2} \tag{37}\]
is the optimal solution to Eq. (35).
We are going to show that
\[x_{t}=Rz_{t} \tag{38}\]
is a \(2\kappa\epsilon_{2}\)-approximate solution to the original regression problem (Eq. (32)), i.e.,
\[\|A^{\top}Ax_{t}-b_{2}\|_{2}\leq 2\kappa\epsilon_{2}\|b_{2}\|_{2}.\]
Plugging Eq. (37) into Eq. (36), we obtain
\[\|R^{\top}A^{\top}ARz_{t}-R^{\top}b_{2}\|_{2}\leq\epsilon_{2}\|R^{\top}A^{ \top}ARz_{0}-R^{\top}b_{2}\|_{2}\]
Plugging in the definition \(z_{0}=0\) and Eq. (38), we have
\[\|R^{\top}A^{\top}Ax_{t}-R^{\top}b_{2}\|_{2} \leq\epsilon_{2}\cdot\|R^{\top}b_{2}\|_{2}\] \[\leq\epsilon_{2}\cdot\sigma_{\rm max}(R)\cdot\|b_{2}\|_{2}, \tag{39}\]
where the second step follows from the definition of \(\sigma_{\rm max}(R)\).
On the other hand, we have
\[\|R^{\top}A^{\top}Ax_{t}-R^{\top}b_{2}\|_{2} =\|R^{\top}(A^{\top}Ax_{t}-b_{2})\|_{2}\] \[\geq\sigma_{\rm min}(R^{\top})\|A^{\top}Ax_{t}-b_{2}\|_{2}, \tag{40}\]
where the first step follows from simple algebra and the second step follows from the definition of \(\sigma_{\min}(R^{\top})\).
Putting it all together, we have
\[\|A^{\top}Ax_{t}-b_{2}\|_{2} \leq\epsilon_{2}\kappa(R^{\top})\|b_{2}\|_{2}\] \[=\epsilon_{2}\kappa(R)\|b_{2}\|_{2}\] \[\leq\epsilon_{2}\kappa(AR)\kappa(A)\|b_{2}\|_{2}\] \[\leq 2\epsilon_{2}\kappa(A)\|b_{2}\|_{2},\]
where the first step follows from Eq. (39) and Eq. (40), the second step follows from \(R\) is a square matrix and thus \(\kappa(R)=\kappa(R^{\top})\), the third step follows from Fact A.4, and the last step follows from Eq. (34).
For the running time, the preconditioning time is \(\widetilde{O}(nd+d^{3})\), the number of iteration for gradient descent is \(\log{(\kappa/\epsilon_{2})}\), the running time per iteration is \(\widetilde{O}(nd)\), so the total running time is
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{2})\cdot\log^{2}(n/\delta_{2})).\]
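Analogously, a minimal numpy sketch of the PSD solver of Lemma G.1 (illustration only, with a Gaussian sketch standing in for the subspace embedding): we precondition \(A\) with \(R\) from a QR factorization of \(S_{2}A\) and run the gradient descent of Lemma D.3 on the well-conditioned system \(R^{\top}A^{\top}ARz=R^{\top}b_{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4000, 20
A = rng.standard_normal((n, d)) @ np.diag(np.linspace(1.0, 30.0, d))   # ill-conditioned A
b2 = rng.standard_normal(d)

m = 50 * d
S2 = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the SE of Definition C.3
_, T = np.linalg.qr(S2 @ A)
R = np.linalg.inv(T)                            # S2 A R has orthonormal columns

M = R.T @ (A.T @ A) @ R                         # well conditioned, so Lemma D.3 applies
rhs = R.T @ b2
z = np.zeros(d)
for _ in range(60):
    z = z - M.T @ (M @ z - rhs)                 # gradient descent on ||M z - rhs||^2
x = R @ z                                       # then A^T A x ~= b2

print(np.linalg.norm(A.T @ (A @ x) - b2) / np.linalg.norm(b2))
```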
## Appendix H Fast Attention regressions
In this section, we propose the fast attention regression algorithm and analyze its correctness and running time.
```
1:procedureFastAttentionRegression(\(A\in\mathbb{R}^{n\times d},b_{3}\in\mathbb{R}^{n},n\in\mathbb{Z}_{+},d\in \mathbb{Z}_{+},\epsilon_{3}\in(0,1),\delta_{3}\in(0,1)\))
2:\(\epsilon_{1}\gets 0.1\epsilon_{3}\)
3:\(\delta_{1}\leftarrow\delta_{3}/2\)
4:\(b_{2}\leftarrow\textsc{FastLinearRegression}(A\in\mathbb{R}^{n\times d},b_{3} \in\mathbb{R}^{n},n,d,\epsilon_{1},\delta_{1})\)\(\triangleright\)\(b_{2}\in\mathbb{R}^{d}\)
5:\(\epsilon_{2}\leftarrow\epsilon_{3}/\kappa(A)\)
6:\(\delta_{2}\leftarrow\delta_{3}/2\)
7:\(x^{\prime}\leftarrow\textsc{FastPSDRegression}(A\in\mathbb{R}^{n\times d},b_{2} \in\mathbb{R}^{d},n,d,\epsilon_{2},\delta_{2})\)
8:return\(x^{\prime}\)
9:endprocedure
```
**Algorithm 3** We want to solve regression problem \(\min_{x}\|AA^{\top}Ax-b_{3}\|_{2}\)
**Lemma H.1**.: _Let \(A\in\mathbb{R}^{n\times d}\) be a matrix and \(b_{3}\in\mathbb{R}^{n}\) be a vector. Let \(\kappa\) denote the condition number of \(A\) (see Definition A.2), i.e. \(\kappa=\sigma_{\max}(A)/\sigma_{\min}(A)\)_
_Consider the regression problem (defined in Definition 1.1)_
\[\min_{x\in\mathbb{R}^{d}}\|AA^{\top}Ax-b_{3}\|_{2}.\]
_There exists an algorithm (Algorithm 3) that runs in time_
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{3})\cdot\log^{2}(n/\delta_{3}))\]
_and outputs a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|AA^{\top}Ax^{\prime}-b_{3}\|_{2}\leq(1+\epsilon_{3})\min_{x\in\mathbb{R}^{d }}\|AA^{\top}Ax-b_{3}\|_{2}+\epsilon_{3}\|b_{3}\|_{2}.\]
_holds with probability \(1-\delta_{3}\)._
Proof.: We define OPT as
\[\operatorname{OPT}:=\min_{x\in\mathbb{R}^{d}}\|AA^{\top}Ax-b_{3}\|_{2}.\]
First, we use Algorithm 1 to solve
\[\min_{y\in\mathbb{R}^{d}}\|Ay-b_{3}\|_{2}. \tag{41}\]
Let \(y_{*}\) denote the exact solution to this regression problem.
By Lemma F.1, we can get \(y^{\prime}\in\mathbb{R}^{d}\) such that the following holds with probability \(1-\delta_{1}\),
\[\|Ay^{\prime}-b_{3}\|_{2} \leq (1+\epsilon_{1})\cdot\min_{y\in\mathbb{R}^{d}}\|Ay-b_{3}\|_{2} \tag{42}\] \[\leq (1+\epsilon_{1})\cdot\operatorname{OPT},\]
where the last step follows because vectors of the form \(A^{\top}Ax\) are a subset of all \(y\in\mathbb{R}^{d}\), so the minimum over \(y\) is at most \(\operatorname{OPT}\).
This step takes time
\[O((nd+d^{3})\cdot\log(1/\epsilon_{1})\cdot\log^{2}(n/\delta_{1})).\]
By the triangle inequality, we can show
\[\|y^{\prime}\|_{2} = \|y^{\prime}-y_{*}+y_{*}\|_{2} \tag{43}\] \[\leq \|y^{\prime}-y_{*}\|_{2}+\|y_{*}\|_{2}.\]
To bound the first term of Eq. (43), we have
\[\|y^{\prime}-y_{*}\|_{2} \leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1}\|Ay_{*}-b_{3}\|_ {2} \tag{44}\] \[\leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1}\operatorname{ OPT},\]
where the first step follows from Lemma D.4 and the second step follows from Eq. (42).
To bound the second term of Eq. (43), we have
\[\|y_{*}\|_{2} = \|A^{\dagger}b_{3}\|_{2} \tag{45}\] \[\leq \|A^{\dagger}\|\cdot\|b_{3}\|_{2}\] \[\leq \sigma_{\min}(A)^{-1}\cdot\|b_{3}\|_{2},\]
where the first step follows from \(y_{*}=A^{\dagger}b_{3}\), the second step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), and the third step follows from \(\|A^{\dagger}\|=\sigma_{\min}(A)^{-1}\).
By plugging Eq. (44) and Eq. (45) into Eq. (43), we have
\[\|y^{\prime}\|_{2}\leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1} \operatorname{OPT}+\sigma_{\min}(A)^{-1}\|b_{3}\|_{2}. \tag{46}\]
Let \(b_{2}=y^{\prime}\in\mathbb{R}^{d}\).
Then, using Algorithm 2, we compute \(x^{\prime}\in\mathbb{R}^{d}\) as an approximate solution of
\[\min_{x^{\prime}\in\mathbb{R}^{d}}\|A^{\top}Ax^{\prime}-b_{2}\|_{2}.\]
By Lemma G.1, we find an \(x^{\prime}\in\mathbb{R}^{d}\) such that
\[\|A^{\top}Ax^{\prime}-b_{2}\|_{2}\leq\epsilon_{2}\|b_{2}\|_{2} \tag{47}\]
holds with probability \(1-\delta_{2}\).
This step takes time in
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{2})\cdot\log^{2}(n/\delta_{2})).\]
**Correctness.**
To bound \(\|AA^{\top}Ax^{\prime}-b_{3}\|_{2}\), we have
\[\|AA^{\top}Ax^{\prime}-b_{3}\|_{2} =\|AA^{\top}Ax^{\prime}-Ay^{\prime}+Ay^{\prime}-b_{3}\|_{2}\] \[\leq\|AA^{\top}Ax^{\prime}-Ay^{\prime}\|_{2}+\|Ay^{\prime}-b_{3}\| _{2}\] \[\leq\|AA^{\top}Ax^{\prime}-Ay^{\prime}\|_{2}+(1+\epsilon_{1}) \cdot\operatorname{OPT}, \tag{48}\]
where the first step follows from adding and subtracting the same thing, the second step follows from triangle inequality, and the third step follows from Eq. (42).
Let's consider the first term of Eq. (48).
We have
\[\|AA^{\top}Ax^{\prime}-Ay^{\prime}\|_{2} =\|A(A^{\top}Ax^{\prime}-y^{\prime})\|_{2}\] \[\leq\|A\|\cdot\|A^{\top}Ax^{\prime}-y^{\prime}\|_{2}\] \[\leq\|A\|\cdot\epsilon_{2}\|y^{\prime}\|_{2}\] \[\leq\|A\|\cdot\epsilon_{2}\left(O(\sqrt{\epsilon_{1}})\cdot \sigma_{\min}(A)^{-1}\operatorname{OPT}+\sigma_{\min}(A)^{-1}\|b_{3}\|_{2}\right)\] \[=\|A\|\cdot\epsilon_{2}O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}( A)^{-1}\operatorname{OPT}+\|A\|\cdot\epsilon_{2}\sigma_{\min}(A)^{-1}\|b_{3}\|_{2}, \tag{49}\]
where the first step follows from simple algebra, the second step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), the third step follows from Eq. (47), the fourth step follows from Eq. (46), and the last step follows from simple algebra.
Then, to bound the first term of Eq. (49), we have
\[\|A\|\cdot\epsilon_{2}O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A )^{-1}\operatorname{OPT} \leq\sigma_{\max}(A)\sigma_{\min}(A)^{-1}\cdot\epsilon_{2}O( \sqrt{\epsilon_{1}})\cdot\operatorname{OPT}\] \[=\sigma_{\max}(A)\sigma_{\min}(A)^{-1}\cdot O(\sqrt{\epsilon_{1} }\epsilon_{2})\cdot\operatorname{OPT}\] \[=O(\sqrt{\epsilon_{1}}\epsilon_{2})\kappa(A)\operatorname{OPT}\] \[=O(\sqrt{\epsilon_{1}}\epsilon_{3})\operatorname{OPT}, \tag{50}\]
where the first step follows from \(\|A\|\leq\sigma_{\max}(A)\), the second step follows from the property of \(O(\cdot)\), the third step follows from the definition of \(\kappa(A)\) (see Definition A.2), and the last step follows from \(\epsilon_{2}=\epsilon_{3}/\kappa(A)\).
Similarly, to bound the second term of Eq. (49), we get
\[\|A\|\cdot\epsilon_{2}\sigma_{\min}(A)^{-1}\|b_{3}\|_{2} \leq\sigma_{\max}(A)\sigma_{\min}(A)^{-1}\cdot\epsilon_{2}\|b_{3} \|_{2}\] \[=\kappa(A)\cdot\epsilon_{2}\|b_{3}\|_{2}\] \[=\epsilon_{3}\|b_{3}\|_{2}, \tag{51}\]
where the first step follows from \(\|A\|\leq\sigma_{\max}(A)\), the second step follows from the definition of \(\kappa(A)\) (see Definition A.2), and the last step follows from \(\epsilon_{2}=\epsilon_{3}/\kappa(A)\).
Plugging Eq. (50) and Eq. (51) into Eq. (49), we get
\[\|AA^{\top}Ax^{\prime}-Ay^{\prime}\|_{2}\leq O(\sqrt{\epsilon_{1}}\epsilon_{3}) \operatorname{OPT}+\epsilon_{3}\|b_{3}\|_{2}. \tag{52}\]
Therefore, by plugging Eq. (52) into (48), we have
\[\|AA^{\top}Ax^{\prime}-b_{3}\|_{2} \leq O(\sqrt{\epsilon_{1}}\epsilon_{3})\operatorname{OPT}+\epsilon_{ 3}\|b_{3}\|_{2}+(1+\epsilon_{1})\cdot\operatorname{OPT}\] \[\leq (1+\epsilon_{3})\cdot\operatorname{OPT}+\epsilon_{3}\|b_{3}\|_{2},\]
where the last step follows from \(O(\epsilon_{1})\leq 1/10\) and \(\epsilon_{1}<\epsilon_{3}/10\).
Therefore, we complete bounding \(\|AA^{\top}Ax^{\prime}-b_{3}\|_{2}\).
**Running time**
The overall running time is
\[O((nd+d^{3})\log(\kappa/\epsilon_{3})\cdot\log^{2}(n/\delta_{3}))\]
**Failure probability.**
By taking a union over two events, the failure probability is at most \(\delta_{1}+\delta_{2}=\delta_{3}\).
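The two-stage structure of Algorithm 3 can be mimicked with exact sub-solvers; the minimal numpy sketch below (illustration only) first solves \(\min_{y}\|Ay-b_{3}\|_{2}\) and then feeds \(y^{\prime}\) as the right-hand side of a PSD solve. With exact sub-solvers the composed residual coincides with \(\operatorname{OPT}\), since \(AA^{\top}Ax^{\prime}=Ay^{\prime}\) when \(x^{\prime}\) solves \(A^{\top}Ax=y^{\prime}\) exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 15
A = rng.standard_normal((n, d))
b3 = rng.standard_normal(n)

# Step 1 (FastLinearRegression in Algorithm 3): y' ~ argmin_y ||A y - b3||_2.
y = np.linalg.lstsq(A, b3, rcond=None)[0]

# Step 2 (FastPSDRegression in Algorithm 3): x' ~ argmin_x ||A^T A x - y'||_2.
x = np.linalg.solve(A.T @ A, y)

res = np.linalg.norm(A @ (A.T @ (A @ x)) - b3)
opt = np.linalg.norm(A @ y - b3)   # = min_x ||A A^T A x - b3|| for full-column-rank A
print(res, opt)                    # the two residuals coincide with exact sub-solvers
```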
## Appendix I Four Matrices
In this section, we provide the four matrices algorithm and analyze its correctness and running time.
```
1:procedureFourMatrices(\(A\in\mathbb{R}^{n\times d},b_{4}\in\mathbb{R}^{d},n\in\mathbb{Z}_{+},d\in\mathbb{Z}_{+}, \epsilon_{4}\in(0,1),\delta_{4}\in(0,1)\))
2:\(\epsilon_{2}\gets 0.1\epsilon_{4}/\kappa(A)^{2}\)
3:\(\delta_{2}\leftarrow\delta_{4}/2\)
4:\(b_{2}\leftarrow\textsc{FastPSDRegression}(A\in\mathbb{R}^{n\times d},b_{4}\in \mathbb{R}^{d},n,d,\epsilon_{2},\delta_{2})\)
5:\(x^{\prime}\leftarrow\textsc{FastPSDRegression}(A\in\mathbb{R}^{n\times d},b_{2} \in\mathbb{R}^{d},n,d,\epsilon_{2},\delta_{2})\)
6:return\(x^{\prime}\)
7:endprocedure
```
**Algorithm 4** We want to solve regression problem \(\min_{x\in\mathbb{R}^{d}}\|A^{\top}AA^{\top}Ax-b_{4}\|_{2}\)
**Lemma I.1**.: _Let \(A\in\mathbb{R}^{n\times d}\) be a matrix and \(b_{4}\in\mathbb{R}^{d}\) be a vector._
_Let \(\kappa\) denote the condition number of \(A\)._
_Consider the regression problem_
\[\min_{x\in\mathbb{R}^{d}}\|A^{\top}AA^{\top}Ax-b_{4}\|_{2}.\]
_There exists an algorithm that runs in time_
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{4})\cdot\log^{2}(n/\delta_{4}))\]
_and outputs a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2}\leq\epsilon_{4}\|b_{4}\|_{2}.\]
_holds with probability \(1-\delta_{4}\)._
Proof.: First, we use Algorithm 2 to solve
\[\min_{y\in\mathbb{R}^{d}}\|A^{\top}Ay-b_{4}\|_{2}.\]
Let \(y_{*}\) denote the exact solution to this regression problem.
By Lemma G.1, we get
\[\|A^{\top}Ay^{\prime}-b_{4}\|_{2}\leq\epsilon_{2}\|b_{4}\|_{2}. \tag{53}\]
This step takes time
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{2})\cdot\log^{2}(n/\delta_{2})).\]
By the triangle inequality, we can show that
\[\|y^{\prime}\|_{2} \leq\|y^{\prime}-y_{*}+y_{*}\|_{2}\] \[\leq\|y^{\prime}-y_{*}\|_{2}+\|y_{*}\|_{2}. \tag{54}\]
To bound the first term of Eq. (54), we have
\[\|y^{\prime}-y_{*}\|_{2}\leq\epsilon_{2}\cdot\sigma_{\min}(A)^{-2}\cdot\|b_{4 }\|_{2}. \tag{55}\]
where the last step follows from Lemma D.5.
To bound the second term of Eq. (54), we have
\[\|y_{*}\|_{2} =\|(A^{\top}A)^{\dagger}b_{4}\|_{2}\] \[\leq\|(A^{\top}A)^{\dagger}\|\cdot\|b_{4}\|_{2}\] \[\leq\sigma_{\min}(A)^{-2}\cdot\|b_{4}\|_{2}, \tag{56}\]
where the first step follows from the definition of \(y_{*}\), the second step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), the last step follows from Fact A.4.
Then, plugging Eq. (55) and Eq. (56) into Eq. (54), we have
\[\|y^{\prime}\|_{2} \leq(\epsilon_{2}+1)\sigma_{\min}(A)^{-2}\|b_{4}\|_{2}\] \[\leq 2\sigma_{\min}(A)^{-2}\|b_{4}\|_{2}, \tag{57}\]
where the first step follows from simple algebra and the second step follows from \(\epsilon_{2}<1\).
Let \(b_{2}=y^{\prime}\in\mathbb{R}^{d}\).
Let \(x^{\prime}\in\mathbb{R}^{d}\).
Then, using Algorithm 2 again, we solve
\[\min_{y\in\mathbb{R}^{d}}\|A^{\top}Ax^{\prime}-b_{2}\|_{2}.\]
By Lemma G.1, we can find \(x^{\prime}\in\mathbb{R}^{d}\) such that
\[\|A^{\top}Ax^{\prime}-b_{2}\|_{2}\leq\epsilon_{2}\|b_{2}\|_{2} \tag{58}\]
holds with probability \(1-\delta_{2}\).
This step takes time
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{2})\cdot\log^{2}(n/\delta_{2})).\]
**Correctness.**
To bound \(\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2}\), we have
\[\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2} =\|A^{\top}AA^{\top}Ax^{\prime}-A^{\top}Ay^{\prime}+A^{\top}Ay^{ \prime}-b_{4}\|_{2}\] \[\leq\|A^{\top}AA^{\top}Ax^{\prime}-A^{\top}Ay^{\prime}\|_{2}+\|A^ {\top}Ay^{\prime}-b_{4}\|_{2}\] \[\leq\|A^{\top}AA^{\top}Ax^{\prime}-A^{\top}Ay^{\prime}\|_{2}+ \epsilon_{2}\|b_{4}\|_{2} \tag{59}\]
where the first step follows from adding and subtracting the same thing, the second step follows from triangle inequality, and the third step follows from Eq. (53).
Let's consider the first term of Eq. (59).
We have
\[\|A^{\top}AA^{\top}Ax^{\prime}-A^{\top}Ay^{\prime}\|_{2} =\|A^{\top}A(A^{\top}Ax^{\prime}-y^{\prime})\|_{2}\] \[\leq\|A^{\top}A\|\cdot\|A^{\top}Ax^{\prime}-y^{\prime}\|_{2}\] \[\leq\|A^{\top}A\|\cdot\epsilon_{2}\|y^{\prime}\|_{2}\] \[\leq\|A^{\top}A\|\cdot\epsilon_{2}\cdot(2\sigma_{\min}(A)^{-2}\|b_{4}\|_{2})\] \[\leq\sigma_{\max}(A)^{2}\cdot\epsilon_{2}\cdot(2\sigma_{\min}(A)^{-2}\|b_{4}\|_{2})\] \[\leq 2\kappa(A)^{2}\epsilon_{2}\|b_{4}\|_{2}, \tag{60}\]
where the first step follows from simple algebra, the second step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), the third step follows from Eq. (58), the fourth step follows from Eq. (57), the fifth step follows from Fact A.4, and the last step follows from the definition of \(\kappa\) (see Definition A.2).
Plugging Eq. (60) into Eq. (59) and using \(\epsilon_{2}=0.1\epsilon_{4}/\kappa(A)^{2}\) (line 2 of Algorithm 4), we obtain
\[\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2}\leq 2\kappa(A)^{2}\epsilon_{2}\|b_{4}\|_{2}+\epsilon_{2}\|b_{4}\|_{2}\leq\epsilon_{4}\|b_{4}\|_{2}.\]
Therefore, we complete bounding \(\|A^{\top}AA^{\top}Ax^{\prime}-b_{4}\|_{2}\).
**Running time**
The total running time is
\[O((nd+d^{3})\cdot\log(\kappa/\epsilon_{4})\cdot\log^{2}(n/\delta_{4})).\]
**Failure probability**
By taking a union over two events, the failure probability is at most \(\delta_{2}+\delta_{2}=\delta_{4}\).
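Algorithm 4 composes two PSD solves; with exact sub-solvers it amounts to solving \((A^{\top}A)^{2}x=b_{4}\) in two stages, as in this minimal numpy sketch (illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 10
A = rng.standard_normal((n, d))
b4 = rng.standard_normal(d)

G = A.T @ A
b2 = np.linalg.solve(G, b4)     # first PSD solve:  A^T A y = b4
x = np.linalg.solve(G, b2)      # second PSD solve: A^T A x = b2

print(np.linalg.norm(G @ (G @ x) - b4))   # ~ 0: x solves (A^T A)^2 x = b4
```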
## Appendix J Even Number of Matrices Regression
In this section, we formulate Algorithm 5, which can solve the regression problem with even number of matrices. In Section J.1, we prove five important properties of our induction hypothesis. In Section J.2, we combine these properties and utilize mathematical induction to show the correctness and running time of this algorithm.
### Induction Hypothesis
In this section, we present our induction hypothesis and its proof.
**Lemma J.1** (Induction Hypothesis).: _Let \(C>1000\) denote a sufficiently large constant. If for all \(i\in[k]\), we have_
* \(\|(A^{\top}A)^{i}b_{i}-b_{0}\|_{2}\leq\epsilon_{i}\|b_{0}\|_{2}\)__
* \(\|b_{i}\|_{2}\leq 2\sigma_{\min}(A)^{-2i}\|b_{0}\|_{2}\)__
* \(\epsilon_{i-1}\leq 0.5\epsilon_{i}\)
* _The running time is_ \(C\cdot((nd+d^{3})\cdot k\cdot\log(\kappa(A)/\epsilon_{k})\cdot\log(1/\delta_{k}))\)__
* _The failure probability is_ \(\delta_{1}+\delta_{2}+\cdots+\delta_{k}\)__
_Then for \(i=k+1\), we have_
* \(\|(A^{\top}A)^{k+1}b_{k+1}-b_{0}\|_{2}\leq\epsilon_{k+1}\|b_{0}\|_{2}\)__
* \(\|b_{k+1}\|_{2}\leq 2\sigma_{\min}(A)^{-2(k+1)}\|b_{0}\|_{2}\)__
* \(\epsilon_{k}\leq 0.5\epsilon_{k+1}\)__
* _The running time is_ \(C\cdot((nd+d^{3})\cdot(k+1)\cdot\log(\kappa(A)/\epsilon_{k+1})\cdot\log(1/ \delta_{k+1}))\)__
* _The failure probability is_ \(\delta_{1}+\delta_{2}+\cdots+\delta_{k+1}\)__
Proof.: **Proof of Part 1.**
Running our two matrices version PSD regression, we can obtain \(b_{k+1}\) which is the approximate solution of
\[\min_{x\in\mathbb{R}^{d}}\|A^{\top}Ax-b_{k}\|_{2}\]
then we have
\[\|A^{\top}Ab_{k+1}-b_{k}\|_{2}\leq 0.1\epsilon_{k+1}\kappa(A)^{-2k}\|b_{k}\|_ {2} \tag{61}\]
The running time for this additional step is
\[0.1C\cdot((nd+d^{3})\cdot\log(\kappa(A)^{k}/\epsilon_{k+1})\cdot\log^{2}(n/ \delta_{k+1})).\]
We have
\[\|(A^{\top}A)^{k+1}b_{k+1}-b_{0}\|_{2} = \|(A^{\top}A)^{k+1}b_{k+1}-(A^{\top}A)^{k}b_{k}+(A^{\top}A)^{k}b_{ k}-b_{0}\|_{2}\] \[\leq \|(A^{\top}A)^{k+1}b_{k+1}-(A^{\top}A)^{k}b_{k}\|_{2}+\|(A^{\top}A )^{k}b_{k}-b_{0}\|_{2}\] \[= \|(A^{\top}A)^{k}(A^{\top}Ab_{k+1}-b_{k})\|_{2}+\|(A^{\top}A)^{k} b_{k}-b_{0}\|_{2}\]
\[\leq \|(A^{\top}A)^{k}\|\cdot\|A^{\top}Ab_{k+1}-b_{k}\|_{2}+\|(A^{\top}A)^{ k}b_{k}-b_{0}\|_{2}\] \[\leq \|(A^{\top}A)^{k}\|\cdot\|A^{\top}Ab_{k+1}-b_{k}\|_{2}+\epsilon_{k }\|b_{0}\|_{2}\] \[= \sigma_{\max}(A)^{2k}\cdot\|A^{\top}Ab_{k+1}-b_{k}\|_{2}+\epsilon _{k}\|b_{0}\|_{2}\] \[\leq \sigma_{\max}(A)^{2k}\cdot 0.1\epsilon_{k+1}\kappa(A)^{-2k}\|b_{k} \|_{2}+\epsilon_{k}\|b_{0}\|_{2}\] \[\leq 0.2\epsilon_{k+1}\|b_{0}\|_{2}+\epsilon_{k}\|b_{0}\|_{2}\] \[\leq \epsilon_{k+1}\|b_{0}\|_{2}, \tag{62}\]
where the first step follows from adding and subtracting the same thing, the second step follows from the triangle inequality, the third step follows from simple algebra, the fourth step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), the fifth step follows from the assumption in the Lemma statement, the sixth step follows from Fact A.4, the seventh step follows from Eq. (61), the eighth step follows from the assumption in the Lemma statement, and the last step follows from \(\epsilon_{k}\leq 0.5\epsilon_{k+1}\).
**Proof of Part 2.**
We have
\[\|(A^{\top}A)^{k+1}b_{k+1}\|_{2} \leq \|(A^{\top}A)^{k+1}b_{k+1}-b_{0}\|_{2}+\|b_{0}\|_{2} \tag{63}\] \[\leq (1+\epsilon_{k+1})\|b_{0}\|_{2}\] \[\leq 2\|b_{0}\|_{2},\]
where the first step follows triangle inequality, the second step follows from Part 1, and the third step follows from \(\epsilon_{k+1}\leq 1\).
Thus,
\[\|b_{k+1}\|_{2} \leq \|((A^{\top}A)^{k+1})^{-1}\cdot(A^{\top}A)^{k+1}b_{k+1}\|_{2}\] \[\leq \|((A^{\top}A)^{k+1})^{-1}\|\cdot\|(A^{\top}A)^{k+1}b_{k+1}\|_{2}\] \[\leq \|((A^{\top}A)^{k+1})^{-1}\|\cdot 2\|b_{0}\|\] \[\leq 2\sigma_{\min}(A)^{-2(k+1)}\|b_{0}\|_{2},\]
where the first step follows from \(((A^{\top}A)^{k+1})^{-1}\cdot(A^{\top}A)^{k+1}=I\), the second step follows from \(\|Ax\|_{2}=\|A\|\|x\|_{2}\), the third step follows from Eq. (63), and the last step follows from Fact A.4.
**Proof of Part 3.**
We can choose \(\epsilon\) to satisfy these conditions. Thus, it automatically holds.
**Proof of Part 4.**
The proof follows by adding the time from the previous step and this step.
**Proof of Part 5.**
It follows from taking union bound.
### Main Result
In this section, we present and prove our main result.
**Theorem J.2**.: _Let \(A\in\mathbb{R}^{n\times d}\) be a matrix and \(b\in\mathbb{R}^{d}\) be a vector._
_Let \(\kappa\) denote the condition number of \(A\)._
_Consider the regression problem_
\[\min_{x\in\mathbb{R}^{d}}\|(A^{\top}A)^{j}x-b\|_{2}.\]
_Let \(\epsilon_{\mathrm{final}}\in(0,0.1)\) denote the accuracy parameter. Let \(\delta_{\mathrm{final}}\in(0,0.1)\) denote the failure probability._
_There exists an algorithm that runs in time_
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\mathrm{final}}) \cdot\log^{2}(jn/\delta_{\mathrm{final}}))\]
_and outputs a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|(A^{\top}A)^{j}x^{\prime}-b\|_{2}\leq\epsilon_{\mathrm{final}} \|b\|_{2}.\]
_holds with probability \(1-\delta_{\mathrm{final}}\)._
Proof.: We use mathematical induction to prove this.
**Base case:**
When \(j=1\), we have
\[\min_{x\in\mathbb{R}^{d}}\|(A^{\top}A)^{1}x-b\|_{2}=\min_{x\in\mathbb{R}^{d}}\|A^{\top}Ax-b\|_{2}.\]
The base case follows from Lemma G.1
**Inductive case:**
We use
\[\delta_{1}=\delta_{2}=\cdots=\delta_{j}=\delta_{\mathrm{final}}/j\]
For each \(k\in[j]\), we choose \(\epsilon_{k}=\epsilon_{\mathrm{final}}\cdot 0.5^{j-k}\). With these choices, the inductive step follows from Lemma J.1, and taking \(k=j\) gives the stated accuracy, running time, and failure probability.
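The induction mirrors a simple loop: each iteration replaces the right-hand side by an (approximate) solution of a two-matrix PSD solve. A minimal numpy sketch with exact solves standing in for Algorithm 2 (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, j = 300, 8, 4
A = rng.standard_normal((n, d))
b0 = rng.standard_normal(d)

G = A.T @ A
b = b0
for _ in range(j):              # peel off one A^T A factor per step
    b = np.linalg.solve(G, b)   # in the fast algorithm this is a sketched PSD solve
x = b

print(np.linalg.norm(np.linalg.matrix_power(G, j) @ x - b0))   # ~ 0
```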
## Appendix K Odd Number of Matrices Regression
In this section, we provide the odd power algorithm and analyze its correctness and running time.
```
1:procedureOddPowers(\(A\in\mathbb{R}^{n\times d},b\in\mathbb{R}^{n},n\in\mathbb{Z}_{+},d\in\mathbb{Z}_{+},j \in\mathbb{Z}_{+},\epsilon_{\mathrm{final}}\in(0,1),\delta_{\mathrm{final}} \in(0,1)\))
2:\(\epsilon_{1}\gets 0.1\epsilon_{\mathrm{final}}\)
3:\(\delta_{1}\leftarrow\delta_{\mathrm{final}}/2\)
4:\(b_{1}\leftarrow\textsc{FastLinearRegression}(A\in\mathbb{R}^{n\times d},b\in \mathbb{R}^{n},n,d,\epsilon_{1},\delta_{1})\)\(\triangleright\)\(b_{1}\in\mathbb{R}^{d}\)
5:\(\epsilon_{\mathrm{even}}\leftarrow\epsilon_{\mathrm{final}}/\kappa(A)\)
6:\(\delta_{\mathrm{even}}\leftarrow\delta_{\mathrm{final}}/2\)
7:\(x^{\prime}\leftarrow\textsc{EvenPowers}(A\in\mathbb{R}^{n\times d},b_{1}\in \mathbb{R}^{d},n,d,j,\epsilon_{\mathrm{even}},\delta_{\mathrm{even}})\)
8:return\(x^{\prime}\)
9:endprocedure
```
**Algorithm 6** We want to solve regression problem \(\min_{x\in\mathbb{R}^{d}}\|A(A^{\top}A)^{j}x-b\|_{2}\)
**Theorem K.1**.: _Let \(A\in\mathbb{R}^{n\times d}\) be a matrix and \(b\in\mathbb{R}^{n}\) be a vector._
_Let \(\kappa\) denote the condition number of \(A\)._
_Consider the regression problem_
\[\min_{x\in\mathbb{R}^{d}}\|A(A^{\top}A)^{j}x-b\|_{2}.\]
_Let \(\epsilon_{\rm final}\in(0,0.1)\) denote the accuracy parameter. Let \(\delta_{\rm final}\in(0,0.1)\) denote the failure probability._
_There exists an algorithm that runs in time_
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\rm final})\cdot\log^{2}(jn/\delta_ {\rm final}))\]
_and outputs a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that_
\[\|A(A^{\top}A)^{j}x^{\prime}-b\|_{2}\leq(1+\epsilon_{\rm final})\min_{x\in\mathbb{R}^{d}}\|A(A^{\top}A)^{j}x-b\|_{2}+\epsilon_{\rm final}\|b\|_{2}.\]
_holds with probability \(1-\delta_{\rm final}\)._
Proof.: For convenience, we use \(b_{\rm odd}\) to denote \(b\).
We define \(\mathrm{OPT}\) as
\[\mathrm{OPT}:=\min_{x\in\mathbb{R}^{d}}\|A(A^{\top}A)^{j}x-b_{\rm odd}\|_{2}.\]
First, we use Algorithm 1 to solve
\[\min_{y\in\mathbb{R}^{d}}\|Ay-b_{\rm odd}\|_{2}. \tag{64}\]
By Lemma F.1, we can get \(y^{\prime}\in\mathbb{R}^{d}\) such that the following holds with probability \(1-\delta_{1}\),
\[\|Ay^{\prime}-b_{\rm odd}\|_{2} \leq (1+\epsilon_{1})\cdot\min_{y\in\mathbb{R}^{d}}\|Ay-b_{\rm odd} \|_{2} \tag{65}\] \[\leq (1+\epsilon_{1})\cdot\mathrm{OPT},\]
where the last step follows because vectors of the form \((A^{\top}A)^{j}x\) are a subset of all \(y\in\mathbb{R}^{d}\), so the minimum over \(y\) is at most \(\operatorname{OPT}\).
This step takes time
\[O((nd+d^{3})\cdot\log(1/\epsilon_{1})\cdot\log^{2}(n/\delta_{1})).\]
By the triangle inequality, we can show
\[\|y^{\prime}\|_{2} = \|y^{\prime}-y_{*}+y_{*}\|_{2} \tag{66}\] \[\leq \|y^{\prime}-y_{*}\|_{2}+\|y_{*}\|_{2}.\]
To bound the first term of Eq. (66), we have
\[\|y^{\prime}-y_{*}\|_{2} \leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1}\|Ay_{*}-b_{ \rm odd}\|_{2} \tag{67}\] \[\leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1}\,\mathrm{OPT},\]
where the first step follows from Lemma D.4 and the second step follows from Eq. (65).
To bound the second term of Eq. (66), we have
\[\|y_{*}\|_{2} = \|A^{\dagger}b_{\rm odd}\|_{2} \tag{68}\] \[\leq \|A^{\dagger}\|\cdot\|b_{\rm odd}\|_{2}\] \[\leq \sigma_{\min}(A)^{-1}\cdot\|b_{\rm odd}\|_{2},\]
where the first step follows from \(y_{*}=A^{\dagger}b_{\rm odd}\), the second step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), and the third step follows from \(\|A^{\dagger}\|=\sigma_{\min}(A)^{-1}\).
By plugging Eq. (67) and Eq. (68) into Eq. (66), we have
\[\|y^{\prime}\|_{2}\leq O(\sqrt{\epsilon_{1}})\cdot\sigma_{\min}(A)^{-1}\,{\rm OPT }+\sigma_{\min}(A)^{-1}\|b_{\rm odd}\|_{2}. \tag{69}\]
Let \(b_{\rm even}=y^{\prime}\in\mathbb{R}^{d}\).
Then, using Algorithm 5, we compute \(x^{\prime}\in\mathbb{R}^{d}\) as an approximate solution of
\[\min_{x^{\prime}\in\mathbb{R}^{d}}\|(A^{\top}A)^{j}x^{\prime}-b_{\rm even}\|_{ 2}.\]
By Theorem J.2, we find an \(x^{\prime}\in\mathbb{R}^{d}\) such that
\[\|(A^{\top}A)^{j}x^{\prime}-b_{\rm even}\|_{2}\leq\epsilon_{\rm even}\|b_{ \rm even}\|_{2} \tag{70}\]
holds with probability \(1-\delta_{2}\).
This step takes time in
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\rm even})\cdot\log^{2}(n/\delta _{\rm even})).\]
**Correctness.**
To bound \(\|A(A^{\top}A)^{j}x^{\prime}-b_{\rm odd}\|_{2}\), we have
\[\|A(A^{\top}A)^{j}x^{\prime}-b_{\rm odd}\|_{2} = \|A(A^{\top}A)^{j}x^{\prime}-Ay^{\prime}+Ay^{\prime}-b_{\rm odd }\|_{2} \tag{71}\] \[\leq \|A(A^{\top}A)^{j}x^{\prime}-Ay^{\prime}\|_{2}+\|Ay^{\prime}-b_{ \rm odd}\|_{2}\] \[\leq \|A(A^{\top}A)^{j}x^{\prime}-Ay^{\prime}\|_{2}+(1+\epsilon_{1}) \cdot{\rm OPT},\]
where the first step follows from adding and subtracting the same thing, the second step follows from the triangle inequality, and the third step follows from Eq. (65).
Let's consider the first term of Eq. (71).
We have
\[\|A(A^{\top}A)^{j}x^{\prime}-Ay^{\prime}\|_{2} = \|A((A^{\top}A)^{j}x^{\prime}-y^{\prime})\|_{2} \tag{72}\] \[\leq \|A\|\cdot\|(A^{\top}A)^{j}x^{\prime}-y^{\prime}\|_{2}\] \[\leq \|A\|\cdot\epsilon_{\rm even}\|y^{\prime}\|_{2}\] \[\leq \|A\|\cdot\epsilon_{\rm even}\left(O(\sqrt{\epsilon_{1}})\cdot \sigma_{\min}(A)^{-1}\,{\rm OPT}+\sigma_{\min}(A)^{-1}\|b_{\rm odd}\|_{2}\right)\] \[= \|A\|\cdot\epsilon_{\rm even}O(\sqrt{\epsilon_{1}})\cdot\sigma_{ \min}(A)^{-1}\,{\rm OPT}+\|A\|\cdot\epsilon_{\rm even}\sigma_{\min}(A)^{-1} \|b_{\rm odd}\|_{2},\]
where the first step follows from simple algebra, the second step follows from \(\|Ax\|_{2}\leq\|A\|\|x\|_{2}\), the third step follows from Eq. (70), the fourth step follows from Eq. (69), and the last step follows from simple algebra.
Then, to bound the first term of Eq. (72), we have
\[\|A\|\cdot\epsilon_{\rm even}O(\sqrt{\epsilon_{1}})\cdot\sigma_{ \min}(A)^{-1}\,{\rm OPT} \leq \sigma_{\max}(A)\sigma_{\min}(A)^{-1}\cdot\epsilon_{\rm even}O( \sqrt{\epsilon_{1}})\cdot{\rm OPT} \tag{73}\] \[= \sigma_{\max}(A)\sigma_{\min}(A)^{-1}\cdot O(\sqrt{\epsilon_{1}} \epsilon_{\rm even})\cdot{\rm OPT}\] \[= O(\sqrt{\epsilon_{1}}\epsilon_{\rm even})\kappa(A)\,{\rm OPT}\] \[= O(\sqrt{\epsilon_{1}}\epsilon_{\rm final})\,{\rm OPT},\]
where the first step follows from \(\|A\|\leq\sigma_{\max}(A)\), the second step follows from the property of \(O(\cdot)\), the third step follows from the definition of \(\kappa(A)\) (see Definition A.2), and the last step follows from \(\epsilon_{\mathrm{even}}=\epsilon_{\mathrm{final}}/\kappa(A)\).
Similarly, to bound the second term of Eq. (72), we get
\[\|A\|\cdot\epsilon_{\mathrm{even}}\sigma_{\min}(A)^{-1}\|b_{ \mathrm{odd}}\|_{2} \leq\sigma_{\max}(A)\sigma_{\min}(A)^{-1}\cdot\epsilon_{\mathrm{ even}}\|b_{\mathrm{odd}}\|_{2}\] \[=\kappa(A)\cdot\epsilon_{\mathrm{even}}\|b_{\mathrm{odd}}\|_{2}\] \[=\epsilon_{\mathrm{final}}\|b_{\mathrm{odd}}\|_{2}, \tag{74}\]
where the first step follows from \(\|A\|\leq\sigma_{\max}(A)\), the second step follows from the definition of \(\kappa(A)\) (see Definition A.2), and the last step follows from \(\epsilon_{\mathrm{even}}=\epsilon_{\mathrm{final}}/\kappa(A)\).
Plugging Eq. (73) and Eq. (74) into Eq. (72), we get
\[\|A(A^{\top}A)^{j}x^{\prime}-Ay^{\prime}\|_{2}\leq O(\sqrt{ \epsilon_{1}}\epsilon_{\mathrm{final}})\operatorname{OPT}+\epsilon_{\mathrm{ final}}\|b_{\mathrm{odd}}\|_{2}. \tag{75}\]
Therefore, by plugging Eq. (75) into (71), we have
\[\|A(A^{\top}A)^{j}x^{\prime}-b_{\mathrm{odd}}\|_{2} \leq O(\sqrt{\epsilon_{1}}\epsilon_{\mathrm{final}})\operatorname {OPT}+\epsilon_{\mathrm{final}}\|b_{\mathrm{odd}}\|_{2}+(1+\epsilon_{1}) \cdot\operatorname{OPT}\] \[\leq(1+\epsilon_{\mathrm{final}})\cdot\operatorname{OPT}+ \epsilon_{\mathrm{final}}\|b_{\mathrm{odd}}\|_{2},\]
where the last step follows from \(O(\epsilon_{1})\leq 1/10\) and \(\epsilon_{1}<\epsilon_{\mathrm{final}}/10\).
Therefore, we complete bounding \(\|A(A^{\top}A)^{j}x^{\prime}-b_{\mathrm{odd}}\|_{2}\).
**Running time**
The overall running time is
\[O((nd+d^{3})\cdot j\cdot\log(\kappa/\epsilon_{\mathrm{final}}) \cdot\log^{2}(n/\delta_{\mathrm{final}})).\]
**Failure probability.**
By taking a union over two events, the failure probability is at most \(\delta_{1}+\delta_{\mathrm{even}}=\delta_{\mathrm{final}}\).
## Appendix L Attention Kernel
In Section L.1, we discuss fast regression for the gaussian kernel (see Algorithm 7) and analyze its properties. In Section L.2, we discuss our algorithm for sketching the vector \(x^{\otimes p}\) with limited randomness (see Algorithm 8) and analyze its properties.
**Theorem L.1** (Theorem 3 in [1]).: _For every positive integers \(p,d,n\), every \(\epsilon,s_{\lambda}>0\), there exists a distribution on linear sketches \(\Pi^{p}\in\mathbb{R}^{m\times d^{p}}\) such that_
1. _If_ \[m=\widetilde{\Omega}(ps_{\lambda}^{2}\epsilon^{-2}),\] _then_ \(\Pi^{p}\) _is an_ \((n,d^{p},\epsilon,1/\operatorname{poly}(n),s_{\lambda})\)_-_SSE _(see Definition_ C.4_)._
2. _If_ \[m=\widetilde{\Omega}(p\epsilon^{-2}),\] _then_ \(\Pi^{p}\) _has the_ \(\operatorname{\mathsf{FAMP}}(n,\epsilon,1/\operatorname{poly}(n))\) _(Definition_ C.6_)._
_Moreover, in the setting of 1., for any \(X\in\mathbb{R}^{d\times n}\), if \(A\in\mathbb{R}^{d^{p}\times n}\) is the matrix whose columns are obtained by a \(p\)-fold self-tensoring of each column of \(X\), then the matrix \(\Pi^{p}A\) can be computed in time_
\[\widetilde{O}(pmn+p^{3/2}s_{\lambda}\epsilon^{-1}\operatorname{nnz}(X)).\]
### Fast Regression for the Gaussian Kernel
In this section, we discuss Algorithm 7 and its properties.
```
1:procedurePreconditionedGradientDescent(\(X,y,\beta\))\(\triangleright\) Theorem L.3
2:\(\triangleright\)\(X\in\mathbb{R}^{d\times n},y\in\mathbb{R}^{n}\), \(\beta\) is an upper bound on \(\operatorname{srank}(G)\)
3:\(m\gets O(\beta\log^{2}(nd/\epsilon\delta)\log(n/\delta)/\epsilon^{2})\)
4:\(s\leftarrow\Omega(m\log(mn/\epsilon_{0}\delta)\log(n/\delta)/\epsilon_{0}^{2})\)
5: Let \(W_{g}(X)\in\mathbb{R}^{m\times n}\) be the approximate Gaussian kernel in Theorem d.8
6: Let \(S\in\mathbb{R}^{s\times n}\) be an \(\mathsf{SRHT}\) matrix
7: Compute the SVD of \(SW_{g}(X)^{\top}=U\Sigma V^{\top}\)
8:\(R\gets U\Sigma^{-2}\in\mathbb{R}^{s\times m}\)
9:\(z_{0}\leftarrow\mathbf{0}_{m}\in\mathbb{R}^{m}\)
10:while\(\|W_{g}(X)^{\top}W_{g}(X)S^{\top}Rz_{t}-y\|_{2}\geq\epsilon\)do
11:\(z_{t+1}\gets z_{t}-(R^{\top}SW_{g}(X)^{\top}W_{g}(X)S^{\top}R)^{\top}(R^ {\top}SW_{g}(X)^{\top}W_{g}(X)S^{\top}Rz_{t}-R^{\top}Sy)\)
12:endwhile
13:return\(S^{\top}Rz_{t}\)
14:endprocedure
```
**Algorithm 7** Fast Regression for the Gaussian Kernel
**Lemma L.2** (Theorem 2.4 in [28]).: _Let \(T\) be an \(\mathsf{SRHT}\) matrix as defined in Definition C.1._
_If_
\[m=O(\epsilon^{-2}n\log(nd/\delta)),\]
_then \(T\) is an \((n,d,\epsilon,\delta)\)-\(\mathsf{SE}\)._
**Theorem L.3** (Formal version of Theorem 1.6).: _Let \(G\in\mathbb{R}^{n\times n}\) be the Attention kernel matrix (Definition d.2) for \(X\in\mathbb{R}^{d\times n}\)._
_Write \(G=Z^{\top}Z\)._
_Let \(\kappa\) denote the condition number of \(Z\)._
_If we assume that for all \(i\in[n]\), \(\|x_{i}\|_{2}\leq 1\), then Algorithm 7, with probability at least \(1-\delta\), computes an \(\widehat{x}\) satisfying the following:_
\[\|G\widehat{x}-y\|_{2}\leq\epsilon\|y\|_{2}.\]
_Moreover, let_
\[m=O(\epsilon^{-2}\beta\log^{2}(nd/\epsilon\delta)\log(n/\delta)),\]
_where \(\beta\) is an upper bound of \(\operatorname{srank}(G)\) (see Definition d.5), the vector \(\widehat{x}\in\mathbb{R}^{n}\) can be computed in time_
\[O(mn+\epsilon^{-2}nd+m^{\omega}),\]
_where \(\omega\) is the matrix multiplication exponent._
Proof.: Throughout the proof, we will set \(\widehat{\epsilon}=\epsilon/4\).
By Theorem d.8, we can compute an \(\epsilon\)-approximation to \(Z\) and \(W_{g}(X)\) in time
\[O(\epsilon^{-2}d^{2}\cdot\operatorname{poly}(\log(nd/\epsilon \delta))+nd\log(nd/\epsilon\delta))\]
If we solve the problem:
\[\min_{x\in\mathbb{R}^{n}}\|W_{g}(X)^{\top}W_{g}(X)x-y\|_{2}\]
with solution \(\widehat{x}\), then we have
\[\|W_{g}(X)^{\top}W_{g}(X)\widehat{x}-y\|_{2}\leq(1+\widehat{\epsilon})\min_{x \in\mathbb{R}^{n}}\|Z^{\top}Zx-y\|_{2}.\]
This means the optimal solution for the sketched problem gives a \(\widehat{\epsilon}\)-approximation to the optimal solution to the original problem. We will now show that Algorithm 7 computes the desired solution. By Lemma L.2, with probability at least \(1-\delta\), for any \(x\in\mathbb{R}^{m}\), we have:
\[\|\underbrace{S}_{s\times n}\underbrace{W_{g}(X)^{\top}}_{n\times m}x\|_{2}=( 1\pm\epsilon_{0})\cdot\|\underbrace{W_{g}(X)^{\top}}_{n\times m}x\|_{2}.\]
Note that from Algorithm 7, we have
\[\underbrace{S}_{s\times n}\underbrace{W_{g}(X)^{\top}}_{n\times m} = \underbrace{U}_{s\times m}\underbrace{\Sigma}_{m\times m} \underbrace{V^{\top}}_{m\times m} \tag{76}\] \[\underbrace{R}_{s\times m} = \underbrace{U}_{s\times m}\underbrace{\Sigma^{-2}}_{m\times m} \tag{77}\]
We know that
\[\kappa(R^{\top}SW_{g}(X)^{\top}) = \kappa(\Sigma^{-2}U^{\top}U\Sigma V^{\top}) \tag{78}\] \[= \kappa(\Sigma^{-1}V^{\top})\] \[= \kappa(\Sigma^{-1})\] \[= \kappa(\Sigma)\] \[\leq 2\kappa(W_{g}(X)),\]
where the first step follows from Eq. (76) and Eq. (77), the second step follows from \(U^{\top}U=I\), the third step follows from Fact A.4 and \(V\) is an orthonormal basis, the fourth step follows from the Fact A.4, and the last step follows from \(S\) is a \((n,d,\epsilon_{0},\delta)\)-SE (see Lemma L.2).
We have
\[\kappa(R^{\top}S) \leq \kappa(R^{\top}SW_{g}(X)^{\top})\cdot\kappa(W_{g}(X)) \tag{79}\] \[\leq 2\kappa(W_{g}(X))^{2}\]
where the first step follows from Fact A.4, the second step follows from Eq. (78).
For any unit vector \(x\in\mathbb{R}^{n}\), from the above formulation, we know that
\[\|SW_{g}(X)^{\top}W_{g}(X)S^{\top}Rx\|_{2} = \|U\Sigma V^{\top}V\Sigma U^{\top}U\Sigma^{-2}x\|_{2} \tag{80}\] \[= \|U\Sigma\Sigma\Sigma^{-2}x\|_{2}\] \[= \|Ux\|_{2}\] \[= \|x\|_{2}\] \[= 1,\]
where the first step follows from Eq. (76), the second step follows from \(V^{\top}V=I\) and \(U^{\top}U=I\), the third step follows from simple algebra, the fourth step follows from Fact A.4, and the last step follows from \(\|x\|_{2}=1\).
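The collapse used in Eq. (80) is an exact algebraic identity for any sketch \(S\) such that \(SW_{g}(X)^{\top}\) has full column rank; a short numpy check (ours, with a Gaussian \(S\) for concreteness) makes this explicit.

```python
import numpy as np

rng = np.random.default_rng(0)
s, m, n = 50, 10, 200
W = rng.standard_normal((m, n))               # plays the role of W_g(X)
S = rng.standard_normal((s, n)) / np.sqrt(s)  # any sketch with S W^T of rank m works
U, sig, _ = np.linalg.svd(S @ W.T, full_matrices=False)
R = U * sig**-2                               # R = U Sigma^{-2}, as in line 8

# Eq. (80): S W^T W S^T R collapses to U, so the preconditioned map is an isometry.
print(np.allclose(S @ W.T @ W @ S.T @ R, U))  # True
```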
We need to obtain a bound on \(\|W_{g}(X)^{\top}W_{g}(X)S^{\top}Rx\|_{2}\):
\[\|W_{g}(X)^{\top}W_{g}(X)S^{\top}Rx\|_{2} = (1\pm\epsilon_{0})^{-1}\cdot\|SW_{g}(X)^{\top}W_{g}(X)S^{\top}Rx \|_{2}\] \[= (1\pm\epsilon_{0})^{-1}\] \[= 1\pm 2\epsilon_{0},\]
where the first step follows from \(S\) being an \((n,d,\epsilon_{0},\delta)\)-SE for \(W_{g}(X)^{\top}\) (see Lemma L.2), the second step follows from Eq. (80), and the third step follows from \(\epsilon_{0}\in(0,0.1)\).
Now, pick \(\epsilon_{0}=0.1\) and solve the following regression problem:
\[\min_{z\in\mathbb{R}^{n}}\|\underbrace{R^{\top}}_{m\times s} \underbrace{S}_{s\times n}\underbrace{W_{g}(X)^{\top}}_{n\times m} \underbrace{W_{g}(X)}_{m\times n}\underbrace{S^{\top}}_{n\times s}\underbrace {R}_{s\times m}z-\underbrace{R^{\top}}_{m\times s}\underbrace{S}_{s\times n}y \|_{2}. \tag{81}\]
For convenience, we define \(\Phi\in\mathbb{R}^{m\times m}\) as follows
\[\Phi:=R^{\top}SW_{g}(X)^{\top}W_{g}(X)S^{\top}R\]
Notice that Algorithm 7 implements gradient descent.
Using Lemma D.3, after \(t=\log(1/\widehat{\epsilon})\) iterations, we have
\[\|\Phi\cdot(z_{t}-z^{*})\|_{2}\leq\widehat{\epsilon}\cdot\|\Phi \cdot(z_{0}-z^{*})\|_{2}, \tag{82}\]
where
\[z^{*}=\Phi^{-1}R^{\top}Sy \tag{83}\]
is the optimal solution to Eq. (81).
We define
\[x_{t}:=S^{\top}Rz_{t} \tag{84}\]
We will show the following for \(x_{t}\) (in Eq. (84)):
\[\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y\|_{2}\leq\kappa\widehat{\epsilon }\|y\|_{2}.\]
We get
\[\|R^{\top}SW_{g}(X)^{\top}W_{g}(X)x_{t}-R^{\top}Sy\|_{2} = \|\Phi\cdot z_{t}-R^{\top}Sy\|_{2} \tag{85}\] \[= \|\Phi\cdot(z_{t}-z^{*})\|_{2}\] \[\leq \widehat{\epsilon}\cdot\|\Phi(z_{0}-z^{*})\|_{2}\] \[= \widehat{\epsilon}\cdot\|\Phi z^{*}\|_{2}\] \[= \widehat{\epsilon}\cdot\|R^{\top}Sy\|_{2}\] \[\leq \widehat{\epsilon}\cdot\sigma_{\max}(R^{\top}S)\cdot\|y\|_{2},\]
where the first step follows from the definition of \(\Phi\) and Eq. (84), the second step follows from Eq. (83), the third step follows from Eq. (82), the fourth step follows from \(z_{0}=\mathbf{0}_{m}\), the fifth step follows from the definition of \(z^{*}\), and the last step follows from Fact A.4.
On the other hand,
\[\|R^{\top}SW_{g}(X)^{\top}W_{g}(X)x_{t}-R^{\top}Sy\|_{2} =\|R^{\top}S(W_{g}(X)^{\top}W_{g}(X)x_{t}-y)\|_{2}\] \[\geq\sigma_{\min}(R^{\top}S)\cdot\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y \|_{2}, \tag{86}\]
where the first step follows from simple algebra and the second step follows from Fact A.4.
Putting everything together, we get
\[\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y\|_{2} \leq\widehat{\epsilon}\kappa(R^{\top}S)\|y\|_{2}\] \[\leq 2\kappa(W_{g}(X))^{2}\widehat{\epsilon}\|y\|_{2},\]
where the first step follows from Eq. (85) and Eq. (86), the second step follows from Eq. (79).
This means by setting the number of iterations to
\[t=\log(\kappa(W_{g}(X))/\epsilon),\]
we obtain
\[\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y\|_{2}\leq 2\widehat{ \epsilon}\|y\|_{2}. \tag{87}\]
Now, recall that for any \(x,y\in\mathbb{R}^{n}\), we have,
\[\|W_{g}(X)^{\top}W_{g}(X)x-y\|_{2}\leq(1+\widehat{\epsilon})\|Z ^{\top}Zx-y\|_{2}.\]
As a consequence,
\[\|Z^{\top}Zx_{t}-y\|_{2} \leq(1+\widehat{\epsilon})\|W_{g}(X)^{\top}W_{g}(X)x_{t}-y\|_{2}\] \[\leq(1+\widehat{\epsilon})2\widehat{\epsilon}\|y\|_{2}\] \[\leq\epsilon\|y\|_{2},\]
where the second step follows from Eq. (87), and the third step follows from \(\widehat{\epsilon}=\epsilon/4\) and \(\epsilon\in(0,1)\).
Now we analyze the runtime.
* Computing \(W_{g}(X)\), by Theorem L.8, takes time \[\epsilon^{-2}n\beta\cdot\mathrm{poly}(\log(nd/\epsilon\delta))+ nd\log(nd/\epsilon\delta).\]
* Applying \(S\) to \(W_{g}(X)\), using the FFT algorithm, takes time \[\epsilon^{-2}n\beta\cdot\mathrm{poly}(\log(nd/\epsilon\delta)).\]
* The SVD of \(SW_{g}(X)^{\top}\) can be computed in time \[(\epsilon^{-2}\beta)^{\omega}\cdot\mathrm{poly}(\log(nd/\epsilon\delta))\]
The cost of each iteration is bounded by the cost of taking a matrix-vector product, which is at most \(\widetilde{O}(n\beta/\epsilon^{2})\), and there are \(O(\log{(\kappa/\epsilon)})\) iterations in total. Thus, we obtain a final runtime of
\[\epsilon^{-2}n\beta\cdot\mathrm{poly}(\log(nd/\epsilon\delta)) \cdot\log(\kappa/\epsilon)+(nd+(\epsilon^{-2}\beta)^{\omega})\cdot\log(nd/ \epsilon\delta).\]
### Tensor Tools
In this section, we discuss Algorithm 8 and its properties.
**Definition L.4**.: _Let_
\[S:\mathbb{R}^{m^{2}}\to\mathbb{R}^{m}\]
_and_
\[T:\mathbb{R}^{d}\to\mathbb{R}^{m}\]
_be base sketches._
_Let \(X\in\mathbb{R}^{d\times n}\) be an input matrix._
_We define \(\mathcal{Z}(S,T,X)\) to be the matrix for which we apply Algorithm 8 on each column of \(X\), with base sketches \(S\) and \(T\)._
**Theorem L.5** (Theorem 4.8 in [13]).: _Let_
\[S:\mathbb{R}^{m^{2}}\to\mathbb{R}^{m}\]
_be an \((n,d,\epsilon,\delta,0)\)-SSE (see Definition C.4) for degree-2 tensors and_
\[T:\mathbb{R}^{d}\to\mathbb{R}^{m}\]
_be an \((n,d,\epsilon,\delta)\)-SE._
_Let \(p\) be a positive integer._
_Let \(Z=\mathcal{Z}(S,T,X)\) be the matrix as defined in Definition L.4._
_Then for any \(y\in\mathbb{R}^{n}\), we have_
\[(1-\epsilon)^{3p}\cdot\|X^{\otimes p}y\|_{2}\leq\|Zy\|_{2}\leq(1+\epsilon)^{3 p}\cdot\|X^{\otimes p}y\|_{2}.\]
**Theorem L.6**.: _Let \(p\in\mathbb{Z}_{+}\)._
_Let \(\epsilon,\delta\in(0,1)\)._
_Then for every \(X\in\mathbb{R}^{d\times n}\), there exists a distribution over oblivious linear sketches \(\Pi:\mathbb{R}^{d^{p}}\to\mathbb{R}^{m}\) such that if_
\[m=\Theta(\epsilon^{-2}np^{2}),\]
_we have_
\[(\Pi X^{\otimes p})^{\top}\Pi X^{\otimes p}\approx_{\epsilon}(X^{\otimes p})^{\top}X^{\otimes p}.\]
_Moreover, using Algorithm 8,_
\[\Pi X^{\otimes p}=\mathcal{Z}(S,T,X)\]
_can be computed in time_
\[O(nd+\epsilon^{-2}n^{2}p^{2}).\]
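Algorithm 8, which realizes Theorem L.6, is not shown in this excerpt. As a correctness baseline only (it materializes \(x_{i}^{\otimes p}\) explicitly and so costs \(O(nd^{p})\), and it uses a dense Gaussian \(\Pi\) of our choosing rather than the structured sketch of the theorem), one can write:

```python
import numpy as np
from functools import reduce

def naive_sketch_tensor_power(X, p, m, rng=None):
    """Slow reference computation of Pi X^{(tensor) p} with a dense Gaussian Pi.

    Each column x_i^{(tensor) p} is materialized explicitly, so the cost is
    O(n d^p); this is only a correctness baseline for the fast sketch
    Z(S, T, X) of Algorithm 8, which never forms X^{(tensor) p}.
    """
    rng = np.random.default_rng(rng)
    d, n = X.shape
    Pi = rng.standard_normal((m, d**p)) / np.sqrt(m)
    cols = [reduce(np.kron, [X[:, i]] * p, np.ones(1)) for i in range(n)]
    return Pi @ np.stack(cols, axis=1)           # an (m x n) sketch of the degree-p features
```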
**Definition L.7** (Statistical Dimension, Definition 1 in [1]).: _Given \(\lambda\geq 0\), for every positive semidefinite matrix \(K\in\mathbb{R}^{n\times n}\), we define the \(\lambda\)-statistical dimension of \(K\) to be_
\[s_{\lambda}(K):=\operatorname{tr}[K(K+\lambda I_{n})^{-1}].\]
**Theorem L.8** (Theorem 5 in [1]).: _For every \(r>0\), all positive integers \(n,d\), and every \(X\in\mathbb{R}^{d\times n}\) such that \(\|x_{i}\|_{2}\leq r\) for all \(i\in[n]\), where \(x_{i}\) is the \(i\)-th column of \(X\), suppose \(K\in\mathbb{R}^{n\times n}\) is the attention kernel matrix, i.e.,_
\[K_{j,k}=e^{\langle x_{j},x_{k}\rangle}\]
_for all \(j,k\in[n]\)._
_There exists an algorithm which computes \(W_{g}(X)\in\mathbb{R}^{m\times n}\) in time_
\[\widetilde{O}(q^{3}\epsilon^{-2}n\beta+nd\log(nd/\epsilon\delta))\]
_such that for every \(\epsilon>0\),_
\[\Pr_{W_{g}}[(1-\epsilon)K\preceq(W_{g}(X))^{\top}W_{g}(X)\preceq(1+\epsilon)K ]\geq 1-\frac{1}{\operatorname{poly}(n)},\]
_where_
\[m=\widetilde{\Theta}(q^{3}\beta/\epsilon^{2})\]
_and_
\[q=\Theta(r^{2}+\log(n/\epsilon))\]
_and \(\beta\) is an upper bound on the stable rank of \(K\)._
Proof.: Recall we define the attention kernel as
\[\exp(XX^{\top})\]
for \(X\in\mathbb{R}^{n\times d}\) (see Definition B.2). Define a matrix \(K\in\mathbb{R}^{n\times n}\) such that
\[K_{i,j}=\ \exp(x_{i}^{\top}x_{j})\]
Note that the Taylor series expansion for kernel \(K\) gives
\[K=\sum_{l=0}^{\infty}\frac{(X^{\otimes l})^{\top}X^{\otimes l}}{l!}.\]
Let
\[q=C\cdot(r^{2}+\log(n/\epsilon))\]
for a sufficiently large constant \(C\).
Let
\[Q=\sum_{l=0}^{q}\frac{(X^{\otimes l})^{\top}X^{\otimes l}}{l!}\]
be the first \(q\) terms of \(K\).
By the triangle inequality, we have:
\[\|K-Q\| \leq\;\sum_{l>q}\|\frac{(X^{\otimes l})^{\top}X^{\otimes l}}{l!}\|\] \[\leq\;\sum_{l>q}\|\frac{(X^{\otimes l})^{\top}X^{\otimes l}}{l!}\|_ {F}\] \[\leq\;\sum_{l>q}\frac{n\cdot r^{2l}}{l!}\] \[\leq\frac{\epsilon}{2}\cdot\|K\|,\]
where the first step follows from the triangle inequality and the definition of \(Q\), the second step follows from \(\|A\|\leq\|A\|_{F}\) for any matrix \(A\), the third step follows from upper bounding the Frobenius norm, and the last step follows from the choice of \(q\) and \(\|K\|\leq n\exp(r)\).
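A tiny numerical illustration of this truncation step (our own check, with the constant in the choice of \(q\) set to one): since \((X^{\otimes l})^{\top}X^{\otimes l}\) has entries \(\langle x_{i},x_{j}\rangle^{l}\), the matrix \(Q\) is the entrywise degree-\(q\) Taylor truncation of \(K\), and the tail is already negligible for unit-norm columns.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, d, r, eps = 6, 4, 1.0, 1e-3
X = rng.standard_normal((d, n))
X *= r / np.linalg.norm(X, axis=0)            # enforce ||x_i||_2 = r

G = X.T @ X                                   # matrix of inner products <x_i, x_j>
K = np.exp(G)                                 # attention kernel K_ij = e^{<x_i, x_j>}
q = int(np.ceil(r**2 + np.log(n / eps)))      # truncation order, constant C = 1
Q = sum(G**l / factorial(l) for l in range(q + 1))

# ||K - Q|| is many orders of magnitude below (eps/2) * ||K|| for these parameters.
print(np.linalg.norm(K - Q, 2), 0.5 * eps * np.linalg.norm(K, 2))
```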
For each term \((X^{\otimes l})^{\top}X^{\otimes l}\) in \(Q\), we run Algorithm 8 to approximate \(X^{\otimes l}\).
Let \(Z_{l}\in\mathbb{R}^{m_{l}\times n}\) be the resulting matrix \(\mathcal{Z}(S,T,X)\), where
\[m_{l}=\Omega(\epsilon^{-2}\beta l^{2}\log^{2}(nd/\epsilon\delta)\log(n/\delta)).\]
Then by Theorem L.6, we get
\[(1-\epsilon/2)(X^{\otimes l})^{\top}X^{\otimes l}\preceq(\Pi^{l}X^{\otimes l })^{\top}\Pi^{l}X^{\otimes l}\preceq(1+\epsilon/2)(X^{\otimes l})^{\top}X^{ \otimes l} \tag{88}\]
with probability at least \(1-\frac{\delta}{q+1}\).
Moreover, \(Z_{l}\) can be computed in time
\[O(\epsilon^{-2}n\beta l^{2}\cdot\log^{2}(nd/\epsilon\delta)\cdot\log(\frac{n} {\delta})).\]
Our algorithm will simply compute \(Z_{l}\) for \(l=0\) to \(q\) and normalize each \(Z_{l}\) by \(\frac{1}{\sqrt{l!}}\).
More precisely, the approximation \(W_{g}(X)\) will be
\[W_{g}(X)=(\oplus_{l=0}^{q}\frac{Z_{l}}{\sqrt{l!}}).\]
Notice that \(W_{g}(X)\in\mathbb{R}^{m\times n}\).
The following holds for \(W_{g}(X)^{\top}W_{g}(X)\):
\[W_{g}(X)^{\top}W_{g}(X)=\,\sum_{l=0}^{q}\frac{Z_{l}^{\top}Z_{l}}{l!}.\]
By combining terms in Eq. (88) and using a union bound over all \(0\leq l\leq q\), we obtain that with probability at least \(1-\delta\), we have the following:
\[(1-\epsilon/2)\cdot Q\preceq W_{g}(X)^{\top}W_{g}(X)\preceq(1+\epsilon/2) \cdot Q.\]
Thus, we conclude that
\[(1-\epsilon)\cdot K\preceq W_{g}(X)^{\top}W_{g}(X)\preceq(1+\epsilon)\cdot K.\]
Note the target dimension of \(W_{g}\) is
\[m = \sum_{l=0}^{q}m_{l}\] \[= \Omega(\epsilon^{-2}\beta q^{3}\cdot\log^{2}(nd/\epsilon\delta)\cdot\log(n/\delta)),\]
where the first step follows from the construction of \(W_{g}(X)\) and the second step follows from simple algebra.
Also, by Theorem L.6, the time to compute \(W_{g}(X)\) is
\[t = \sum_{j=0}^{q}t_{j}\] \[= O(\epsilon^{-2}n\beta q^{3}\cdot\log^{2}(nd/\epsilon\delta) \cdot\log(n/\delta)).\]
Notice that we also have to add the term \(nd\log(nd/\epsilon\delta)\) coming from line 2 of Algorithm 8, where the SRHT is applied to \(X\). However, this operation only needs to be performed once, for the term with the highest degree; the lower-degree terms can be formed by combining nodes already computed for the highest degree. Therefore, the final runtime is:
\[O(\epsilon^{-2}n\beta q^{3}\cdot\log^{2}(nd/\epsilon\delta)\cdot\log(n/\delta )+nd\log(nd/\epsilon\delta)).\]
|
2310.17447 | Holographic Weyl Anomalies for 4d Defects in 6d SCFTs | In this note, we study $1/4$- and $1/2$-BPS co-dimension two superconformal
defects in the $6d$ $\mathcal{N}=(2,0)$ $A_{N-1}$ SCFT at large $N$ using their
holographic descriptions as solutions of $11d$ supergravity. In this regime, we
are able to compute the defect contribution to the sphere entanglement entropy
and the change in the stress-energy tensor one-point function due to the
presence of the defect using holography. From these quantities, we are then
able to unambiguously compute the values for two of the twenty-nine total Weyl
anomaly coefficients that characterize $4d$ conformal defects in six and higher
dimensions. We are able to demonstrate the consistency of the supergravity
description of the defect theories with the average null energy condition on
the field theory side. For each class of defects that we consider, we also show
that the A-type Weyl anomaly coefficient is non-negative. Lastly, we uncover
and resolve a discrepancy between the on-shell action of the $7d$ $1/4$-BPS
domain wall solutions and that of their $11d$ uplift. | Pietro Capuozzo, John Estes, Brandon Robinson, Benjamin Suzzoni | 2023-10-26T14:57:01Z | http://arxiv.org/abs/2310.17447v3 | # Holographic Weyl Anomalies for 4d Defects in 6d SCFTs
###### Abstract
In this note, we study \(1/4\)- and \(1/2\)-BPS co-dimension two superconformal defects in the \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) SCFT at large \(N\) using their holographic descriptions as solutions of \(11d\) supergravity. In this regime, we are able to compute the defect contribution to the sphere entanglement entropy and the change in the stress-energy tensor one-point function due to the presence of the defect using holography. From these quantities, we are then able to unambiguously compute the values for two of the twenty-nine total Weyl anomaly coefficients that characterize \(4d\) conformal defects in six and higher dimensions. We are able to demonstrate the consistency of the supergravity description of the defect theories with the average null energy condition on the field theory side. For each class of defects that we consider, we also show that the A-type Weyl anomaly coefficient is non-negative.
## 1 Introduction
Knowing the spectrum of local operators in a given quantum field theory (QFT) is insufficient to uniquely specify it in field theory space [1], and so operators with non-trivial extension along submanifolds embedded in the background spacetime ('defects') play an important role in classifying QFTs [2]. However, the way that the presence of these defects affects, say, correlation functions of local operators depends on the dimension \(d\) and geometry of the background manifold \(\mathcal{M}_{d}\), the co-dimension \(d-\mathfrak{d}\) and embedding of the
\(\mathfrak{d}\)-dimensional defect submanifold \(\Sigma_{\mathfrak{d}}\), and the couplings between ambient and defect degrees of freedom1. Thus, it is crucial to characterize allowable defects in a given theory and precisely determine how ambient physical observables change under the deformation by defect operators.
Footnote 1: See [3] for a recent review of defects of various (co-)dimension in QFTs.
In this effort, some of the most powerful tools that we have come from imposing symmetries on both the ambient and defect theories. The ambient field theories we consider are \(6d\), supersymmetric, and invariant under \(6d\) flat-space conformal symmetry \(SO(6,2)\); superconformal field theories (SCFTs). The defects that we study in this work are supported on embedded co-dimension 2 submanifolds, \(\Sigma_{4}\hookrightarrow\mathcal{M}_{6}\), that will preserve at least \(1/4\) of the total supersymmetries, i.e. \(\mathcal{N}\geq 1\)\(4d\) supersymmetry, as well as an \(SO(4,2)\times U(1)_{N}\subset SO(6,2)\) global symmetry representing the defect conformal symmetry and \(U(1)_{N}\) rotations in \(\mathcal{M}_{6}/\Sigma_{4}\). We will refer to these theories as defect [super]conformal field theories (D[S]CFTs).
In the following, we will focus on ambient theories that are maximally superconformal \(\mathcal{N}=(2,0)\) SCFTs with gauge algebra \(A_{N-1}\) in the large \(N\) limit and the \(1/4\)- and \(1/2\)-BPS co-dimension 2 defects that they support. Despite the highly restrictive symmetries imposed, \(6d\)\(\mathcal{N}=(2,0)\) SCFTs and their defect operators pose a challenge to direct study. We know from the worldvolume theory of a stack of coincident M5-branes [4] or M5-branes probing \(ADE\) singularities [5] that \(6d\)\(\mathcal{N}\geq(1,0)\) SCFTs exist, but generally they have no known Lagrangian description. We also know that \(6d\) SCFTs constructed from M-theory support \(4d\) BPS defect operators engineered at the intersection of orthogonal stacks of M5 branes. Since we often lack a Lagrangian description, our efforts to characterize these \(\mathfrak{d}=4\) DSCFTs are limited to analyzing their global properties using techniques such as anomaly inflow (e.g. [6]) and chiral algebra methods [7]. That said, there is a tremendous amount that we can learn about the defect theory by studying its conformal anomalies.
As with any systems preserving an \(SO(d,2)\) global conformal symmetry, putting the ambient theory on a curved \(\mathcal{M}_{d}\) results in a non-trivial Weyl anomaly. Crucial to our understanding of DCFTs, the theory supported on \(\Sigma_{\mathfrak{d}}\hookrightarrow\mathcal{M}_{d}\) has its own defect-localized contributions to the total Weyl anomaly that are sensitive to both the intrinsic submanifold geometry and its embedding in the ambient space. The resulting defect Weyl anomaly can be far more complicated than that of an ordinary \(\mathfrak{d}\)-dimensional theory. For example, it is common knowledge that the Weyl anomaly in \(d=4\) is a combination of an 'A-type' anomaly \(\sim aE_{4}\), where \(E_{4}\) is the \(4d\) Euler density, and a 'B-type' anomaly \(\sim c|W|^{2}\) with \(W_{\mu\nu\rho\sigma}\) denoting the Weyl-tensor [8]. On the other hand, it was recently discovered in [9] that the Weyl anomaly of a \(\mathfrak{d}=4\) defect in an ambient theory with \(d\geq 6\) has a total of 29 terms2.
Footnote 2: These 29 terms include 6 terms that break parity on the defect submanifold. The limit case of a co-dimension 1 defect in \(5d\) has 12 (including 3 parity odd) terms in the Weyl anomaly [9; 10].
The challenge thus far has been finding tractable, non-trivial \(\mathfrak{d}=4\) defect systems beyond free theories (e.g. [11]) in which any of the 29 available defect Weyl anomalies can be computed3. In light of recently discovered \(11d\) supergravity (SUGRA) solutions that
holographically describe certain \(\mathfrak{d}=4\) BPS defects in \(6d\) SCFTs [13; 14], we have a window on strongly coupled, non-Lagrangian defect systems that can be approached with standard tools in holography to compute quantities known to be controlled by defect anomalies.
We also discuss the details of the regulating scheme for the on-shell action, including the vacuum solution that we use in background subtraction, as well as the renormalized volume of the AdS\({}_{5}\) geometry.
## 2 Review
In this section, we will very briefly review some key background material in order to orient the subsequent computations. In the first subsection, we will introduce the two defect Weyl anomalies and discuss the physical quantities that they control, which will be the focus of the computations to follow. In the second subsection, we will give a short overview of the two solutions to \(11d\) SUGRA that will be the focus of our holographic study.
### Defect Weyl anomalies
Up to a total derivative, the Weyl anomaly of an ordinary \(4d\) CFT has two independent contributions4,
Footnote 4: This basis is not unique, and one can exchange either \(E_{4}\) or \(W^{2}\) for Branson’s \(Q\)-curvature [32] and a total derivative, which gives a basis for the 4d Weyl anomaly that is particularly convenient for holography.
\[T^{\mu}{}_{\mu}=\frac{1}{4\pi^{2}}(-a_{4d}E_{4}+c|W|^{2}). \tag{1}\]
The first term proportional to the Euler density \(E_{4}\) is the so-called "A-type" anomaly in the classification of [8], which exists in all even-dimensional CFTs and is unique in that it transforms as a total derivative under Weyl transformations. The second term given by the square of the Weyl tensor is a "B-type" anomaly. In arbitrary even-dimensional CFTs, there is generally a tower of B-type anomalies each of which is exactly Weyl invariant and built out of non-topological, rank-\(\frac{d}{2}\) monomials in curvatures. The Weyl anomaly coefficients of a \(4d\) CFT control correlation functions of the stress-energy tensor [33], and have strong upper and lower bounds on their ratio [34]; \(a_{4d}\) also appears in the EE [35], and obeys an '\(a\)'-theorem under renormalization group (RG) flows [36; 37]. For \(4d\) SCFTs with an R-symmetry, \(a_{4d}\) and \(c\) are both related to the cubic and mixed R-anomalies through non-perturbative formulae [38].
The Weyl anomaly of a conformal defect supported on \(\Sigma_{\mathfrak{d}}\hookrightarrow\mathcal{M}_{d}\) is much richer due to the additional freedom of building submanifold conformal invariants out of not only the intrinsic curvature but also the normal bundle curvature, the pullback of curvature tensors from the ambient space, and the second fundamental form for the embedding. For conformal defects on \(\Sigma_{4}\hookrightarrow\mathcal{M}_{d}\) of co-dimension 2 or greater5, there are a total of 23 anomalies respecting submanifold parity [9]6. The complete form of the \(4d\) defect Weyl anomaly is cumbersome, and so we will only display the parts relevant to the computations in the following sections (see eq. 3.1 of [9] for the full expression):
Footnote 5: The limit case of co-dimension one is far more restricted and only leads to 9 parity even anomalies [9; 10].
Footnote 6: There are an additional 6 parity odd defect Weyl anomalies, but as of yet, there are neither any known physical quantities in which they appear nor any no-go theorem to forbid them.
\[T^{\mu}{}_{\mu}\big{|}_{\Sigma_{4}}\supset\frac{1}{(4\pi)^{2}}\Big{(}-a_{\Sigma}\overline{E}_{4}+d_{2}\mathcal{J}_{2}+\dots\Big{)}\,. \tag{2}\]
The first term is recognizable as the defect A-type anomaly proportional to the _intrinsic_ Euler density, \(\overline{E}_{4}\), of \(\Sigma_{4}\). The second term \({\cal J}_{2}\) is a B-type anomaly built out of a complicated linear combination of the submanifold pullback of the ambient curvatures, connection on the normal bundle, normal bundle curvature, and the second fundamental form for the embedding (see eq. 3.2 of [9] for the full expression). Importantly, \({\cal J}_{2}\) does not contain a term like the pullback of \(|W|^{2}\) or the square of the intrinsic Weyl tensor, and so is not analogous to the B-type anomaly of a standalone \(4d\) CFT above.
While it is unclear what physics the vast majority of terms in the full expression of the defect Weyl anomaly control, the two anomalies displayed above appear in two physical quantities that will be the primary focus of the following work.
The first quantity we will analyze is the one-point function of the stress-energy tensor. For a \(\mathfrak{d}\)-dimensional conformal defect embedded in a \(d\)-dimensional CFT, conformal symmetry preserved by the defect constrains the form of the one-point function of the stress-energy tensor a distance \(x_{\perp}\) away from the defect to be of the form [39; 40]
\[\langle T^{ab}\rangle=-h_{T}\frac{(d-\mathfrak{d}-1)\delta^{ab}}{|x_{\perp}|^ {d}}\,,\qquad\langle T^{ij}\rangle=h_{T}\frac{(\mathfrak{d}+1)\delta^{ij}-d \frac{x_{\perp}^{i}x_{\perp}^{j}}{|x_{\perp}|^{2}}}{|x_{\perp}|^{d}}\,, \tag{3}\]
where \(a,b\) index directions parallel to the defect and \(i,j\) label directions normal to the defect. By starting from the defect geometry \(\Sigma_{4}=\mathbb{R}^{4}\hookrightarrow\mathbb{R}^{d}\) and then finding the totally transverse log divergent parts of the effective action in the presence of a linear ambient metric perturbation [9; 41], it can be shown that the normalization of the stress-energy tensor one-point function is determined by
\[h_{T}=-\frac{\Gamma\left(\frac{d}{2}-1\right)}{\pi^{\frac{d}{2}}\left(d-1 \right)}d_{2}\,. \tag{4}\]
In the case that we are particularly interested in for the following work, i.e. \(d=6\),
\[h_{T}=-\frac{1}{5\pi^{3}}d_{2}\,. \tag{5}\]
There is a constraint on the sign of \(d_{2}\) that follows from the assumption that the average null energy condition (ANEC) holds in the presence of a defect. That is, the statement of the ANEC is that for any state \(\,|\Psi\rangle\) of a QFT, the expectation value of the stress-energy tensor projected along a null direction \(v^{\mu}\) in that state satisfies
\[\int_{-\infty}^{\infty}d\lambda\,\,\,\langle\Psi|T_{\mu\nu}|\Psi\rangle\,v^{ \mu}v^{\nu}\geq 0, \tag{6}\]
where \(\lambda\) parametrizes the null geodesic. From eq. (4), we see that by taking the ambient theory to be a CFT, \(\,|\Psi\rangle\) to be the vacuum state of the theory deformed by a defect, and the null ray \(v^{\mu}\) to be parallel to the defect at a transverse separation \(x_{\perp}\), the ANEC requires \(h_{T}\geq 0\), which implies \(d_{2}\leq 0\) [9; 30]7.
The other physical quantity controlled by defect Weyl anomalies that we will study below is the contribution to the EE of a spherical region of size \(R\) centered on \(\Sigma_{4}=\mathbb{R}^{1,3}\hookrightarrow\mathbb{R}^{1,d-1}\). Following the same logic that formed the basis of the proof for \(2d\) defects [30; 44], it was shown in [9] that for a \(4d\) defect of co-dimension \(d-4\), the coefficient of the universal, i.e. the log divergent, part of the defect EE is
\[S_{\rm EE}[\Sigma]\Big{|}_{\rm log}=-4\left[\,a_{\Sigma}+\frac{1}{4}\frac{(d- 4)(d-5)}{d-1}\,d_{2}\right]\log\left(\frac{R}{\epsilon}\right), \tag{7}\]
where \(\epsilon\ll R\) is a UV cutoff scale and \(\big{|}_{\rm log}\) denotes dropping the leading non-universal divergences as well as the trailing scheme dependent terms.
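Specializing eq. (7) to our case of interest, a co-dimension 2 defect in \(d=6\), fixes the numerical combination that appears below:
\[\frac{(d-4)(d-5)}{4(d-1)}\bigg{|}_{d=6}=\frac{2\cdot 1}{4\cdot 5}=\frac{1}{10}\qquad\Longrightarrow\qquad S_{\rm EE}[\Sigma]\Big{|}_{\rm log}=-\left(4\,a_{\Sigma}+\frac{2}{5}\,d_{2}\right)\log\left(\frac{R}{\epsilon}\right).\]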
For a conformal defect on \(\Sigma_{4}\), we will use a background subtraction scheme to isolate the defect contribution to the EE. That is, our computations below will use
\[4a_{\Sigma}+\frac{2}{5}d_{2}=-R\partial_{R}\left(S_{\rm EE}[\Sigma]-S_{\rm EE }[\emptyset]\right)|_{R\to 0}, \tag{8}\]
where \(S_{\rm EE}[\emptyset]\) is the EE computed without the defect, i.e. the EE of the vacuum of the \(6d\) ambient theory. Thus, combining the computation of \(d_{2}\) from \(\Delta\left\langle T_{ij}\right\rangle\) with the result of eq. (8), we can compute the defect A-type anomaly unambiguously.
Unlike \(d_{2}\), however, there is no constraint on the sign of \(a_{\Sigma}\). Indeed, in the simple case of a free scalar on a \(5d\) manifold with a boundary, \(a_{\Sigma}>0\) for Neumann (Robin) boundary conditions, while \(a_{\Sigma}<0\) for Dirichlet [10]8.
Footnote 8: Note we are using the conventions for the definition of the 4d defect A-type anomaly \(a_{\Sigma}\) as in [9], which differs from the defect A-type anomaly, \(a\), in [10] by \(a_{\Sigma}\leftrightarrow-a/5760\).
### 11d SUGRA solutions
#### Two-charge solutions
We now briefly review the domain wall solutions in \(7d\)\(\mathcal{N}=4\) gauged SUGRA found in [13] and uplifted to \(11d\) in [14]. The bosonic \(7d\) gauged SUGRA action built from the metric \(g\), two scalars \(\Phi_{1,2}\) and two \(U(1)\) gauge fields \(A_{1,2}\) takes the following form:
\[S=-\frac{1}{16\pi G_{N}^{(7)}}\int d^{7}x\sqrt{|g|}\left(\mathcal{R}-\frac{1} {2}|\partial_{\mu}\Phi_{I}|^{2}-\hat{g}^{2}V(\Phi)-\frac{1}{4}\sum_{I=1}^{2}e ^{\vec{a}_{I}\vec{\Phi}}F_{I}^{2}\right). \tag{9}\]
Using \(\vec{a}_{1}=(\sqrt{2},\sqrt{2/5})\), \(\vec{a}_{2}=(-\sqrt{2},\sqrt{2/5})\), the potential is given by
\[V=-4e^{-\frac{1}{2}(\vec{a}_{1}+\vec{a}_{2})\vec{\Phi}}-2\left(e^{\frac{1}{2} (\vec{a}_{1}+2\vec{a}_{2})\vec{\Phi}}+e^{\frac{1}{2}(2\vec{a}_{1}+\vec{a}_{2}) \vec{\Phi}}\right)+\frac{1}{2}e^{2(\vec{a}_{1}+\vec{a}_{2})\vec{\Phi}}. \tag{10}\]
The domain wall solution to eq. (9) describing the double analytic continuation of a charged black hole is given by
\[ds_{7}^{2}=(yP(y))^{\frac{1}{5}}ds_{\rm AdS_{5}}^{2}+\frac{y(yP(y))^{\frac{1}{ 5}}}{4Q(y)}dy^{2}+\frac{yQ(y)}{(yP(y))^{\frac{4}{5}}}dz^{2}, \tag{11}\]
where the polynomials \(P,\,Q\) are given by
\[P(y) =H_{1}(y)H_{2}(y), \tag{12a}\] \[Q(y) =-y^{3}+\mu y^{2}+\frac{\hat{g}^{2}}{4}P(y), \tag{12b}\]
where \(H_{I}(y)=y^{2}+q_{I}\), \(I\in\{1,2\}\). The gauge fields in this solution9 are given by
Footnote 9: Note that, in general, the action in eq. (9) does not qualify as a consistent truncation of \(11d\) supergravity. The \(7d\) solutions considered here, however, are characterized by \(F_{1}\wedge F_{2}=0\); this guarantees that their uplift produces consistent solutions of the \(11d\) theory [45].
\[A_{I}=\left(\sqrt{1-\frac{\mu}{q_{I}}}\frac{q_{I}}{H_{I}(y)}+a_{I}\right)dz. \tag{13}\]
In order to find BPS solutions, SUSY forces \(\mu=0\). For both \(q_{I}\neq 0\), the solutions are \(1/4\)-BPS, while setting one charge, say \(q_{2}\), to zero allows for \(1/2\)-BPS solutions. In the following, we will refer to the former \(1/4\)-BPS cases as 'two-charge solutions' and the latter \(1/2\)-BPS cases as 'one-charge solutions'. The coordinate \(y\) ranges from \(y_{+}\), the largest root of \(Q(y)\), to infinity. To have a smooth geometry one can choose the gauge so that \(A_{I}(y_{+})=0\) by appropriate choice of the \(a_{I}\). Setting \(\hat{g}=2\), the \(\text{AdS}_{5}\times\text{S}^{1}\) geometry does not have a conical deficit provided \(z\in[0,2\pi)\) (this will be assumed in the uplift to \(11d\)). At \(y=y_{+}\), the geometry either has a smooth cap or a conical deficit \(2\pi\frac{\hat{n}-1}{\hat{n}}\) with \(\hat{n}\) related to \(y_{+}\) by the constraint \(\hat{n}\,Q^{\prime}(y_{+})=y_{+}^{2}\).
The conditions \(Q(y_{+})=0\) and \(\hat{n}\,Q^{\prime}(y_{+})=y_{+}^{2}\) can be solved to determine \(q_{1}\) and \(q_{2}\) in terms of \(\hat{n}\) and \(y_{+}\) as follows
\[q_{I}=y_{+}\left(\frac{3\hat{n}+1}{\hat{n}\hat{g}^{2}}-y_{+}\pm\frac{2}{\hat{ g}}\sqrt{\frac{(1+3\hat{n})^{2}}{4\hat{g}^{2}\hat{n}^{2}}-y_{+}}\right), \tag{14}\]
where \(q_{1}\) and \(q_{2}\) are chosen with opposite signs for the square root. This has real solutions provided \(0\leq y_{+}\leq y_{+,\text{max}}\) with \(y_{+,\text{max}}=(1+3\hat{n})^{2}/4\hat{g}^{2}\hat{n}^{2}\). It will be useful later to notice that the sum \(q_{1}+q_{2}\) is always non-negative as is evident from
\[\frac{q_{1}+q_{2}}{2y_{+}}=\left(\frac{3\hat{n}+1}{\hat{n}\hat{g}^{2}}-y_{+} \right)\geq\left(\frac{3\hat{n}+1}{\hat{n}\hat{g}^{2}}-y_{+,\text{max}}\right) =\frac{(\hat{n}-1)(3\hat{n}+1)}{4\hat{g}^{2}\hat{n}^{2}}\geq 0. \tag{15}\]
Uplifting to \(11d\), the metric for the two-charge \(1/4\)-BPS solutions can be written schematically as
\[ds_{11}^{2}=\hat{f}_{\text{AdS}}^{2}ds_{\text{AdS}_{5}}^{2}+\hat{f}_{y}^{2}dy^{2}+\hat{f}_{z}^{2}dz^{2}+\hat{f}_{\phi_{i}}^{2}d\phi_{i}^{2}+\hat{f}_{z\phi_{i}}^{2}dzd\phi_{i}+\hat{f}_{\psi}^{2}d\psi^{2}+\hat{f}_{\zeta}^{2}d\zeta^{2}+\hat{f}_{\psi\zeta}d\psi d\zeta, \tag{16}\]
where each of the \(\hat{f}\)'s displayed in eq. (10) is a function of the \(y\), \(\psi\), and \(\zeta\) coordinates and also depends on the \(q_{I}\)'s and \(a_{I}\)'s. Note that in eq. (10), we have introduced the slightly abusive shorthand
\[\sin x\equiv s_{x},\qquad\cos x\equiv c_{x}, \tag{17}\]
in order to compactly express some of the more cumbersome expressions, and we will adopt this notation throughout the following sections. Continuing on, the uplifted four-form field strength can be inferred from
\[\frac{\star_{11}F_{4}}{\kappa^{2}}= -2(\hat{H}(X_{0}+2(X_{1}+X_{2}))-2X_{0}^{2}+2(X_{0}^{2}-X_{1}^{2}) s_{\zeta}^{2}+2(X_{0}^{2}-X_{2}^{2})c_{\psi}^{2}c_{\zeta}^{2})\Upsilon_{7} \tag{18}\] \[+\frac{c_{\zeta}^{2}c_{\psi}s_{\psi}}{2X_{0}X_{2}}(X_{2}\star_{7} dX_{0}-X_{0}\star_{7}dX_{2})\wedge d\psi+\frac{c_{\zeta}s_{\zeta}}{2X_{1}}\star_{7} dX_{1}\wedge d\zeta\] \[-\frac{c_{\zeta}s_{\zeta}}{2X_{0}X_{2}}(X_{2}s_{\psi}^{2}\star_{7} dX_{0}+X_{0}c_{\psi}^{2}\star_{7}dX_{2})\wedge d\zeta+\frac{c_{\zeta}s_{\zeta}}{4X_{1} ^{2}}d\zeta\wedge(d\phi_{1}+2A_{1})\wedge\star_{7}dA_{1}\] \[-\frac{c_{\zeta}c_{\psi}}{4X_{2}^{2}}(c_{\zeta}s_{\psi}d\psi+s_{ \zeta}c_{\psi}d\zeta)\wedge(d\phi_{2}+2A_{2})\wedge\star_{7}dA_{2}\]
where we have set \(\hat{g}=2\), and where \(\Upsilon_{7}\) is the \(7d\) volume form. We also defined
\[X_{1}=\frac{(yH_{2}(y))^{\frac{2}{5}}}{H_{1}(y)^{\frac{3}{5}}}, \quad X_{2}=\frac{(yH_{1}(y))^{\frac{2}{5}}}{H_{2}(y)^{\frac{3}{5}}},\quad X_{ 0}=(X_{1}X_{2})^{-2} \tag{19}\]
as well as
\[\hat{H}=\frac{X_{2}(H_{2}-q_{2}c_{\psi}^{2})c_{\zeta}^{2}}{y^{2}}+X_{1}s_{ \zeta}^{2}. \tag{20}\]
#### Electrostatic solutions
In this subsection, we review the construction of an infinite class of 'bubbling' solutions to \(11d\) SUGRA with \(\text{AdS}_{5}\times\mathbb{S}^{1}\) boundary geometries that holographically describe \(1/2\)-BPS co-dimension \(2\) defects in \(6d\) SCFTs [14]. There is a long history of \(\text{AdS}_{5}\) compactifications in \(11d\) SUGRA and M-theory holographically dual to \(4d\)\(\mathcal{N}=2\) SCFTs, e.g. [15; 16; 46; 47]. The class into which the solutions of [13; 14] are embedded are a particular type of Lin-Lunin-Maldacena (LLM) 'bubbling' geometries [15; 16].
Recall that the general LLM solution consists of an \(11d\) geometry with a warped product \(\text{AdS}_{5}\times\mathbb{S}^{2}\) over \(\mathcal{M}_{4}\) realized as a \(U(1)_{\chi}\)-fibration over a \(3d\) base space \(\mathcal{B}_{3}\) supported by four-form flux. The data that specifies the solution is encoded in a function that satisfies a non-linear Toda equation on \(\mathcal{B}_{3}\), which is generically difficult to solve. However, by imposing that \(\mathcal{B}_{3}\) has an additional \(U(1)_{\beta}\) isometry, the Toda equation can be cast in an axi-symmetric form that can be solved more easily. Further facilitating finding general solutions to the axi-symmetric Toda equation, one can perform a Backlund transformation to map to a Laplace-type equation on \(\mathbb{R}^{3}\), and so the problem is turned into an 'electrostatic' one [6; 14; 48; 49; 50; 18]. Hence, the class of bubbling geometries reviewed below will be referred to as 'electrostatic solutions' in the following sections.
In the formulation as a Laplace-type equation, finding a solution to the SUGRA equations of motion amounts to specifying a linear charge density \(\varpi\) which determines the electrostatic potential \(V\). Exploiting the axial symmetry of the problem on \(\mathcal{B}_{3}\), we take \(\varpi=\varpi(\eta)\) to be aligned along the \(\eta\)-axis, i.e. the fixed point of the \(U(1)_{\beta}\) rotations. The
bosonic sector of these solutions takes the form
\[ds_{11}^{2} =\kappa_{11}^{\frac{2}{3}}\left(\frac{\dot{V}\sigma}{2V^{\prime \prime}}\right)^{\frac{1}{3}}\left(4ds_{\text{AdS}_{5}}^{2}+\frac{2V^{\prime \prime}\dot{V}}{\sigma}d\Omega_{2}^{2}+\frac{2(2\dot{V}-\ddot{V})}{\dot{V} \sigma}\Big{(}d\beta+\frac{2\dot{V}\dot{V}^{\prime}}{2\dot{V}-\ddot{V}}d\chi \Big{)}^{2}\right. \tag{21a}\] \[\qquad\qquad\qquad\qquad\qquad+\frac{2V^{\prime\prime}}{\dot{V}} \Big{(}dr^{2}+\frac{2\dot{V}}{2\dot{V}-\ddot{V}}r^{2}d\chi^{2}+d\eta^{2}\Big{)} \Bigg{)}\] \[\equiv f_{\text{AdS}}^{2}ds_{\text{AdS}_{5}}^{2}+f_{\text{S}^{2}}d \Omega_{2}^{2}+f_{\beta}^{2}d\beta^{2}+f_{\chi}^{2}d\chi^{2}+f_{\beta\chi}^{2} d\beta d\chi+f_{3}^{2}(dr^{2}+d\eta^{2})\,\] \[C_{3} =\frac{2\kappa_{11}}{\sigma}\left(\left(\dot{V}\dot{V}^{\prime}- \sigma\eta\right)d\beta-2\dot{V}^{2}V^{\prime\prime}d\chi\right)\wedge\Upsilon _{\text{S}^{2}}\, \tag{21b}\]
where we have adopted the notation where \(\Upsilon_{\mathcal{M}}\!:=\!\sqrt{|g_{\mathcal{M}}|}dx^{1}\wedge\ldots\wedge dx ^{d}\) is the volume form on a \(d\)-dimensional manifold \(\mathcal{M}\). In this notation, the coordinates \(\{r,\eta,\beta\}\) span \(\mathcal{B}_{3}\), \(\kappa_{11}=\pi\ell_{P}^{3}/2\), and
\[V^{\prime}\equiv\partial_{\eta}V,\qquad\dot{V}\equiv r\partial_{r}(V),\qquad \sigma\equiv V^{\prime\prime}(2\dot{V}-\ddot{V})+(\dot{V}^{\prime})^{2}. \tag{22}\]
In this background, away from sources, the electrostatic potential \(V(r,\eta)\) satisfies
\[\ddot{V}(r,\eta)+r^{2}V^{\prime\prime}(r,\eta)=0, \tag{23}\]
subject to the boundary condition \(\left.\partial_{r}V\right|_{\eta=0}=0\). Exploiting the \(U(1)_{\beta}\) isometry imposed on \(\mathcal{B}_{3}\), the line charge distribution \(\varpi(\eta)\) specifying the solution is related to the Laplace potential \(V\) by
\[\varpi(\eta)=\lim_{r\to 0^{+}}\dot{V}(r,\eta). \tag{24}\]
Given an appropriate \(\varpi(\eta)\), the solution to eq. (23) can be expressed in terms of a Green's function, \(G(r,\eta,\eta^{\prime})\), as
\[V(r,\eta)=-\frac{1}{2}\int d\eta^{\prime}G(r,\eta,\eta^{\prime})\varpi(\eta^{ \prime}). \tag{25}\]
By the symmetry of the problem, the Green's function can be written simply using the method of images as [14, 18]
\[G(r,\eta,\eta^{\prime})=\frac{1}{\sqrt{r^{2}+(\eta-\eta^{\prime})^{2}}}-\frac{ 1}{\sqrt{r^{2}+(\eta+\eta^{\prime})^{2}}}. \tag{26}\]
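As a quick consistency check (a sympy sketch of our own, not part of [14]), one can verify that the Green's function of eq. (26) solves eq. (23) away from the source, and that \(G\) vanishes on the \(\eta=0\) plane, which enforces the stated boundary condition:

```python
import sympy as sp

r, eta, etap = sp.symbols('r eta etap', positive=True)
G = 1/sp.sqrt(r**2 + (eta - etap)**2) - 1/sp.sqrt(r**2 + (eta + etap)**2)

dot = lambda f: r * sp.diff(f, r)                 # the "dot" derivative of eq. (22)
toda = dot(dot(G)) + r**2 * sp.diff(G, eta, 2)    # left-hand side of eq. (23)

# eq. (23) holds away from the source (numerically zero at a generic point) ...
print(abs(sp.lambdify((r, eta, etap), toda)(1.3, 0.7, 2.1)) < 1e-12)   # True
# ... and G vanishes at eta = 0, so d_r V|_{eta=0} = 0 is automatic.
print(sp.simplify(G.subs(eta, 0)))                                     # 0
```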
The complete description of the solution to the \(11d\) SUGRA field equations is thus given by finding a \(\varpi(\eta)\) that obeys a set of necessary conditions.
For a generic \(\varpi(\eta)\), the constraints that follow from charge conservation and regularity (modulo \(A_{k}\) singularities on \(\mathcal{M}_{4}\)) of the full \(11d\) geometry were given in [47]. Satisfying these constraints determines the profile of \(\varpi(\eta)\) to be a continuous, convex piecewise linear function of \(\eta\) with integer slope, whose slope decreases by integer values at discrete \(\eta_{a}\). In general, the boundary conditions and symmetry imposed on \(V\) in solving eq. (23) require \(\varpi(0)=0\). However, there are generally two cases for the behavior of \(\varpi\) as \(\eta\) increases.
In the first case, apart from the zero at the origin, \(\varpi\) has a zero at some value \(\eta=\eta_{c}>0\) where the internal space closes off. The geometry of the \(11d\) SUGRA solution
is then a warped product of AdS\({}_{5}\) over the compact internal space \(\mathcal{M}_{6}=\mathcal{C}_{\mathbf{g}}\times\mathcal{M}_{4}\), and holographically describes a 4d theory that descends from the compactification of a 6\(d\) SCFT on a Riemann surface \(\mathcal{C}_{\mathbf{g}}\). The generic charge distribution is decomposed into \(n+1\)'regular' intervals with positive slope and an 'irregular' interval \([\eta_{n},\eta_{c}]\) with negative slope fixed by ratios of four-form flux. The data associated with the kinks between the regular parts of the charge distribution, namely a partition of \(N\), label a regular puncture on \(\mathcal{C}_{\mathbf{g}}\), while the data specifying the slope of the irregular interval is mapped to an irregular puncture [18]. This construction - reminiscent of other spindle compactifications engineering \(4d\) SCFTs [51; 52; 53; 54; 55; 56; 57; 58] - was argued in [18] to be the SUGRA dual to class-\(\mathcal{S}\) constructions [19] of certain classes of Argyres-Douglas theories [59] by analyzing anomalies and counting of Coulomb and Higgs branch operators in the field theory. While we will not study these types of solutions further here, we will mention some of their properties as they pertain to the results of holographic calculations of defect anomalies.
The second case, relevant for our study, is where \(\varpi(\eta)\) has non-trivial support over the whole range \(\eta\in[0,\infty)\)[14]. Since \(\varpi(\eta)\) never turns around to hit the \(\eta\)-axis, the geometry \(\mathcal{M}_{6}\) in the \(11d\) SUGRA solution is non-compact, and the \(11d\) geometry can be engineered to be asymptotically locally AdS\({}_{7}\times\mathds{S}^{4}\) where the geometry of the conformal boundary of the AdS\({}_{7}\) factor is AdS\({}_{5}\times\mathds{S}^{1}\). These solutions are, thus, interpreted as holographically describing co-dimension 2 defect operators in \(6d\) SCFTs, where the defect operator 'lives' at the conformal boundary of AdS\({}_{5}\).
As a simple example of a line charge density that gives rise to a non-compact geometry, it was shown in [14] that the one-charge solution reviewed in the previous subsection can be recast in the language of the electrostatic solutions as a \(\varpi(\eta)\) with two segments:
\[\varpi(\eta)=\left\{\begin{array}{ll}\left(1+\frac{1}{\sqrt{1-4q_{1}}}\right)\eta,&\quad\eta\in\big{[}0,\frac{N}{2}\sqrt{1-4q_{1}}\big{]}\\ \eta+N/2,&\quad\eta\in\big{[}\frac{N}{2}\sqrt{1-4q_{1}},\infty\big{)}.\end{array}\right. \tag{27}\]
Due to \(\varpi\) being continuous and piecewise linear, we will refer to the solution engineered by eq. (27) as a 'single kink solution'. This relation between the \(q_{2}\to 0\) limit of the two-charge solutions and the simple single kink line charge distribution for the electrostatic solutions will be useful in later sections as a consistency check for our computations. We should also note that the constraint that the change in slope of \(\varpi(\eta)\) be integral forces \(q_{1}=\frac{j^{2}-1}{4j^{2}}\) for \(j\in\mathbb{N}\).
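For completeness, this quantization follows in one line from eq. (27) (our own check): across the kink the slope of \(\varpi\) drops from \(1+\frac{1}{\sqrt{1-4q_{1}}}\) to \(1\), and regularity requires this drop to be a positive integer \(j\), so
\[\frac{1}{\sqrt{1-4q_{1}}}=j\in\mathbb{Z}_{>0}\qquad\Longrightarrow\qquad q_{1}=\frac{j^{2}-1}{4j^{2}},\]
with \(j=1\) (i.e. \(q_{1}=0\)) reproducing the single-kink vacuum profile described in figure 1.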
Generalizing beyond the single kink solutions, the constraints on \(\varpi(\eta)\) realizing a defect solution allow for a generic \(n\)-kink charge profile. Since \(\varpi(\eta)\) is piecewise linear, its behavior on the \(a^{\text{th}}\) interval, where \(\eta\in[\eta_{a},\eta_{a+1}]\) and \(a\in\{0,1,\ldots,n\}\), can be written as [6; 14]
\[\varpi_{a}(\eta) =\left(1+\sum_{b=a+1}^{n}k_{b}\right)\eta+\sum_{b=1}^{a}\eta_{b}k _{b} \tag{28}\] \[\equiv p_{a+1}\eta+\delta_{a+1},\]
where in the second line we have introduced a convenient shorthand for the slope \(p_{a+1}\) and intercept \(\delta_{a+1}\) of the line continued from the \(a^{\text{th}}\) segment.
\(\varpi(0)=0\) it is understood that \(\eta_{0}=0\), and due to the semi-infinite domain of support we take \(\eta_{n+1}\to\infty\).
As a simple visualization of an arbitrary distribution, see the left side of figure 1. Note that the flux quantization constraint \(N=2\sum_{a=1}^{n}\eta_{a}k_{a}\), along with the quantization of the \(\eta_{a}\) and their ordering along the \(\eta\)-axis (\(0<\ldots<\eta_{a}<\eta_{a+1}<\ldots<\eta_{n}\)), gives a natural interpretation of the data \((\eta_{a},k_{a})\) specifying the charge distribution as a Young diagram, which is displayed on the right side of figure 1.
In the language of the field theory description, the Young diagram corresponding to the specific \(\varpi(\eta)\) is in correspondence to both the Lie algebra homomorphism \(\vartheta:\mathfrak{sl}(2)\to\mathfrak{g}\) and to the choice of Levi subalgebra \(\mathfrak{l}\) of the \(A_{N-1}\) gauge algebra. Furthermore, the slope change \(k_{a}\in\mathbb{Z}\) between the \((a-1)^{\text{th}}\) and \(a^{\text{th}}\) intervals corresponds to the monopole charge at the \(\mathbb{R}^{4}/\mathbb{Z}_{k_{a}}\) orbifold point located at \((r,\eta)=(0,\eta_{a})\) in the internal manifold. These points are the holographic realization of the non-Abelian summands \(\mathfrak{su}(k_{a})\) of the global symmetry algebra [47].
Lastly, for use in future computations, it will be convenient to define the 'moments' of the potential as in [14]
\[m_{j}=\sum_{a=1}^{n}(p_{a}-p_{a+1})\eta_{a}^{j}=\sum_{a=1}^{n}k_{a}\eta_{a}^{j}. \tag{29}\]
For most of the following, we will only need the first and third moments
\[m_{1}=\frac{N}{2}\qquad\text{and}\qquad m_{3}=\sum_{a}\frac{N_{a}^{3}}{8k_{a}^ {2}} \tag{30}\]
respectively.
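As a short check of eq. (30) (our own verification, writing \(N_{a}:=2k_{a}\eta_{a}\) for the blocks of the partition in figure 1), the flux quantization \(N=2\sum_{a}k_{a}\eta_{a}\) gives
\[m_{1}=\sum_{a=1}^{n}k_{a}\eta_{a}=\frac{N}{2},\qquad m_{3}=\sum_{a=1}^{n}k_{a}\eta_{a}^{3}=\sum_{a=1}^{n}k_{a}\left(\frac{N_{a}}{2k_{a}}\right)^{3}=\sum_{a=1}^{n}\frac{N_{a}^{3}}{8k_{a}^{2}}.\]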
Figure 1: **(Left)** A generic line charge distribution \(\varpi(\eta)\), with \(n\) kinks at positions \(\eta_{a}\) along the axis of cylindrical symmetry, specifying a solution to the axially symmetric Laplace equation in \(\mathbb{R}^{3}\). The \(\text{AdS}_{7}\times\mathbb{S}^{4}\) vacuum corresponds to the single-kink (\(n=1\)) charge distribution with \(k_{1}=1\); the location of the kink is then given by \(\eta_{1}=N/2\). **(Right)** The Young Tableau corresponding to the partition \(N=2\sum_{a=1}^{n}k_{a}\eta_{a}=\sum_{a=1}^{n}N_{a}\). The height and width of the \(a\)-th block are given by the location \(\eta_{a}\in\mathbb{Z}\) and slope change \(k_{a}\in\mathbb{Z}\) of the \(a\)-th kink in \(\varpi(\eta)\), respectively. The \(\text{AdS}_{7}\times\mathbb{S}^{4}\) vacuum is associated to the \(\mathbf{1}\) of \(\mathfrak{su}(N)\) determined by \(n=k_{1}=1\) and \(\eta_{1}=N/2\).
## 3 Holographic stress-energy tensor one-point function
In this section, we will compute the contribution of a co-dimension 2 defect to the one-point function of the stress-energy tensor of the ambient \(6d\) SCFT. To do so, we will reduce the \(11d\) SUGRA backgrounds described in the previous section on the internal \(\mathbb{S}^{4}\) and employ the holographic renormalization methods of [60]. In their original formulation, these methods are meant to apply to asymptotically AdS solutions of pure Einstein-Hilbert gravity; therefore, we must ensure that the presence of the four-form flux in the dimensionally reduced M-theory solutions does not necessitate a modification of those methods. In both two-charge and electrostatic solutions, we will show that the field strength decays sufficiently fast as the conformal boundary of AdS\({}_{7}\) is approached so that it produces a vanishing contribution to the field equations on the boundary.
Following the general procedure in [60], we begin by recasting the \(11d\) metric as a perturbation \(h_{11}\) about the AdS\({}_{7}\times\mathbb{S}^{4}\) vacuum:
\[ds_{11}^{2}=g_{\text{AdS}_{7}\times\mathbb{S}^{4}}+h_{11}. \tag{13}\]
Dimensionally reducing on the internal \(\mathbb{S}^{4}\) then leads to the \(7d\) line element
\[ds_{7}^{2}=\left(1+\frac{\bar{\varsigma}}{5}\right)g_{\text{AdS}_{7}}+\bar{h}_ {7}, \tag{14}\]
where \(g_{\text{AdS}_{7}}\) is the metric on AdS\({}_{7}\), the \(7d\) field \(h_{7}\) captures the fluctuations about the AdS\({}_{7}\) geometry, and \(\varsigma\) is the trace of the fluctuations in the internal manifold. Bars indicate zero modes on the internal space; for instance10,
Footnote 10: Our index conventions in this section are that \(\mu,\nu,\dots\) are AdS\({}_{7}\) indices, \(a,b,\dots\) are \(\mathbb{S}^{4}\) indices, and \(i,j,\dots\) are \(6d\) indices on the conformal boundary of AdS\({}_{7}\).
\[\bar{\varsigma}=\frac{3}{4}\int_{\mathbb{S}^{4}}\sqrt{g_{\mathbb{S}^{4}}}\ h^{ab}g_{ab}^{(0)}. \tag{15}\]
Mapping the \(7d\) line element into Fefferman-Graham (FG) gauge,
\[ds_{7}^{2}=\frac{L^{2}}{u^{2}}\left(du^{2}+g\right), \tag{16}\]
where the \(6d\) metric \(g\) admits the power series expansion
\[g=g_{(0)}+g_{(2)}u^{2}+g_{(4)}u^{4}+g_{(6)}u^{6}+h_{(6)}u^{6}\log u^{2}+\dots, \tag{17}\]
the \(6d\) stress-energy tensor one-point function can be computed
\[\langle T_{ij}\rangle\ dx^{i}dx^{j} =\frac{3L^{5}}{8\pi G_{N}^{(7)}}\left(g_{(6)}-A_{(6)}+\frac{S}{24 }\right) \tag{18a}\] \[=\frac{N^{3}}{4\pi^{3}}\left(g_{(6)}-A_{(6)}+\frac{S}{24}\right), \tag{18b}\]
where \(A_{(6)}\) and \(S\) are rank-2 tensors built out of \(g_{(0)}\), and in the second line we have used the holographic map to field theory quantities
\[\frac{1}{G_{N}^{(7)}}=\frac{\text{vol}(\mathbb{S}^{4})}{G_{N}^{(11)}},\quad G_{N} ^{(11)}=2^{4}\pi^{7}\ell_{P}^{9},\quad L^{3}=\pi N\ell_{P}^{3},\quad\text{vol}( \mathbb{S}^{4})\ =\frac{L^{4}\pi^{2}}{6}. \tag{10}\]
Note that in our conventions the internal \(\mathbb{S}^{4}\) has curvature scale \(L^{2}/4\). Explicit expressions for \(A_{(6)}\) and \(S\) are provided in [60]11. Once the appropriate vacuum subtraction is performed, the defect contribution to \(h_{T}\), and therefore to \(d_{2}\), can be extracted via eq. (3) and eq. (5).
Footnote 11: Note that the differences in sign are due to the fact that we are using the convention that, in units of \(L^{2d}\), the scalar curvature \(\mathcal{R}<0\) for a space of constant “negative curvature”; whereas the authors of [60] use the opposite convention, \(\mathcal{R}>0\).
### Two-charge solutions
In this subsection, we will focus on the \(11d\) uplift of the two-charge solutions described in 2.2 and compute \(\langle T_{ij}\rangle\) with the methods described above. In order to isolate the contributions from the holographic dual to the defect, we will employ a background subtraction scheme where we remove the contributions from vacuum AdS\({}_{7}\times\mathbb{S}^{4}\).
Before jumping in to the computation of \(\langle T_{ij}\rangle\), we need to carefully check that we can properly utilize our chosen holographic renormalization scheme. One of the crucial assumptions in the construction of eq. (10a) is that Einstein's equations near the boundary of the dimensionally reduced AdS\({}_{7}\) geometry are not modified by contributions coming from non-trivial fluxes, such as the four-form curvature \(F_{4}\). So, we must be careful to make sure that in the asymptotic small \(u\) region, the components of the variation of the \(F_{MNPQ}F^{MNPQ}\) part of the 11d SUGRA action involving AdS\({}_{7}\) directions fall off sufficiently fast so as to not modify the boundary equations of motion.
For the solutions in eqs. (16) and (19), it suffices to show the fall-off conditions for the single charge case. Setting \(q_{2}\to 0\) and \(a_{2}\to 0\), transforming \(\phi_{I}\to\varphi_{I}-2a_{I}z\), and mapping to FG gauge as in appendix A.1, a quick computation shows the small \(u\) behavior to be (up to overall numerical prefactors)
\[F_{a}^{MNP}F_{bMNP} \sim c_{\theta}^{2}g_{ab}+\dots\, \tag{11}\] \[F_{\varphi_{1}}^{MNP}F_{\varphi_{1}MNP} \sim s_{\theta}^{2}+\dots\,\] (12) \[F_{\theta}^{MNP}F_{\theta MNP} \sim 1+\dots\,\] \[F_{z}^{MNP}F_{zMNP} \sim q_{1}^{2}(13-5c_{2\theta})u^{8}+\dots,\] \[F_{z}^{MNP}F_{\varphi_{1}MNP} \sim q_{1}s_{\theta}^{2}u^{4}+\dots\,\] \[F_{y}^{MNP}F_{yMNP} \sim q_{1}^{2}s_{2\theta}^{2}u^{12}+\dots\,\]
where \(g_{ab}\) are components along the \(\mathbb{S}^{2}\subset\mathbb{S}^{4}\) and the AdS\({}_{5}\) components of the variation vanish. From the \(zz\)- and \(z\varphi_{1}\)-components of the variation of \(F_{4}^{2}\), we can see that the contributions to the boundary equations of motion die at worst as \(u^{4}\) as \(u\to 0\). The analysis of the two-charge solution follows similarly, and so we can proceed using eq. (10a)
without modification. Allowing for \(q_{2}\neq 0\) modifies the variation of \(F_{MNPQ}F^{MNPQ}\) but crucially does not introduce any leading terms in the small \(u\) expansion.
Now that we have established that the variation of \(F_{4}^{2}\) decays sufficiently fast near the AdS\({}_{7}\) boundary, we can proceed using the logic of [60] recapped above to compute \(\langle T_{ij}\rangle\). To do so, we first map eq. (16) to FG gauge as in eq. (100), which we reproduce here for clarity
\[ds_{\text{FG}}^{2}= \frac{L^{2}}{u^{2}}(du^{2}+\hat{\alpha}_{\text{AdS}}ds_{\text{AdS }_{5}}^{2}+\hat{\alpha}_{z}dz^{2})+L^{2}s_{\theta}^{2}\hat{\alpha}_{z\varphi_{ 1}}dzd\varphi_{1}+L^{2}c_{\aleph}^{2}c_{\theta}^{2}\hat{\alpha}_{z\varphi_{2}} dzd\varphi_{2}\] \[+\frac{L^{2}}{4}(\hat{\alpha}_{\theta}d\theta^{2}+s_{\theta}^{2} \hat{\alpha}_{\varphi_{1}}d\varphi_{1}^{2}+c_{\theta}^{2}(\hat{\alpha}_{ \aleph}d\aleph^{2}+c_{\aleph}^{2}\hat{\alpha}_{\varphi_{2}}d\varphi_{2}^{2})+ \hat{\alpha}_{\theta\aleph}d\theta d\aleph).\]
The \(\hat{\alpha}\) metric functions are given in eq. (101). In order to put the dimensionally reduced metric in the form of eq. (100), we then write \(ds_{\text{FG}}^{2}\) as a fluctuation around AdS\({}_{7}\times\mathds{S}^{4}\)
\[ds^{2}=(g_{\mu\nu}^{(0)}+h_{\mu\nu})dx^{\mu}dx^{\nu} \tag{11}\]
where
\[g_{\mu\nu}^{(0)}dx^{\mu}dx^{\nu}=\frac{L^{2}du^{2}}{u^{2}}+\frac{ L^{2}}{u^{2}}\left(\left(1+\frac{u^{2}}{2}+\frac{u^{4}}{16}\right)ds_{\text{ AdS}_{5}}^{2}+\left(1-\frac{u^{2}}{2}+\frac{u^{4}}{16}\right)dz^{2}\right)+ \frac{L^{2}}{4}d\Omega_{4}^{2}. \tag{12}\]
Using the expressions in eq. (101), we can compute the zero modes of the fluctuations around the AdS\({}_{7}\) directions
\[\bar{h}_{7}= -\frac{2L^{2}(q_{1}+q_{2})}{15}u^{4}(ds_{\text{AdS}_{5}}^{2}-5dz ^{2}). \tag{13}\]
Similarly, the trace fluctuations on the S\({}^{4}\) are found to be
\[\varsigma=\frac{10q_{2}c_{2\aleph}c_{\theta}^{2}+5(q_{2}-2q_{1})c_{2\aleph}+2q _{1}-3q_{2}}{8}u^{4}+\ldots. \tag{14}\]
Integrating the internal space fluctuations over the S\({}^{4}\) gives \(\bar{\varsigma}=0\). The vanishing of the zero modes of the trace fluctuations means that the dimensionally reduced metric is already in FG form. The resulting stress-energy tensor one-point function is
\[\langle T_{ij}\rangle\ dx^{i}dx^{j}=\frac{N^{3}}{192\pi^{3}}\left[1-\frac{32} {5}(q_{1}+q_{2})\right]\left(ds_{\text{AdS}_{5}}^{2}-5dz^{2}\right). \tag{15}\]
In order to isolate the holographic quantities associated with the defect, we will subtract off the value of \(\langle T_{ij}^{(\text{vac})}\rangle\) computed using vacuum AdS\({}_{7}\times\mathds{S}^{4}\). Note that taking \(q_{I}\to 0\) in eq. (13) kills the fluctuations and gives the exact AdS\({}_{7}\) metric upon dimensional reduction, as expected. So, taking \(q_{I}\to 0\) in eq. (15) yields the vacuum one-point function
\[\langle T_{ij}^{(\text{vac})}\rangle\ dx^{i}dx^{j}=\frac{N^{3}}{19 2\pi^{3}}\left(ds_{\text{AdS}_{5}}^{2}-5dz^{2}\right). \tag{16}\]
Subtracting this vacuum contribution from eq. (3.13) gives the change in the stress-energy tensor one-point function due to the introduction of the defect in the dual field theory:
\[\Delta\left\langle T_{ij}\right\rangle dx^{i}dx^{j}=-\frac{N^{3}(q_{1}+q_{2})}{ 30\pi^{3}}(ds^{2}_{\text{AdS}_{5}}-5dz^{2}), \tag{3.15}\]
which recovers the results in [13] up to subtraction of the contribution from the \(\text{AdS}_{7}\times\mathbb{S}^{4}\) vacuum. Using eq. (2.3) we arrive at
\[h_{T}=\frac{N^{3}(q_{1}+q_{2})}{30\pi^{3}} \tag{3.16}\]
Thus, one of the B-type anomaly coefficients for 1/4-BPS co-dimension 2 operators in a \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) SCFT holographically described by the two-charge solutions is found to be
\[d_{2}=-\frac{1}{6}N^{3}(q_{1}+q_{2}). \tag{3.17}\]
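As a quick check of the algebra in this subsection (an illustrative sympy sketch, not part of the original derivation), one can verify that eq. (3.15) is the difference of eqs. (3.13) and (3.14), and that the normalization relating \(h_{T}\) to \(d_{2}\) here agrees with the one used for the electrostatic solutions in eqs. (3.27) and (3.28a) below.

```python
import sympy as sp

N, q1, q2, m1, m3 = sp.symbols('N q1 q2 m_1 m_3', positive=True)

# coefficient of (ds^2_AdS5 - 5 dz^2) in <T_ij>, eq. (3.13), and in the vacuum, eq. (3.14)
T_defect = N**3/(192*sp.pi**3)*(1 - sp.Rational(32, 5)*(q1 + q2))
T_vacuum = N**3/(192*sp.pi**3)

# their difference should reproduce eq. (3.15)
assert sp.simplify((T_defect - T_vacuum) + N**3*(q1 + q2)/(30*sp.pi**3)) == 0

# the ratio d_2/h_T implied by eqs. (3.16)-(3.17) should match the electrostatic
# pair, eqs. (3.27)-(3.28a), if both use the same normalization of eq. (2.5)
hT_2charge, d2_2charge = N**3*(q1 + q2)/(30*sp.pi**3), -N**3*(q1 + q2)/6
hT_elec, d2_elec = (m1**3 - m3)/(15*sp.pi**3), -(m1**3 - m3)/3
assert sp.simplify(d2_2charge/hT_2charge - d2_elec/hT_elec) == 0
print("Delta<T_ij> and the h_T -> d_2 normalization are internally consistent")
```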
Recall that in eq. (2.15), we found that the linear combination \(q_{1}+q_{2}\geq 0\) for all \(\hat{n}\). Further, we know that eq. (2.6) implies \(d_{2}\leq 0\), and so all of the two-charge solutions studied in [13; 14] are consistent with the defect ANEC. In figure 2, we reproduce the curves for solutions obeying eq. (2.15) as they appear in [13; 14], together with the region excluded by consistency with the defect ANEC. We see that, indeed, all of the \(\hat{n}=1,\,2,\,3\) solutions lie above the line \(q_{1}+q_{2}=0\), with only \(\hat{n}=1\) saturating the bound at \(q_{1}=q_{2}=0\).

Figure 2: The solutions to the constraint in eq. (2.15) for \(\hat{n}=1\) (red), \(\hat{n}=2\) (blue), and \(\hat{n}=3\) (green) on the \((q_{1},q_{2})\) plane, reproduced from [13; 14]. The shaded regions correspond to the two-charge configurations for which \(Q(y)=0\) admits no real solutions (region I) or which violate the defect ANEC (region II).
### Electrostatic solutions
Prior to approaching the holographic computation of \(\Delta\left\langle T_{ij}\right\rangle\) for the electrostatic solutions using the methods outlined above, we again must verify that the boundary equations of motion in the dimensionally reduced geometry are unmodified by the four-form flux. From eq. (21b), we can compute \(F_{4}\). For brevity, we immediately define \(r=\varrho c_{\omega}\) and \(\eta=\varrho s_{\omega}\) to map eq. (21b) into \((\varrho,\omega)\) coordinates on the internal space, adopt \((z,\varphi)\) as our angular coordinates using eq. (14), and compute the large \(\varrho\) expansion to leading order in each component:
\[\frac{F_{4}}{2\kappa_{11}}= \left[c_{\omega}^{2}s_{\omega}^{3}\frac{5m_{3}-2m_{1}^{3}}{ \varrho^{3}}\;d\varrho\wedge dz+3c_{\omega}s_{\omega}^{2}m_{1}d\omega\wedge dz\right. \tag{25}\] \[\left.\qquad+s_{\omega}^{3}\frac{4(m_{3}-m_{1}^{3})}{\varrho^{3} }d\varrho\wedge d\varphi+\,c_{\omega}s_{\omega}^{2}\frac{6(m_{1}^{3}-m_{3})}{ \varrho^{2}}d\omega\wedge d\varphi\right]\wedge\,\,\mathrm{vol}(\mathbb{S}^{2} )\ +\ldots\]
where we have fixed \(\mathcal{C}_{z}=-2\) following the discussion in appendix A.2.
Now, we can check the fall off of the contribution of the variation of \(F_{4}^{2}\) to the equations of motion. Keeping \(\mathcal{C}_{z}=-2\) fixed and transforming into FG gauge, we find the leading behavior in the small-\(u\) expansion (up to numerical factors)
\[\begin{split} F_{uMNP}F_{u}^{\,MNP}&\sim s_{2 \theta}^{2}(m_{1}^{3}-m_{3})^{2}u^{6}+\ldots\,\\ F_{zMNP}F_{z}^{\,MNP}&\sim(13-5c_{2\theta})(m_{1}^{ 3}-m_{3})^{2}u^{8}+\ldots\,\\ F_{\varphi MNP}F_{z}^{\,MNP}&\sim(m_{1}^{3}-m_{3}) s_{\theta}^{2}u^{4}+\ldots\,\\ F_{aMNP}F_{b}^{\,MNP}&\sim g_{\mathbb{S}^{4}}+ \ldots\,\end{split} \tag{26}\]
where \(a,\,b\) are indices for the \(\mathbb{S}^{4}\) coordinates \(\{\theta,\varphi,\mathbb{S}^{2}\}\), and \(g_{\mathbb{S}^{4}}\) is the metric on the unit \(\mathbb{S}^{4}\) in the \(\mathbb{S}^{1}\times\mathbb{S}^{2}\) coordinatization. Note that the variations in the AdS\({}_{5}\) directions vanish identically. So, in the \(u\to 0\) limit, there are no surviving contributions to the equations of motion in the dimensionally reduced geometry coming from the variation of the \(F_{4}^{2}\) term.
We can now proceed with [60]. First, we rewrite the metric in eq. (16) as fluctuations around AdS\({}_{7}\times\mathbb{S}^{4}\). The perturbation away from AdS\({}_{7}\times\mathbb{S}^{4}\) takes the form
\[\begin{split} h_{11}&=\frac{L^{2}}{u^{2}}\Big{(} \alpha_{\text{AdS}}-1-\frac{u^{2}}{2}-\frac{u^{4}}{16}\Big{)}ds_{\text{AdS}_{5} }^{2}+\frac{L^{2}}{u^{2}}\Big{(}\alpha_{z}-1+\frac{u^{2}}{2}-\frac{u^{4}}{16} \Big{)}dz^{2}\\ &\quad+\frac{L^{2}}{4}(\alpha_{\theta}-1)d\theta^{2}+\frac{L^{2} s_{\theta}^{2}}{4}(\alpha_{\varphi}-1)d\varphi^{2}+\frac{L^{2}c_{\theta}^{2}}{4}( \alpha_{\mathbb{S}^{2}}-1)d\Omega_{2}^{2}+L^{2}s_{\theta}^{2}\alpha_{z\varphi }dzd\varphi.\end{split} \tag{27}\]
Fixing \(\chi=-z-\varphi\) and \(\beta=2z+\varphi\), we can compute the zero modes for the AdS\({}_{7}\) part of the fluctuations,
\[\bar{h}_{7}=L^{2}\frac{m_{3}-m_{1}^{3}}{30m_{1}^{3}}u^{4}ds_{\text{AdS}_{5}}^{ 2}+L^{2}\frac{m_{1}^{3}-m_{3}}{6m_{1}^{3}}u^{4}dz^{2}+\ldots. \tag{28}\]
The trace \(\mathds{S}^{4}\) fluctuations are found to be
\[\varsigma=(1-5c_{2\theta})\frac{m_{1}^{3}-m_{3}}{16m_{1}^{3}}u^{4}-11(1-5c_{2 \theta})\frac{m_{1}^{3}-m_{3}}{216m_{1}^{3}}u^{6}+\dots. \tag{3.22}\]
Integrating \(\varsigma\) over the \(\mathds{S}^{4}\), we find the zero modes \(\bar{\varsigma}=0\). The reduced geometry
\[g_{7}=\left(1+\frac{\bar{\varsigma}}{5}\right)g^{(0)}+\bar{h}_{7} \tag{3.23}\]
is thus already in FG form. So, the dimensionally reduced metric is
\[\begin{split} ds_{7}^{2}=&\frac{L^{2}}{u^{2}}\left[ du^{2}+\left(1+\frac{u^{2}}{2}+\frac{u^{4}}{16}+\frac{(m_{3}-m_{1}^{3})u^{6}}{30m_{1} ^{3}}\right)ds_{\text{AdS}_{5}}^{2}\right.\\ &\left.+\left(1-\frac{u^{2}}{2}+\frac{u^{4}}{16}+\frac{(m_{1}^{3} -m_{3})u^{6}}{6m_{1}^{3}}\right)dz^{2}\right],\end{split} \tag{3.24}\]
where we have suppressed higher powers of \(u\). From this expression for \(ds_{7}^{2}\), we can easily read off \(g_{(0)}\), \(g_{(2)}\), \(g_{(4)}\), and \(g_{(6)}\). Note that if we take \(n=1\) and \(k_{1}=1\), then \(m_{3}=m_{1}^{3}=N^{3}/8\), and so in this limit, eq. (3.24) reduces to the exact AdS\({}_{7}\) metric, which is expected from eqs. (21a), (2.23), and (2.28).
Proceeding with the computation in the same way as the previous subsection, we find that the holographic stress-energy tensor one-point-function takes the form
\[\left\langle T_{ij}\right\rangle\,dx^{i}dx^{j}=-\frac{N^{3}(3m_{1}^{3}-8m_{3} )}{960\pi^{3}m_{1}^{3}}\left(ds_{\text{AdS}_{5}}^{2}-5dz^{2}\right). \tag{3.25}\]
Regulating this result by subtracting the AdS\({}_{7}\times\)S\({}^{4}\) vacuum contribution \(\left\langle T_{ij}^{\text{(vac)}}\right\rangle\) in eq. (3.14) produces
\[\Delta\left\langle T_{ij}\right\rangle\,dx^{i}dx^{j}=-\frac{N^{3}(m_{1}^{3}-m_ {3})}{120\pi^{3}m_{1}^{3}}\left(ds_{\text{AdS}_{5}}^{2}-5dz^{2}\right). \tag{3.26}\]
As a quick check, computing the trace of eq. (3.26) gives \(\Delta\left\langle T^{i}{}_{i}\right\rangle=0\) as expected due to defect conformal symmetry. Comparing eq. (3.26) to eq. (2.3), we find
\[h_{T}=\frac{m_{1}^{3}-m_{3}}{15\pi^{3}}. \tag{3.27}\]
We can thus read off the defect Weyl anomaly coefficient \(d_{2}\) from eq. (2.5):
\[d_{2} =-\frac{m_{1}^{3}-m_{3}}{3} \tag{3.28a}\] \[=-\frac{1}{24}\left(N^{3}-\sum_{a}\frac{N_{a}^{3}}{k_{a}^{2}} \right), \tag{3.28b}\]
where in the second line we have rewritten \(d_{2}\) in terms of the parameters \(N_{a}\) and \(k_{a}\) which are more suitable for comparison to field theory. For any partition of \(N=\sum_{a}N_{a}\), it is clear that \(d_{2}\leq 0\). The upper bound \(d_{2}=0\) is only saturated in the vacuum case \(n=1\), \(k_{1}=1\) where there is no defect.
There is a non-trivial consistency check on the value of \(d_{2}\) in the \(n=1\) case. As mentioned above, the \(11d\) uplift of the \(1/2\)-BPS one-charge solutions is related to the single-kink electrostatic solutions by setting \(n=1\) and \(k_{1}=1/\sqrt{1-4q_{1}}\). Plugging these values into eq. (3.28b) results in \(d_{2}=-N^{3}q_{1}/6\). Checking this against the one-charge solutions found by taking \(q_{2}\to 0\) in eq. (3.17), we also find \(d_{2}=-N^{3}q_{1}/6\). Thus, the values of \(d_{2}\) computed in the two-charge and \(n\)-kink electrostatic solutions are consistent in this limit.
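This limit can also be checked symbolically; the following sympy sketch (illustrative only) reproduces the statement above.

```python
import sympy as sp

N, q1 = sp.symbols('N q1', positive=True)

# single-kink electrostatic d_2, eq. (3.28b), with n = 1, N_1 = N, k_1 = 1/sqrt(1 - 4 q1)
k1 = 1/sp.sqrt(1 - 4*q1)
d2_electrostatic = -sp.Rational(1, 24)*(N**3 - N**3/k1**2)

# two-charge d_2, eq. (3.17), in the one-charge limit q2 -> 0
d2_one_charge = -sp.Rational(1, 6)*N**3*q1

assert sp.simplify(d2_electrostatic - d2_one_charge) == 0
print("both descriptions give d_2 = -N^3 q1 / 6")
```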
## 4 Defect sphere EE and the defect A-type anomaly
In the following subsections we will use the techniques developed in [61; 62] to holographically compute the defect contribution to the EE of a spherical region in the dual \(6d\)\(A_{N-1}\)\(\mathcal{N}=(2,0)\) SCFT at large \(N\) for both the \(1/4\)-BPS two-charge and \(1/2\)-BPS electrostatic co-dimension 2 defects. Leveraging the results of the previous section and eqs. (2.7) and (2.8), we will be able to compute the defect A-type anomaly \(a_{\Sigma}\).
To facilitate the discussion below, let us briefly review some of the relevant background concepts for defect EE. We will restrict our discussion here to the holographic duals to \(6d\) (D)SCFTs.
To start, we will need the Ryu-Takayanagi (RT) formula for holographic EE [63; 64; 65], which we write agnostic to the presence of a defect as
\[S_{\rm EE}=\frac{\mathcal{A}_{\rm min}}{4G_{N}}. \tag{4.1}\]
The quantity \(\mathcal{A}_{\rm min}\) is the area of the extremal surface that minimizes the bulk area functional subject to the condition that the surface anchored at the conformal boundary of AdS\({}_{7}\) is homologous to the entangling region in the dual theory. For our computations below, we take the entangling region in the \(6d\) SCFT at a fixed time slice to be a Euclidean 5-ball \(\mathcal{B}=\mathds{B}^{5}\hookrightarrow\mathbb{R}^{5}\) of radius \(R\). When we consider the theory deformed by a flat embedding of a Lorentzian defect on \(\Sigma=\mathbb{R}^{1,3}\), we will take the defect to be co-original with the entangling surface such that \(\partial\mathcal{B}\cap\Sigma=\mathds{S}^{2}\) sitting along the equator of \(\partial\mathcal{B}\).
Including a defect in the field theory introduces subtleties in directly applying eq. (2.7). On the field theory side, \(S_{\rm EE}\) will now have short-distance divergences near \(\partial\mathcal{B}\) due to highly entangled UV modes in both the ambient and defect-localized theories. In the holographic description, one needs to adopt a suitable regularization scheme that isolates the defect contribution to \(S_{\rm EE}\); we will use a background subtraction scheme \((S_{\rm EE}[\Sigma]-S_{\rm EE}[\emptyset])\) akin to the one used in computing the holographic stress-energy tensor one-point function. One further complication in the holographic computation is the fact that the FG expansion is generally not globally defined, and so one must be careful to find the asymptotic form of the map to FG gauge in order to define the UV cutoff slice at fixed AdS\({}_{7}\) radius \(\Lambda\gg L\). A general formula for finding the asymptotic form of the FG transformation and cutoff slice was found in [62], which we will use in the computations below.
Since we are considering a spherical entangling region, the solution for \(\mathcal{A}_{\rm min}\) takes a particularly simple form, even in the presence of a defect. It was shown in [61] that for a bulk geometry realizing the defect symmetry group \(SO(2,d-\mathfrak{d})\times SO(\mathfrak{d})\), the relative warp factors of the AdS\({}_{\mathfrak{d}+1}\) and \(\mathbb{S}^{\mathfrak{d}-1}\) spaces are largely immaterial, and the logic of [66] can be generalized to prove eq. (2.7) for these backgrounds. In the process, the authors of [61] proved that for the holographic defect spherical EE the surface \(\mathcal{A}_{\rm min}\) is simply a hemispherical region extending into the bulk anchored at \(\partial\mathcal{B}\). For the \(11d\) backgrounds corresponding to both the two-charge and electrostatic solutions that we consider, if we write the line element on the AdS\({}_{5}\) in the form
\[ds^{2}_{\rm AdS_{5}}=\frac{1}{w^{2}}(dw^{2}-dt^{2}+dr_{\parallel}^{2}+r_{ \parallel}^{2}d\Omega_{2}^{2})\, \tag{4.2}\]
then \(\mathcal{A}_{\rm min}\) is the surface \(w^{2}+r_{\parallel}^{2}=R^{2}\). We will exploit the simplicity of the minimal surface to great effect in the subsequent computations.
### Two-charge solutions
To begin computing the defect spherical EE for the two-charge solutions, we need to express the area functional \(\mathcal{A}\) in terms of the metric functions \(\hat{f}\) in eq. (2.16), with the AdS\({}_{5}\) factor written as in eq. (4.2). Evaluating on the extremal surface \(r_{\parallel}^{2}+w^{2}=R^{2}\), we regularize the \(w\) integration by introducing a UV cutoff \(\epsilon_{w}\ll 1\) and performing the integral over the angular coordinates \(\phi_{1},\phi_{2}\) and \(z\) to obtain
\[\mathcal{A}_{\rm min}[\Sigma]=8\pi^{4}L^{9}R\int_{\epsilon_{w}}^{R}dw\frac{\sqrt{R^{2}-w^{2}}}{w^{3}}\mathcal{I}=4\pi^{4}L^{9}\left(\frac{R^{2}}{\epsilon_{w}^{2}}-\log\frac{2R}{\epsilon_{w}}+\ldots\right)\mathcal{I}\, \tag{4.3}\]
where we have defined the remaining integral
\[\mathcal{I}\equiv\int d\psi\,d\zeta\int_{y_{+}}^{\Lambda_{y}(\epsilon_{u},\psi,\zeta)}dy\,\hat{f}_{\rm AdS}^{3}f_{y}\sqrt{(4\hat{f}_{\psi}^{2}\hat{f}_{\zeta }^{2}-\hat{f}_{\psi\zeta}^{4})(\hat{f}_{\phi_{1}}^{2}\hat{f}_{z\phi_{2}}^{4}+ \hat{f}_{\phi_{2}}^{2}\hat{f}_{z\phi_{1}}^{4}-4\hat{f}_{\phi_{1}}^{2}\hat{f}_{ \phi_{2}}^{2}\hat{f}_{z}^{2})}. \tag{4.4}\]
Despite the initially complicated appearance of the integrand upon substituting the form of the metric functions given in appendix A.1, we find after a bit of algebra that the remaining integral drastically simplifies to
\[\mathcal{I}=\frac{1}{8}\int d\psi\ d\zeta\ c_{\psi}c_{\zeta}^{2}s_{\zeta}\int_ {y_{+}}^{\Lambda_{y}(\epsilon_{u},\psi,\zeta)}dy\ y. \tag{4.5}\]
Using the double-cutoff prescription to compute \(\mathcal{I}\) as in [62; 67], we first map the radial coordinate \(y\) to the FG coordinate \(u\), leaving the remaining angular coordinates \(\psi\) and \(\zeta\) in their original frame. We then impose a cutoff \(\epsilon_{u}\ll 1\), which induces a cutoff in large \(y\), \(\Lambda_{y}(\epsilon_{u},\psi,\zeta)\). Recalling the asymptotic FG map in appendix A.1 used in the previous section and recasting the FG angular coordinates \(\aleph,\theta\) in terms of \(\psi,\zeta\), we find that
\[\Lambda_{y}(\epsilon_{u},\psi,\zeta)=\frac{1}{\epsilon_{u}^{2}}+\frac{1}{2}+ \frac{3-10q_{1}-9q_{2}-2q_{2}c_{2\psi}c_{\zeta}^{2}+(2q_{1}-q_{2})c_{2\zeta}} {48}\epsilon_{u}^{2}+\ldots. \tag{4.6}\]
Evaluating the integral \(\mathcal{I}\) with this cutoff is straightforward, yielding
\[\mathcal{I}=\frac{1}{24\epsilon_{u}^{4}}+\frac{1}{24\epsilon_{u}^{2}}+\frac{1} {960}(15-16(q_{1}+q_{2})-40y_{+}^{2})+\ldots \tag{4.7}\]
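The expansion in eq. (4.7) can be reproduced directly from eqs. (4.5) and (4.6); the following sympy sketch is illustrative only and assumes the angular ranges \(\psi\in[-\pi/2,\pi/2]\) and \(\zeta\in[0,\pi/2]\), which reproduce the quoted coefficients.

```python
import sympy as sp

eps, q1, q2, yp = sp.symbols('epsilon_u q1 q2 y_+', positive=True)
psi, zeta = sp.symbols('psi zeta')

# large-y cutoff induced by the FG cutoff, eq. (4.6), truncated at O(eps^2)
X = (3 - 10*q1 - 9*q2 - 2*q2*sp.cos(2*psi)*sp.cos(zeta)**2
     + (2*q1 - q2)*sp.cos(2*zeta))/48
Lam = 1/eps**2 + sp.Rational(1, 2) + X*eps**2

# eq. (4.5): I = (1/8) int dpsi dzeta c_psi c_zeta^2 s_zeta * (Lam^2 - y_+^2)/2
integrand = sp.expand(sp.cos(psi)*sp.cos(zeta)**2*sp.sin(zeta)*(Lam**2 - yp**2)/16)
I = sp.integrate(integrand, (psi, -sp.pi/2, sp.pi/2), (zeta, 0, sp.pi/2))

# quoted result, eq. (4.7)
quoted = (1/(24*eps**4) + 1/(24*eps**2)
          + sp.Rational(1, 960)*(15 - 16*(q1 + q2) - 40*yp**2))

# only positive powers of eps (truncation artifacts) should survive in the difference
diff = sp.expand(I - quoted)
assert all(sp.simplify(diff.coeff(eps, p)) == 0 for p in (-4, -2, 0))
print("eq. (4.7) reproduced from eqs. (4.5) and (4.6)")
```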
In order to find the contributions coming from the defect, we must regulate the \(\epsilon_{u}\) divergences present in \(\mathcal{A}_{\rm min}\). In order to do so, we employ the same vacuum subtraction scheme as was used in computing \(\Delta\left\langle T_{ij}\right\rangle\) above. For the two-charge solution, the vacuum is obtained by setting \(q_{1}=q_{2}=0\) and \(a_{1}=a_{2}=0\), which sets \(y_{+}^{(\rm vac)}=1\). Recomputing \(\mathcal{A}_{\rm min}[\emptyset]\) for the vacuum solution and subtracting it from \(\mathcal{A}_{\rm min}[\Sigma]\), the regulated area functional gives
\[\mathcal{A}_{\rm min}[\Sigma]-\mathcal{A}_{\rm min}[\emptyset]=- \frac{\pi^{4}L^{9}}{30}(2q_{1}+2q_{2}+5(y_{+}^{2}-1))\left(\frac{R^{2}}{ \epsilon_{w}^{2}}-\log\frac{2R}{\epsilon_{w}}+\ldots\right)\, \tag{4.8}\]
free from \(\epsilon_{u}\) divergences.
In order to compute \(a_{\Sigma}\) for the defect theory, we insert eq. (4.8) in eq. (4.1). Mapping to field theory quantities by \(L^{3}=4\pi N\ell_{P}^{3}\) and \(G_{N}=2^{4}\pi^{7}\ell_{P}^{9}\), we can read off the coefficient of the universal part of the defect sphere EE from eq. (4.1)
\[-R\partial_{R}(S_{\rm EE}[\Sigma]-S_{\rm EE}[\emptyset])|_{R\to 0}=- \frac{N^{3}}{30}(2(q_{1}+q_{2})+5(y_{+}^{2}-1)). \tag{4.9}\]
Hence, using \(d_{2}=-\frac{N^{3}}{6}(q_{1}+q_{2})\) derived above in eq. (2.8) we find
\[a_{\Sigma}=\frac{N^{3}}{24}(1-y_{+}^{2}). \tag{4.10}\]
One interesting consequence of this computation is that one can show that the A-type anomaly of the general two-charge solution must satisfy \(a_{\Sigma}\geq 0\). To see this more clearly, recall from eq. (2.15) that
\[y_{+}\leq\frac{3\hat{n}+1}{4\hat{n}}\leq 1. \tag{4.11}\]
The second inequality follows from \(\hat{n}\in\mathbb{N}\), and so the upper bound is saturated only for \(\hat{n}=1\). Thus, for all consistent two-charge solutions, \(a_{\Sigma}\geq 0\).
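To make the bound explicit (an illustrative sympy sketch, not needed for the argument), one can evaluate eq. (4.10) at the maximal value of \(y_{+}\) allowed by eq. (4.11).

```python
import sympy as sp

N, n = sp.symbols('N nhat', positive=True)

# upper bound on y_+ from eq. (4.11)
y_bound = (3*n + 1)/(4*n)

# 1 - y_bound = (nhat - 1)/(4 nhat) >= 0 for nhat >= 1, so y_+ <= 1
assert sp.simplify(1 - y_bound - (n - 1)/(4*n)) == 0

# minimum of a_Sigma over the allowed range of y_+, from eq. (4.10)
a_min = sp.factor(N**3/24*(1 - y_bound**2))
print(a_min)  # N**3*(nhat - 1)*(7*nhat + 1)/(384*nhat**2), non-negative for nhat >= 1
```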
### Electrostatic solutions
Continuing with the logic used in the previous subsection, we now turn our attention to the electrostatic solutions. Our starting point for the computation is in transforming the metric in eq. (2.21a) using eq. (A.14) and reading off the metric functions. Since only \(\mathcal{C}_{z}=-2\) gives an asymptotic form for the metric suitable for mapping into FG gauge, we fix the transformation \(\chi=-z-\varphi\) and \(\beta=2z+\varphi\) and arrive at
\[ds_{11}^{2}=f_{\rm AdS}^{2}ds_{\rm AdS_{5}}^{2}+f_{\mathbb{S}^{2}}d\Omega_{2} ^{2}+f_{z}^{2}dz^{2}+f_{\varphi}^{2}d\varphi^{2}+f_{z\varphi}^{2}dzd\varphi+f _{\varrho}^{2}d\varrho^{2}+f_{\omega}^{2}d\omega^{2}. \tag{4.12}\]
We will also write the \({\rm AdS}_{5}\) line element as in eq. (4.2).
Plugging the expression for the minimal surface, \(r_{\parallel}^{2}+w^{2}=R^{2}\), into the area functional, we first integrate over the two \(\mathbb{S}^{2}\) factors as well as the angular coordinates \(z\in[0,2\pi]\) and \(\varphi\in[0,2\pi]\), which yields
\[\mathcal{A}_{\rm min}[\Sigma]=32\pi^{4}R\int dw\frac{\sqrt{R^{2}-w^{2}}}{w^{3}}\mathcal{I}[\Sigma]\,, \tag{4.13}\]
where
\[\mathcal{I}[\Sigma]\equiv\int_{0}^{\pi/2}d\omega\int_{0}^{\Lambda_{ \varrho}(\epsilon_{u},\omega)}f_{\text{AdS}}^{3}f_{\mathcal{S}^{2}}^{2}f_{\omega }f_{\varrho}\sqrt{4f_{z}^{2}f_{\varrho}^{2}-f_{z\varphi}^{4}}. \tag{4.14}\]
Note that we have introduced the large \(\varrho\) cutoff, \(\Lambda_{\varrho}\), that was induced by the small \(u\) cutoff in FG gauge \(\epsilon_{u}\):
\[\Lambda_{\varrho}(\epsilon_{u},\omega)= \frac{2m_{1}}{\epsilon_{u}^{2}}+\frac{2m_{1}^{3}s_{\omega}^{2}-( 1+5c_{2\omega})m_{3}}{48m_{1}^{2}}\epsilon_{u}^{2}+s_{\omega}^{2}\frac{m_{3}-m _{1}^{3}}{36m_{1}^{2}}\epsilon_{u}^{4}+\ldots. \tag{4.15}\]
Since the metric functions \(f\) are independent of \(w\), the \(w\) integral can be performed over \([\epsilon_{w},R]\), where \(\epsilon_{w}\ll 1\),
\[\mathcal{A}_{\text{min}}[\Sigma]=16\pi^{4}\left(\frac{R^{2}}{ \epsilon_{w}^{2}}-\log\frac{2R}{\epsilon_{w}}+O(\epsilon_{w}^{0})\right) \mathcal{I}[\Sigma]. \tag{4.16}\]
Using the expressions for the metric functions in eq. (2.21a) in terms of the potential, we find that \(\mathcal{I}\) can be expressed as a total derivative. To see this more clearly, we note that in \((\varrho,\omega)\) coordinates
\[\mathcal{I}[\Sigma]=64\kappa_{11}^{3}\int_{0}^{\pi/2}d\omega\int_ {0}^{\Lambda_{\varrho}(\epsilon_{u},\omega)}d\varrho\,\varrho^{2}c_{\omega} \dot{V}V^{\prime\prime}. \tag{4.17}\]
Switching to \((r,\eta)\) coordinates and using the Laplace equation \(\ddot{V}=-r^{2}V^{\prime\prime}\), we arrive at
\[\mathcal{I}[\Sigma]=-32\kappa_{11}^{3}\int_{0}^{\Lambda_{\eta}} d\eta\int_{0}^{\Lambda_{r}}dr\,\partial_{r}\dot{V}^{2}\, \tag{4.18}\]
where we have mapped the asymptotic cutoff in \(\varrho\) back to the \((r,\eta)\) frame,
\[\Lambda_{r}=\Lambda_{\varrho}(\epsilon_{u},\omega)c_{\omega}\,\qquad \Lambda_{\eta}=\Lambda_{\varrho}(\epsilon_{u},\omega)s_{\omega}. \tag{4.19}\]
The remaining integral in \(\mathcal{I}\) is identical to the one found in computing the central charge for the compact electrostatic solutions in [18] and again in [14]. For clarity, let us analyze \(\mathcal{I}\) in detail here. We can integrate the total derivative in eq. (4.18) and find that the surviving contributions come from the boundary of the region in the \(\varrho-\omega\) quarter-plane spanned by the \(\eta\)-axis at \(\omega=\pi/2\) and the contour at fixed \(\varrho=\Lambda_{\varrho}\) between \(\omega=0\) and \(\omega=\pi/2\). The integral along the \(\eta\)-axis can be decomposed into the regions of \(\eta\in[0,\eta_{n}]\) and \(\eta\in[\eta_{n},\Lambda_{\varrho}(\epsilon_{u},\pi/2)]\); in the latter region, the line charge density takes the form \(\lambda(\eta)=\eta+m_{1}\). In all,
\[\frac{\mathcal{I}[\Sigma]}{32\kappa_{11}^{3}}=\underbrace{\int_{0 }^{\eta_{n}}d\eta\varpi(\eta)^{2}}_{I_{1}}+\underbrace{\int_{\eta_{n}}^{ \Lambda_{\varrho}(\epsilon_{u},\pi/2)}d\eta(\eta+m_{1})^{2}}_{I_{2}}- \underbrace{\int_{\omega=0}^{\omega=\pi/2}\dot{V}^{2}\Big{|}_{\Lambda_{r}}d( \Lambda_{\rho}(\epsilon_{u},\omega))}_{I_{3}}, \tag{4.20}\]
where \(\dot{V}^{2}\Big{|}_{\Lambda_{r}}\) in \(I_{3}\) is held at fixed \(r=\Lambda_{r}\) in the integration over \(\omega\).
Let us take each of the \(I_{a}\) individually, starting with \(I_{2}\). Performing the integral is trivial and leads to the small \(\epsilon_{u}\) expansion
\[I_{2}=\frac{8m_{1}^{3}}{3\epsilon_{u}^{6}}+\frac{4m_{1}^{3}}{\epsilon_{u}^{4}}+\frac{13m_{1}^{3}+2m_{3}}{6\epsilon_{u}^{2}}+\frac{8m_{3}+m_{1}^{3}-18m_{1}^{2}\eta_{n}-18m_{1}\eta_{n}^{2}-6\eta_{n}^{3}}{18}+\ldots. \tag{4.21}\]
The integral \(I_{3}\) can also be easily taken. First, we expand the integrand using the large \(\varrho\) expansions of the potential in eq. (A.10). Then after computing \(d\Lambda_{r}(\epsilon_{u},\omega)\), we expand in small \(\epsilon_{u}\) and integrate term-by-term in \(\omega\in[0,\pi/2]\), which gives
\[I_{3}=\frac{8m_{1}^{3}}{3\epsilon_{u}^{6}}+\frac{8m_{1}^{3}}{3\epsilon_{u}^{4 }}+\frac{5m_{1}^{3}+2m_{3}}{6\epsilon_{u}^{2}}+\frac{m_{1}^{3}+14m_{3}}{45}+ \ldots. \tag{4.22}\]
Combining \(I_{2}\) and \(I_{3}\), we see
\[I_{2}-I_{3}=\frac{4m_{1}^{3}}{3\epsilon_{u}^{4}}+\frac{4m_{1}^{3}}{3\epsilon_{ u}^{2}}+\frac{4m_{3}+m_{1}^{3}-10\eta_{n}(\eta_{n}^{2}+3\eta_{n}m_{1}+3m_{1}^{2})} {30}+\ldots. \tag{4.23}\]
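The expansions in eqs. (4.21)-(4.23) can be checked directly against the cutoff in eq. (4.15); the following sympy sketch (illustrative only) evaluates the \(I_{2}\) integral exactly with the cutoff at \(\omega=\pi/2\) and compares.

```python
import sympy as sp

eps, m1, m3, eta_n = sp.symbols('epsilon_u m_1 m_3 eta_n', positive=True)

# cutoff Lambda_rho(eps, omega = pi/2) from eq. (4.15): s_omega = 1, c_{2 omega} = -1
Lam = 2*m1/eps**2 + (2*m1**3 + 4*m3)/(48*m1**2)*eps**2 + (m3 - m1**3)/(36*m1**2)*eps**4

# I_2 = int_{eta_n}^{Lambda} (eta + m1)^2 d eta, evaluated exactly
I2 = ((Lam + m1)**3 - (eta_n + m1)**3)/3

# quoted expansions: eq. (4.21), eq. (4.22), and eq. (4.23)
I2_quoted = (8*m1**3/(3*eps**6) + 4*m1**3/eps**4 + (13*m1**3 + 2*m3)/(6*eps**2)
             + (8*m3 + m1**3 - 18*m1**2*eta_n - 18*m1*eta_n**2 - 6*eta_n**3)/18)
I3_quoted = (8*m1**3/(3*eps**6) + 8*m1**3/(3*eps**4) + (5*m1**3 + 2*m3)/(6*eps**2)
             + (m1**3 + 14*m3)/45)
I23_quoted = (4*m1**3/(3*eps**4) + 4*m1**3/(3*eps**2)
              + (4*m3 + m1**3 - 10*eta_n*(eta_n**2 + 3*eta_n*m1 + 3*m1**2))/30)

# only positive powers of eps (cutoff-truncation artifacts) may survive
for target in (I2_quoted, I3_quoted + I23_quoted):
    diff = sp.expand(I2 - target)
    assert all(sp.simplify(diff.coeff(eps, p)) == 0 for p in (-6, -4, -2, 0))
print("eqs. (4.21) and (4.23) follow from the cutoff in eq. (4.15)")
```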
Lastly, we need to take care of the integral \(I_{1}\). To do so, we break up the integral over \(\eta\in[0,\eta_{n}]\) into a sum over the intervals \([\eta_{a},\eta_{a+1}]\) for \(a=0,\ldots,n-1\) with \(\eta_{0}=0\). Then, using \(\varpi_{a}=p_{a+1}\eta+\delta_{a+1}\) over each interval we find
\[I_{1}=\frac{1}{3}\sum_{a=0}^{n-1}\left(p_{a+1}^{2}(\eta_{a+1}^{3}-\eta_{a}^{3} )+3\delta_{a+1}p_{a+1}(\eta_{a+1}^{2}-\eta_{a}^{2})+3\delta_{a+1}^{2}(\eta_{a +1}-\eta_{a})\right). \tag{4.24}\]
Combining everything we get
\[\begin{split}\frac{\mathcal{I}[\Sigma]}{32\kappa_{11}^{3}}&=\frac{4m_{1}^{3}}{3\epsilon_{u}^{4}}+\frac{4m_{1}^{3}}{3\epsilon_{u}^{2}}+\frac{4m_{3}+m_{1}^{3}}{30}+\frac{1}{3}\sum_{a=0}^{n}p_{a+1}^{2}(\eta_{a+1}^{3}-\eta_{a}^{3})\\ &\quad+\sum_{a=0}^{n}\delta_{a+1}p_{a+1}(\eta_{a+1}^{2}-\eta_{a}^{2})+\sum_{a=0}^{n}\delta_{a+1}^{2}(\eta_{a+1}-\eta_{a}),\end{split} \tag{4.25}\]
where we slightly abuse the notation by setting \(\eta_{n+1}=0\) in this sum to make the expressions a bit more compact.
The \(\epsilon_{u}\) divergences in \(\mathcal{I}[\Sigma]\) need to be regulated. We again adopt the background subtraction scheme as before, where the background vacuum AdS\({}_{7}\times\mathbb{S}^{4}\) solution is obtained by taking \(n=1\) and \(k_{1}=1\). Taking this limit in eq. (4.25) yields
\[\frac{\mathcal{I}[\emptyset]}{32\kappa_{11}^{3}}=\frac{4m_{1}^{3}}{3\epsilon_ {u}^{4}}+\frac{4m_{1}^{3}}{3\epsilon_{u}^{2}}-\frac{5m_{1}^{3}}{6}+\ldots. \tag{4.26}\]
We then arrive at the expression for the regulated \(\mathcal{I}\):
\[\begin{split}\frac{\mathcal{I}[\Sigma]-\mathcal{I}[\emptyset]}{32\kappa_{11}^{3}}=&\frac{2m_{3}+13m_{1}^{3}}{15}+\frac{1}{3}\sum_{a=0}^{n}p_{a+1}^{2}(\eta_{a+1}^{3}-\eta_{a}^{3})+\sum_{a=0}^{n}\delta_{a+1}p_{a+1}(\eta_{a+1}^{2}-\eta_{a}^{2})\\ &\quad+\sum_{a=0}^{n}\delta_{a+1}^{2}(\eta_{a+1}-\eta_{a}),\end{split} \tag{4.27}\]
which recovers the result of the integral for the non-compact electrostatic solutions in [14]. Thus, the regulated minimal area is given by
\[\mathcal{A}_{\text{min}}[\Sigma]-\mathcal{A}_{\text{min}}[\emptyset]=2^{9}\pi^{4 }\kappa_{11}^{3}\left(\frac{R^{2}}{\epsilon_{w}^{2}}-\log\frac{2R}{\epsilon_{w }}+O(1)\right)(\mathcal{I}[\Sigma]-\mathcal{I}[\emptyset]). \tag{4.28}\]
Proceeding with the computation of \(a_{\Sigma}\), we feed eq. (4.28) in eq. (4.1) to get \(S_{\text{EE}}\). Computing the log derivative with respect to \(R\) of the regularized minimal area functional at \(R=0\) gives the universal part of defect entanglement entropy
\[R\partial_{R}(S_{\text{EE}}[\Sigma]-S_{\text{EE}}[\emptyset])=-\,(\mathcal{I}[ \Sigma]-\mathcal{I}[\emptyset]), \tag{4.29}\]
where we mapped to the field theory variables using \(G_{N}^{(11)}=2^{13}\pi^{4}\kappa_{11}^{3}\) and \(\kappa_{11}=L^{3}/8N\). Using eq. (2.8) together with \(d_{2}=-\frac{1}{3}(m_{1}^{3}-m_{3})\), we can read off the A-type anomaly coefficient
\[a_{\Sigma}=\frac{(\sum_{a=1}^{n}k_{a}\eta_{a})^{3}}{4}+\frac{1} {12}\sum_{a=0}^{n}(p_{a+1}^{2}(\eta_{a+1}^{3}-\eta_{a}^{3})+3\delta_{a+1}p_{a+ 1}(\eta_{a+1}^{2}-\eta_{a}^{2})+3\delta_{a+1}^{2}(\eta_{a+1}-\eta_{a})). \tag{4.30}\]
Recall that the \(\eta_{a}\) are ordered by \(0=\eta_{0}<\eta_{1}<\ldots<\eta_{n}\), and so \((\eta_{a+1}^{j}-\eta_{a}^{j})>0\) for any \(j\in\mathbb{N}\) and for all \(a\). Further, the orbifold parameters are non-negative \(k_{a}\in\mathbb{N}\), and so by definition are the \(p_{a}\), and in addition \(2\delta_{a}\in\mathbb{N}\). Hence, we see that \(a_{\Sigma}\geq 0\). Note that the inequality is saturated at \(n=k_{1}=1\) i.e. \(a_{\Sigma}=0\), which is expected since this line charge density configuration corresponds to having no defect.
For completeness, we can rewrite \(a_{\Sigma}\) in terms of the ranks, \(N_{a}\), of the factors in the Levi subalgebra \(\mathfrak{l}\subset A_{N-1}\) and their associated monopole charges, \(k_{a}\),
\[a_{\Sigma}= \frac{N^{3}}{32}-\frac{1}{96}\sum_{a=1}^{n}\left(\frac{1+2k_{a}} {k_{a}^{2}}N_{a}^{3}+\sum_{b=a+1}^{n}N_{a}k_{b}\left(\frac{N_{a}^{2}}{k_{a}^{2 }}+3\frac{N_{b}^{2}}{k_{b}^{2}}\right)\right). \tag{4.31}\]
While the definite sign of \(a_{\Sigma}\) is a bit less clear in terms of the gauge algebra data, it is nonetheless non-negative, as follows from eq. (4.30).
As we mentioned toward the end of section 3.2, there is a non-trivial consistency check of our results in eq. (4.30) from the comparison to the one-charge (\(q_{2}\to 0\)) solutions. Setting \(n\to 1\) and \(k_{1}\to 1/\sqrt{1-4q_{1}}\) in eq. (4.30) results in
\[a_{\Sigma}\big{|}_{n=1}=\frac{N^{3}}{48}\left(1+2q_{1}-\sqrt{1-4q_{1}}\right). \tag{4.32}\]
Looking back at the computation of \(a_{\Sigma}\) for the two-charge solutions, we need the largest root of \(Q(y)\) with \(q_{2}\to 0\), which is simply \(y_{+}(q_{1})=\frac{1}{2}(1+\sqrt{1-4q_{1}})\). Plugging \(y_{+}(q_{1})\) into eq. (4.10) exactly reproduces eq. (4.32).
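This match can be verified symbolically with a short sympy sketch (illustrative only).

```python
import sympy as sp

N, q1 = sp.symbols('N q1', positive=True)

# largest root of Q(y) in the one-charge limit q2 -> 0
y_plus = (1 + sp.sqrt(1 - 4*q1))/2

# two-charge result, eq. (4.10), evaluated at y_+(q1)
a_two_charge = N**3/24*(1 - y_plus**2)

# single-kink electrostatic result, eq. (4.32)
a_electrostatic = N**3/48*(1 + 2*q1 - sp.sqrt(1 - 4*q1))

assert sp.simplify(a_two_charge - a_electrostatic) == 0
print("a_Sigma agrees between the two-charge and single-kink electrostatic descriptions")
```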
We now compare \(a_{\Sigma}\) to the computations of the 'defect central charge' for these solutions. The 'defect central charge' was computed in [14] using the standard formula for the central charge \(c\) of _standalone_\(4d\)\(\mathcal{N}=2\) SCFTs at large \(N\) holographically dual to \(\text{AdS}_{5}\) solutions in M-theory [68]
\[c=\frac{2^{5}\pi^{3}\kappa_{11}^{3}}{(2\pi\ell_{P})^{9}}\int_{\mathcal{M}_{6}} \left(\frac{\dot{V}\sigma}{2V^{\prime\prime}}\right)^{\frac{3}{2}}, \tag{4.33}\]
which applies to \(11d\) metrics of the form
\[ds_{11}^{2}=\left(\frac{\kappa_{11}^{2}\dot{V}\sigma}{2V^{\prime\prime}}\right)^{ \frac{1}{3}}(ds_{\text{AdS}_{5}}^{2}+ds_{\mathcal{M}_{6}}^{2}). \tag{100}\]
This formula had been used to find the holographic central charge dual to electrostatic solutions with compact internal space engineering irregular punctures [18; 69]. Despite the integral in eq. (4.33) and the one appearing in the defect sphere EE computation having the same form, the crucial difference is in the interpretation of the result: the relative difference between \(a_{\Sigma}\) and \(c\) is a factor of \(-2d_{2}/5\).
Lastly, while monotonicity of the universal part of the defect sphere EE has yet to be tested for \(4d\) DCFTs, in the case of a co-dimension 4 Wilson surface in \(6d\) SCFTs the universal defect contribution to the sphere EE does not behave monotonically under defect RG flows (see e.g. [70]). Due to the relative sign in \(\Delta S_{\text{EE}}\) and the fact that only \(a_{\Sigma}\) is known to obey a weak defect \(a\)-theorem12, it is expected that the universal part of the defect sphere EE is not a monotone along defect RG flows.
Footnote 12: The recent entropic proof in [43] of the irreversibility of defect RG flows in addition to the dilaton effective action methods (à la [37]) in [71] have firmly established the existence of at least a weak defect \(a\)-theorem.
## 5 Discussion
In this work, we have analyzed solutions in \(11d\) SUGRA that holographically describe \(1/4-\) and \(1/2-\)BPS co-dimension 2 defects in the \(6d\)\(A_{N-1}\)\(\mathcal{N}=(2,0)\) SCFT at large \(N\).
Our holographic computations of the defect contribution to the one-point function of the stress energy tensor have revealed simple expressions for the defect Weyl anomaly coefficient \(d_{2}\) in section 3. For the \(1/4\)-BPS two-charge solutions specified by charges \(q_{1}\), \(q_{2}\), we have found that \(d_{2}\propto N^{3}(q_{1}+q_{2})\). For the \(1/2\)-BPS electrostatic solutions determined by a potential solving a Laplace-type equation with moments \(m_{j}\), \(d_{2}\propto(m_{1}^{3}-m_{3})\sim N^{3}-\sum_{a}N_{a}^{3}\) where \(N=\sum_{a}N_{a}\). Using the \(4d\) form of the defect ANEC, which states \(d_{2}\leq 0\), we have demonstrated that all of the allowed two-charge solutions found in [13] and the electrostatic solutions in [14] obey the bound and are thus consistent with this known defect energy condition [43]. We were also able to compare against a similar computation for the two-charge solutions done in \(7d\)\(\mathcal{N}=4\) gauged SUGRA, and found an agreement with \(\left\langle T_{ij}\right\rangle\) in [13].
In section 4, we used the tools developed in [61; 62] to holographically compute the contribution of flat, co-dimension 2 defects to the EE of a spherical region in the dual field theory. By isolating the universal, log-divergent part of the defect sphere EE, we were able to find closed form expressions for the A-type anomaly \(a_{\Sigma}\) for both defect systems considered. Since we know that the universal part of the defect sphere EE is a linear combination of \(a_{\Sigma}\) and \(d_{2}\) as in eq. (2.7), by combining \(\Delta\left\langle T_{ij}\right\rangle\) and \(\Delta S_{\text{EE}}\), we have a direct computation of \(a_{\Sigma}\): for the two-charge solutions we found \(a_{\Sigma}\propto N^{3}(1-y_{+}^{2})\) where \(y_{+}\) is the largest root of the quartic polynomial in eq. (12b), while \(a_{\Sigma}\) for the electrostatic solutions in eq. (4.30) is a complicated function of the data of the line charge distribution that specifies
the solution. For the electrostatic solutions, we have shown that the computation of the holographic 'central charge' in [14] is proportional to the universal part of the defect sphere EE. Further, we were able to show that the complicated sum over line charge density data that appears in \(a_{\Sigma}\) is the same sum that determines the large \(N\) 'central charge' \(c(=a)\) for the _compact_ electrostatic solutions describing \(4d\)\(\mathcal{N}=2\) SCFTs; the important difference is that the defect \(a_{\Sigma}\) has an additional contribution of \(N^{3}/32\). In both classes of defects, we have also shown that \(a_{\Sigma}\geq 0\), where the inequality is only saturated for a trivial defect.
Curiously, in appendix B, we computed the holographically renormalized on-shell action for the 11d uplift of the two-charge solutions using the full form of the radial cutoff in FG gauge and found that the log divergent part of the action cannot be written in terms of \(a_{\Sigma}\) (as would be expected from the same computation done in the 7d gauged SUGRA description of the two-charge defects [13]), \(d_{2}\), or a linear combination of the two. The reason for this discrepancy is unclear at this time, but may be related to the insufficiency of the background subtraction scheme for the on-shell action, which highlights a need for a full covariant holographic renormalization scheme for 'defects' in 11d SUGRA.
With the holographic predictions for \(a_{\Sigma}\) and \(d_{2}\) in hand, let us compare to results in the field theory at large \(N\). We will focus entirely on the 1/2-BPS electrostatic solutions in the following comparisons.
#### Defect supersymmetric Casimir energy
In ordinary \(4d\) SCFTs with R-symmetry placed on \(\mathbb{S}^{1}_{\beta}\times\mathbb{S}^{3}\), the supersymmetric localized partition function can be decomposed as a product of an exponential prefactor multiplying the superconformal index
\[\mathcal{Z}_{\mathbb{S}^{1}_{\beta}\times\mathbb{S}^{3}}=e^{- \beta E_{C}}\mathcal{I}. \tag{110}\]
The supersymmetric Casimir energy (SCE), \(E_{C}\), can be expressed in terms of the conformal anomalies \(a\) and \(c\)[72; 73] of the theory, the equivariant integral of the anomaly polynomial [74], or 't Hooft anomalies [75]. Given the results in [76] for the localized partition function of a 1/2-BPS co-dimension 2 defect in a \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) SCFT labelled by \(\vartheta\) wrapping \(\Sigma=\mathbb{S}^{1}_{\beta}\times\mathbb{S}^{3}\subset\mathbb{S}^{1}_{\beta}\times\mathbb{S}^{5}\), it was conjectured in [77] that the change in the exponential prefactor due to the introduction of the defect was in fact the defect SCE and could be related to defect conformal anomalies13. Now that we have holographic predictions for two defect anomalies, we can look for a superficial match to this field theory quantity.
Footnote 13: Evidence for a version of this conjecture for \(\mathfrak{d}=2\) defects appeared to support the claim, but no rigorous proof has yet been given.
As a very brief overview, we start the comparison by putting the ambient theory on the squashed \(\mathbb{S}^{1}_{\beta}\times\mathbb{S}^{5}_{\mathfrak{b}}\) and reducing along the \(\mathbb{S}^{1}\) factor. The localized partition function of the \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) SCFT in the unrefined limit becomes the partition function of \(5d\)\(\mathcal{N}=2\)\(U(N)\) super-Yang-Mills theory on \(\mathbb{S}^{5}_{\mathfrak{b}}\), which determines the ambient SCE
\[E_{C}[\emptyset]\equiv\frac{\mathfrak{c}}{24}\,\qquad\text{where}\qquad \mathfrak{c}=N(N^{2}-1)(\mathfrak{b}+\mathfrak{b}^{-1})^{2}+N-1. \tag{111}\]
The quantity \(\mathfrak{c}\) in this picture is the central charge of the 2d \(W_{N}\)-algebra on the plane orthogonal to the directions that the defect will eventually wrap [7; 76]. The introduction of a co-dimension 2 defect breaks the gauge algebra to the Levi subalgebra \(\mathfrak{l}=\mathfrak{s}\left[\bigoplus_{a=1}^{n}\mathfrak{u}(N_{a})\right]\). The most general 1/2-BPS defect configuration allows for monodromy parameters \(\vec{\mathfrak{w}}=(\mathfrak{w}_{1},\ldots,\mathfrak{w}_{n})\) for the Levi factors. The change in the SCE due to introducing the defect along \(\Sigma\) labelled by \(\vartheta:\mathfrak{sl}(2)\to\mathfrak{g}\) with monodromy parameters \(\vec{\mathfrak{w}}\) was found to be given by [76; 77]
\[E_{C}[\Sigma]_{\vartheta,\vec{\mathfrak{w}}}-E_{C}[\emptyset] =\frac{1}{2}(\mathfrak{b}+\mathfrak{b}^{-1})^{2}[(\hat{\varrho} _{\mathfrak{l}},\hat{\varrho}_{\mathfrak{l}})-(\hat{\varrho}_{\mathfrak{g}},\hat{\varrho}_{\mathfrak{g}})]+\frac{1}{2}(\vec{\mathfrak{w}},\vec{ \mathfrak{w}}), \tag{120}\] \[=-\frac{1}{6}\left(N^{3}-\sum_{a=1}^{n}N_{a}^{3}-3(\vec{ \mathfrak{w}},\vec{\mathfrak{w}})\right).\]
In the second line we took the limit \(\mathfrak{b}\to 1\), and replaced the scalar product of the Weyl vectors - denoted \(\hat{\varrho}_{\mathfrak{l}}\) and \(\hat{\varrho}_{\mathfrak{g}}\) for \(\mathfrak{l}\) and \(\mathfrak{g}=\mathfrak{su}(N)\), respectively - with
\[(\hat{\varrho}_{\mathfrak{l}},\hat{\varrho}_{\mathfrak{l}})=\frac{1}{12}\sum_ {a=1}^{n}(N_{a}^{3}-N_{a}),\qquad(\hat{\varrho}_{\mathfrak{g}},\hat{\varrho}_ {\mathfrak{g}})=\frac{1}{12}(N^{3}-N). \tag{121}\]
Turning off the monodromy parameters14 (\(\mathfrak{w}_{a}=0\)) in eq. (120) we see the superficial relation
Footnote 14: In light of the compact LLM-type solutions found recently in [78] where the additional internal \(U(1)\) symmetry is broken by the presence of scalar fields, which are interpreted as monodromy parameters, it may be possible to pin down a more precise relation between \(E_{C}\) and defect anomalies by computing \(\langle T_{\mu\nu}\rangle\) if similar non-compact solutions allowing for \(\mathfrak{w}_{a}\neq 0\) can be constructed.
\[E_{C}[\Sigma]_{\vartheta,\vec{0}}-E_{C}[\emptyset]=4d_{2}|_{k_{a}\to 1}\, \tag{122}\]
where on the right hand side we take all orbifold parameters \(k_{a}\to 1\) in eq. (3.28b).
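The arithmetic behind eqs. (120)-(122) can be checked for a sample partition with a short sympy sketch (illustrative only).

```python
import sympy as sp

# a sample partition N = N1 + N2 + N3 with generic part sizes
parts = sp.symbols('N1 N2 N3', positive=True)
N = sum(parts)

# Weyl-vector norms, eq. (121)
rho_l_sq = sp.Rational(1, 12)*sum(Na**3 - Na for Na in parts)
rho_g_sq = sp.Rational(1, 12)*(N**3 - N)

# first line of eq. (120) at b -> 1 with vanishing monodromies: (b + 1/b)^2 -> 4
delta_EC = 2*(rho_l_sq - rho_g_sq)

# second line of eq. (120) and 4*d_2 with all k_a -> 1 in eq. (3.28b)
second_line = -sp.Rational(1, 6)*(N**3 - sum(Na**3 for Na in parts))
four_d2 = 4*(-sp.Rational(1, 24))*(N**3 - sum(Na**3 for Na in parts))

assert sp.expand(delta_EC - second_line) == 0
assert sp.expand(second_line - four_d2) == 0
print("defect SCE at b = 1, w = 0 equals 4 d_2 with k_a -> 1")
```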
Since the expression for the defect SCE in terms of explicit defect Weyl anomalies is still unknown and 4d DCFTs have 23 possible parity even anomalies, we cannot definitively state that the defect SCE is determined solely by \(d_{2}\). We note, though, that a similar relation was found for co-dimension 4 Wilson surface defects: the defect SCE in that case was also related to the \(2d\) DCFT equivalent of \(d_{2}\). Since \(2d\) DSCFTs preserving at least \(\mathcal{N}=(2,0)\) supersymmetry have only two independent Weyl anomalies [31], which for the Wilson surface defect can be clearly distinguished from one another [30], it was conjectured that \(d_{2}\) alone fixed the defect SCE [77]. So, while it is not inconceivable that \(d_{2}\) could appear in the defect SCE for co-dimension 2 defects, we leave establishing the precise relation for future work.
#### R-anomalies
Ordinarily in \(4d\) SCFTs, there are non-perturbative formulae that relate the A-type and B-type Weyl anomalies to 't Hooft anomalies for the superconformal \(R\) symmetry [38]. In [71], it was conjectured that \(a_{\Sigma}\) obeys the same relation to defect \(R\)-anomalies as a standalone
theory15
Footnote 15: It was also conjectured that a B-type defect anomaly built out of the square of intrinsic Weyl tensor (\(c_{\Sigma}|\bar{W}|^{2}\)) obeys the usual relation [38]
\[c_{\Sigma}=\frac{9k_{rrr}-5k_{r}}{32}. \tag{110}\]
However, the basis used in [9] did not include \(|\bar{W}|^{2}\). From the Gauss-Codazzi and Ricci relations, \(|\bar{W}|^{2}\) is related to several anomalies in the original basis (none of which include \(d_{2}\)). So it is unclear at this time what observables can be used to compute \(c_{\Sigma}\). Though it is reasonable to expect that the defect limit of \(\langle T_{\mu\nu}T_{\rho\sigma}\rangle\) may be the appropriate correlator to compute \(c_{\Sigma}\), proving this is the subject of future work.
\[a_{\Sigma}=\frac{9k_{rrr}-3k_{r}}{32}, \tag{111}\]
where \(k_{rrr}\) and \(k_{r}\) are the cubic and mixed \(U(1)_{r}\) R-anomalies. Importantly for the defect theory written in \(4d\)\(\mathcal{N}=1\) language, the superconformal \(r_{\Sigma}\) symmetry is a linear combination of the Cartan generator of the ambient \(SU(2)_{R}\) R-symmetry and the generator of normal bundle rotation \(M_{\varphi}\)[71]
\[r_{\Sigma}=\frac{2}{3}(2r_{6d}-M_{\varphi}). \tag{112}\]
It was further stated in [71] that precisely for the types of defects holographically described by the electrostatic solutions considered above, in order to determine the \(R\) and mixed anomaly we should use the counting formulae [6]
\[k_{rrr}=\frac{2}{27}(\mathfrak{n}_{v}-\mathfrak{n}_{h})+\frac{8}{9}\mathfrak{ n}_{v},\qquad k_{r}=\frac{2}{3}(\mathfrak{n}_{v}-\mathfrak{n}_{h}), \tag{113}\]
where \(\mathfrak{n}_{v}\) is the number of \(4d\) vector multiplets and \(\mathfrak{n}_{h}\) is the number hypermultiplets. In turn, both \(\mathfrak{n}_{h}\) and \(\mathfrak{n}_{v}\) are determined by the Young diagram data.
As we have pointed out above, the defect A-type anomaly contains a contribution that is precisely of the form of the central charge \(c\) of \(4d\) SCFTs engineered from irregularly punctured Riemann surface compactifications of \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) series SCFTs dual to electrostatic solutions of the type studied above. Further, in [18; 6], a match was found between the holographic computation of \(c\) of the dual \(4d\) SCFT and the large \(N\) behavior of the central charge computed in the field theory using the R-anomalies and eq. (113). However, since we have found \(a_{\Sigma}\sim c+N^{3}/32\), it is clear that the naive application of eqs. (111) and (113) does not directly reproduce this result.
### Future directions and open questions
The work that we have presented in this paper is only scratching the surface of \(4d\) defects. While a full accounting of all of the defect Weyl anomalies of these systems through computing entropies, correlation functions, or other physical quantities is not currently possible, there are a number of questions opened up by our analysis that we will leave for future work.
#### Probe branes
Even though we have access to the full \(11d\) SUGRA bubbling geometry solution, it is useful to consider limiting cases where we can instead appeal to a probe brane construction. By finding \(\kappa\)-symmetric embeddings of probe M5-branes in an \(\text{AdS}_{7}\times\mathbb{S}^{4}\) background wrapping \(\text{AdS}_{5}\subset\text{AdS}_{7}\) and an \(\mathbb{S}^{1}\) living either in the internal \(\mathbb{S}^{4}\) or in the \(\text{AdS}_{7}\), we expect to be able to holographically study defects engineered by Young diagrams associated to totally symmetric or totally antisymmetric representations of \(\mathfrak{su}(N)\), similar to the co-dimension 4 Wilson surface defects from M2 and M5 probe branes [70; 79; 80]. One advantage of studying these defect systems using probe brane holography is that we will have clearer access to the study of defect RG flows, which will provide holographic tests of the defect \(a_{\Sigma}\)-theorem in a strongly coupled theory, a means to study defect phase transitions, and a setting to test the monotonicity of the defect sphere EE along an RG flow [70]. Further taking inspiration from \(\text{AdS}_{5}\) holography [81; 82; 83], if one were able to construct a \(\kappa\)-symmetric probe M5 brane embedding in global \(\text{AdS}_{7}\), say with an \(\mathbb{S}^{1}\times\mathbb{S}^{5}\) boundary, one could try to compare to recent results in type IIB probe brane holography and supersymmetric localization in \(3d/5d\) systems on a sphere [84; 85]. These questions are being investigated in work currently in progress.
#### Dimensional reduction
By (partial) topologically twisted dimensional reduction on a Riemann surface or a 3-manifold, \(6d\) SCFTs can be used to engineer large classes of \(4d\)[86; 22] and \(3d\)[87; 88] theories. Further, we can enrich the algorithm to determine the lower dimensional theory by starting from a \(6d\) theory deformed by their natural co-dimension 2 and 4 defects to end up with a dimensionally reduced theory possibly with defects [89; 90; 21]. As we have seen in the computation of the A-type anomaly for co-dimension 2 defects in the \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) series SCFTs, there is a connection to the central charge of a \(4d\) SCFT engineered on a Riemann surface with regular punctures, at least in the large \(N\) limit. It is natural, then, to wonder how the rest of the data contained in the other 22 parity even defect Weyl anomalies can be used to characterize the lower dimensional theory, or whether the remaining unknown defect Weyl anomalies are vanishing or fixed by \(a_{\Sigma}\) and \(d_{2}\). For BPS Wilson surfaces in \(6d\) preserving at least \(2d\)\(\mathcal{N}=(2,0)\) supersymmetry, the defect supersymmetry imposes non-trivial relations among the B-type defect Weyl anomalies [31], but as of yet, there is no known relation imposed by \(4d\)\(\mathcal{N}=2\) defect supersymmetry.
A special case of dimensional reduction of the \(6d\)\(\mathcal{N}=(2,0)\)\(A_{N-1}\) theory is taking the Riemann surface to be \(\mathbb{T}^{2}\), which reduces to \(4d\)\(\mathcal{N}=4\)\(SU(N)\) super Yang-Mills theory. When wrapped on \(\mathbb{T}^{2}\), the co-dimension 2 defects labelled by \(\vartheta:\mathfrak{sl}(2)\rightarrow\mathfrak{su}(N)\) in the parent theory that we have holographically studied above reduce to Gukov-Witten type defects. In the absence of complex structure deformations on \(\mathbb{T}^{2}\), all of the defect Weyl anomalies are equal to one another and are \(\propto N^{2}-\sum_{a}N_{a}^{2}\)[91; 28; 29], which is closer in appearance to \(d_{2}\) in eq. (3.28b) than \(a_{\Sigma}\) in eq. (4.30). However, an exact relation to determine the anomalies of the Gukov-Witten defect from the higher dimensional defect anomalies is as of yet unknown.
#### Defect Weyl anomalies and 't Hooft anomalies
As we saw in the attempt to match any of the holographic results for \(a_{\Sigma}\) or \(d_{2}\) to large \(N\) field theory computations, there are points of tension that should be resolved. One of the biggest issues, though, is that the putative relation between defect 't Hooft anomalies and defect Weyl anomalies seemed to disagree with the holographic results. While it remains a possibility that the issue stems from the holographic side of the story, there is an open question on the field theory side that must be addressed as well. Namely, the formulae conjectured in [71] only relate two of the twenty-three parity even defect Weyl anomalies to the defect R-anomalies for co-dimension \(\geq 2\) defects in \(4d\). That is, only the \(\overline{E}_{4}\) and \(|\overline{W}|^{2}\) structures in the defect anomaly have been supersymmetrized. A similar supersymmetrization of the defect Weyl anomaly for \(2d\) defects limited to be sensitive only to the intrinsic geometry of the defect submanifold was carried out in [92]. This naturally leads one to wonder if it is possible to supersymmetrize the full defect Weyl anomaly including the anomalies containing the second fundamental form and normal bundle curvature in order to arrive at a complete set of non-perturbative formulae for defect Weyl anomalies.
The authors would like to thank Pieter Bomans, Michael Gutperle, and Andy O'Bannon for useful discussions throughout this work. We would also like to thank Pieter Bomans and Michael Gutperle for comments on the draft. JE would also like to thank the EIC Theory Institute at BNL for partial financial support and warm hospitality while this work was being completed. The work of PC is supported by a Mayflower studentship from the University of Southampton. The work of BR is supported by the INFN. The work of BS is supported in part by the STFC consolidated grant ST/T000775/1. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0024557.
Disclaimer: "This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof."
## Appendix A Fefferman-Graham coordinates
The starting point for computing holographic quantities associated with the two-charge solutions and electrostatic solutions is finding the asymptotic transformation which maps
their respective metrics into FG gauge. In this appendix, we will first derive the transformations of eq. (16) and find the asymptotic expressions for the metric functions in FG gauge. We will also derive the transformation of eq. (21a) into FG gauge. In this process, we will find necessary conditions on the mixing of two of the angular coordinates that allow for the metrics to be put into FG form. This mixing of angular coordinates is interpreted in the field theory language as an identification of the defect superconformal R-symmetry.
### Two-charge solutions
In this subsection, we will focus on putting the two-charge solutions in FG gauge. The explicit forms of the metric functions in eq. (16) are as follows:
\[\hat{f}^{2}_{\text{AdS}} =\kappa^{2/3}\left[\frac{c_{\zeta}^{2}\left(q_{1}+y^{2}\right) \left(q_{2}-q_{2}c_{2\psi}+2y^{2}\right)}{2y}+y\left(q_{2}+y^{2}\right)s_{\zeta }^{2}\right]^{1/3}\, \tag{18a}\] \[\hat{f}^{2}_{y} =\kappa^{2/3}\frac{\hat{f}^{2}_{\text{AdS}}y}{4\left(q_{1}+y^{2} \right)\left(q_{2}+y^{2}\right)-4y^{3}}\,\] (18b) \[\hat{f}^{2}_{z} =\kappa^{2/3}\Bigg{[}\frac{c_{\zeta}^{2}\left(c_{2\psi}\left( \left(a_{2}+1\right){}^{2}q_{2}y+a_{2}^{2}y^{3}-q_{2}\left(q_{1}+y^{2}\right) \right)+\left(a_{2}+1\right){}^{2}q_{2}y+\left(a_{2}^{2}-2\right)y^{3}\right)}{ 2y\hat{f}^{4}_{\text{AdS}}}\] \[\quad+\frac{s_{\zeta}^{2}\left(y\left(\left(a_{1}^{2}-1\right)y+ \left(q_{2}+y^{2}\right)\right)+\left(a_{1}+1\right){}^{2}q_{1}\right)}{\hat{f }^{4}_{\text{AdS}}}+\frac{c_{\zeta}^{2}\left(q_{1}+y^{2}\right)\left(q_{2}+2y^ {2}\right)}{2\hat{f}^{4}_{\text{AdS}}y}\Bigg{]}\,\] (18c) \[\hat{f}^{2}_{\phi_{1}} =\kappa^{2/3}\frac{\left(q_{1}+y^{2}\right)s_{\zeta}^{2}}{4\hat{f }^{4}_{\text{AdS}}}\,\] (18d) \[\hat{f}^{2}_{\phi_{2}} =\kappa^{2/3}\frac{c_{\psi}^{2}c_{\zeta}^{2}\left(q_{2}+y^{2} \right)}{4\hat{f}^{4}_{\text{AdS}}}\,\] (18e) \[\hat{f}^{2}_{z\phi_{1}} =\kappa^{2/3}\frac{s_{\zeta}^{2}\left(a_{1}q_{1}+a_{1}y^{2}+q_{1} \right)}{\hat{f}^{4}_{\text{AdS}}}\,\] (18f) \[\hat{f}^{2}_{z\phi_{2}} =\kappa^{2/3}\frac{c_{\psi}^{2}c_{\zeta}^{2}\left(a_{2}q_{2}+a_{2} y^{2}+q_{2}\right)}{\hat{f}^{4}_{\text{AdS}}}\,\] (18g) \[\hat{f}^{2}_{\psi} =\kappa^{2/3}\frac{c_{\zeta}^{2}\left(q_{2}-q_{2}c_{2\psi}+2y^{2} \right)}{8\hat{f}^{4}_{\text{AdS}}}\,\] (18h) \[\hat{f}^{2}_{\zeta} =\kappa^{2/3}\frac{q_{1}c_{\zeta}+2q_{2}c_{\psi}^{2}s_{\zeta}^{2 }+q_{1}+2y^{2}}{8\hat{f}^{4}_{\text{AdS}}}\,\] (18i) \[\hat{f}^{2}_{\psi\zeta} =\kappa^{2/3}\frac{q_{2}c_{\psi}c_{\zeta}s_{\psi}s_{\zeta}}{2 \hat{f}^{4}_{\text{AdS}}}\, \tag{18j}\]
where we denote \(\kappa=\hat{g}^{3}N\ell_{\text{P}}^{3}/2\).
We seek an asymptotic map from \(\{y,\,\psi,\,\zeta\}\) to the FG coordinates \(\{u,\aleph,\theta\}\) in the
large-\(y\)/small-\(u\) regime. By solving
\[\hat{f}_{y}^{2}dy^{2}+\hat{f}_{\psi}^{2}d\psi^{2}+\hat{f}_{\zeta}^{2}d\zeta^{2}+\hat{f}_{\psi\zeta}^{2}d\psi d\zeta=\frac{L^{2}}{u^{2}}du^{2}+\frac{L^{2}}{4}\left(c_{\theta}^{2}\hat{\alpha}_{\aleph}d\aleph^{2}+\hat{\alpha}_{\theta}d\theta^{2}+\hat{\alpha}_{\theta\aleph}d\theta d\aleph\right) \tag{110}\]
order by order in \(u\), we find that the appropriate asymptotic map is
\[\begin{split} y&=\frac{1}{u^{2}}+\frac{1}{2}+\frac{\left(2q_{1}-q_{2}\right)c_{2\theta}-2q_{2}c_{2\aleph}c_{\theta}^{2}-10q_{1}-9q_{2}+3}{48}u^{2}+\ldots,\\ \psi&=\aleph+\frac{q_{2}s_{2\aleph}}{24}u^{4}+\ldots,\\ \zeta&=\theta-\frac{s_{2\theta}\left(q_{1}-q_{2}c_{\aleph}^{2}\right)}{24}u^{4}+\ldots\,\end{split} \tag{111}\]
where we have suppressed higher orders in \(u\) due to their cumbersome expressions. To complete this map, we need to identify \(\kappa=L^{3}\), where \(L\) denotes the radius of the asymptotic AdS\({}_{7}\) spacetime.
Mapping all of the other metric functions in eq. (16), we find the FG form of the metric to be
\[\begin{split} ds_{\text{FG}}^{2}&=\frac{L^{2}}{u^{ 2}}(du^{2}+\hat{\alpha}_{\text{AdS}}ds_{\text{AdS}_{5}}^{2}+\hat{\alpha}_{z} dz^{2})+L^{2}s_{\theta}^{2}\hat{\alpha}_{z\varphi_{1}}dzd\varphi_{1}+L^{2}c_{ \aleph}^{2}c_{\theta}^{2}\hat{\alpha}_{z\varphi_{2}}dzd\varphi_{2}\\ &+\frac{L^{2}}{4}(\hat{\alpha}_{\theta}d\theta^{2}+s_{\theta}^{2} \hat{\alpha}_{\varphi_{1}}d\varphi_{1}^{2}+c_{\theta}^{2}(\hat{\alpha}_{\aleph }d\aleph^{2}+c_{\aleph}^{2}\hat{\alpha}_{\varphi_{2}}d\varphi_{2}^{2})+\hat{ \alpha}_{\theta\aleph}d\theta d\aleph),\end{split} \tag{112}\]
where we have transformed the angular coordinates using
\[\phi_{I}=\varphi_{I}-2a_{I}z. \tag{113}\]
Note that since \(\phi_{I}\) and \(z\) are all \(2\pi\)-periodic and \(a_{I}\in\mathbb{Z}/2\), the new angular coordinates \(\varphi_{I}\) are also \(2\pi\)-periodic. The metric functions have the asymptotic behavior
\[\hat{\alpha}_{\text{AdS}} =1+\frac{u^{2}}{2}+\frac{3-2q_{1}+3q_{2}-10q_{2}c_{2\aleph}c_{ \theta}^{2}+5(2q_{1}-q_{2})c_{2\theta}}{48}u^{4}+\ldots, \tag{114a}\] \[\hat{\alpha}_{z} =1-\frac{u^{2}}{2}+\frac{3-2q_{1}+3q_{2}-10q_{2}c_{2\aleph}c_{ \theta}^{2}+5(2q_{1}-q_{2})c_{2\theta}}{48}u^{4}+\ldots,\] (114b) \[\hat{\alpha}_{\varphi_{1}} =1+\frac{10q_{2}c_{2\aleph}c_{\theta}^{2}+5(q_{2}-2q_{1})c_{2 \theta}+14q_{1}-11q_{2}}{24}u^{4}+\ldots,\] (114c) \[\hat{\alpha}_{\varphi_{2}} =1+\frac{10q_{2}c_{2\aleph}c_{\theta}^{2}+5(q_{2}-2q_{1})c_{2 \theta}-6q_{1}+9q_{2}}{24}u^{4}+\ldots,\] (114d) \[\hat{\alpha}_{z\varphi_{1}} =q_{1}u^{4}-q_{1}u^{6}+\ldots,\] (114e) \[\hat{\alpha}_{z\varphi_{2}} =q_{2}u^{4}-q_{2}u^{6}+\ldots,\] (114f) \[\hat{\alpha}_{\theta} =1+\frac{5q_{2}c_{2\aleph}+2q_{1}-3q_{2}}{12}u^{4}+\ldots,\] (114g) \[\hat{\alpha}_{\aleph} =1+\frac{5(q_{2}-2q_{1})c_{2\theta}-10q_{2}c_{2\aleph}s_{\theta}^ {2}-6q_{1}-q_{2}}{24}u^{4}+\ldots,\] (114h) \[\hat{\alpha}_{\theta\aleph} =\frac{5q_{2}s_{2\theta}s_{2\aleph}}{12}u^{4}+\ldots. \tag{114i}\]
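As a cross-check of these expansions (an illustrative sympy sketch with the coefficients transcribed from eqs. (114a)-(114i)), sending \(q_{I}\to 0\) reduces all of the metric functions to their AdS\({}_{7}\times\mathbb{S}^{4}\) values, consistent with the discussion in section 3.1.

```python
import sympy as sp

u, theta, aleph, q1, q2 = sp.symbols('u theta aleph q1 q2')
ct, st = sp.cos(theta), sp.sin(theta)
c2t, c2a = sp.cos(2*theta), sp.cos(2*aleph)

# metric functions transcribed from eqs. (114a)-(114i), to the order shown there
alpha = {
    'AdS':   1 + u**2/2 + (3 - 2*q1 + 3*q2 - 10*q2*c2a*ct**2 + 5*(2*q1 - q2)*c2t)/48*u**4,
    'z':     1 - u**2/2 + (3 - 2*q1 + 3*q2 - 10*q2*c2a*ct**2 + 5*(2*q1 - q2)*c2t)/48*u**4,
    'phi1':  1 + (10*q2*c2a*ct**2 + 5*(q2 - 2*q1)*c2t + 14*q1 - 11*q2)/24*u**4,
    'phi2':  1 + (10*q2*c2a*ct**2 + 5*(q2 - 2*q1)*c2t - 6*q1 + 9*q2)/24*u**4,
    'zphi1': q1*u**4 - q1*u**6,
    'zphi2': q2*u**4 - q2*u**6,
    'theta': 1 + (5*q2*c2a + 2*q1 - 3*q2)/12*u**4,
    'aleph': 1 + (5*(q2 - 2*q1)*c2t - 10*q2*c2a*st**2 - 6*q1 - q2)/24*u**4,
    'theta_aleph': 5*q2*sp.sin(2*theta)*sp.sin(2*aleph)/12*u**4,
}

# q_I -> 0 should give the exact AdS7 x S4 Fefferman-Graham expansion, matching g^(0) of section 3.1
vac = {key: expr.subs({q1: 0, q2: 0}) for key, expr in alpha.items()}
assert sp.simplify(vac['AdS'] - (1 + u**2/2 + u**4/16)) == 0
assert sp.simplify(vac['z'] - (1 - u**2/2 + u**4/16)) == 0
assert all(sp.simplify(vac[k] - 1) == 0 for k in ('phi1', 'phi2', 'theta', 'aleph'))
assert all(sp.simplify(vac[k]) == 0 for k in ('zphi1', 'zphi2', 'theta_aleph'))
print("q_I -> 0 reproduces the AdS7 x S4 metric in FG gauge")
```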
If we had not transformed to \(\varphi_{I}\), we would not have been able to put the metric in FG form. We can see this in the original \(\phi_{I}\) coordinates, where \(\hat{\alpha}_{z\phi_{I}}\) has an \(O(1)\) term which is proportional to \(a_{I}\). FG gauge requires \(\hat{\alpha}_{z\phi_{I}}\sim u^{4}\), which would mean setting \(a_{I}=0\). However, the values of the \(a_{I}\)'s are set by regularity, i.e.
\[a_{I}=-\frac{q_{I}}{q_{I}+y_{+}^{2}}\, \tag{100}\]
and so, we cannot simply tune them to zero without also setting the corresponding \(q_{I}=0\), which lands us on the pure AdS\({}_{7}\times\mathbb{S}^{4}\) solution.
### Electrostatic solutions
We now turn to deriving the FG form of the metric for the electrostatic solutions. Finding the asymptotic expansions of the metric factors in eq. (21a) requires explicit expressions for \(\dot{V}\), \(\ddot{V}\), \(\dot{V}^{\prime}\), \(V^{\prime\prime}\), and \(\sigma\). We can compute the indefinite integral in \(V\) for a trial line charge distribution \(\varpi_{a}(\eta)=p_{1+a}\eta+\delta_{1+a}\),
\[-\frac{1}{2}\int d\eta^{\prime}G(r,\eta,\eta^{\prime})\varpi_{a} (\eta^{\prime}) =\frac{p_{1+a}}{2}\bigg{(}\sqrt{r^{2}+(\eta+\eta^{\prime})^{2}}- \sqrt{r^{2}+(\eta-\eta^{\prime})^{2}} \tag{101}\] \[\quad-\eta\tanh^{-1}\Big{(}\frac{\eta+\eta^{\prime}}{\sqrt{r^{2} +(\eta+\eta^{\prime})^{2}}}\Big{)}+\eta\tanh^{-1}\Big{(}\frac{\eta-\eta^{ \prime}}{\sqrt{r^{2}+(\eta-\eta^{\prime})^{2}}}\Big{)}\bigg{)}\] \[\quad+\frac{\delta_{1+a}}{2}\left(\tanh^{-1}\Big{(}\frac{\eta+ \eta^{\prime}}{\sqrt{r^{2}+(\eta+\eta^{\prime})^{2}}}\Big{)}+\tanh^{-1}\Big{(} \frac{\eta-\eta^{\prime}}{\sqrt{r^{2}+(\eta-\eta^{\prime})^{2}}}\Big{)} \right)\,\]
and then build up the full potential by summing over the intervals. Clearly, evaluating the result above in the \(\eta^{\prime}\to\infty\) region leads to linear and logarithmic divergences. However, when evaluating derivatives of the right-hand side above, these divergences are eliminated, and only derivatives of \(V\) appear in all of the computations carried out below and in the main body of the text.
The asymptotically AdS\({}_{7}\times\mathbb{S}^{4}\) region corresponds to the limits \(r\), \(\eta\to\infty\). In order to facilitate the expansion of the derivatives of the electrostatic potential in this region, we redefine \(r=\varrho c_{\omega}\) and \(\eta=\varrho s_{\omega}\), with \(\omega\in[0,\pi/2]\), so that
\[f_{3}^{2}(dr^{2}+d\eta^{2})\to f_{\varrho}^{2}d\varrho^{2}+f_{\omega}^{2}d \omega^{2}, \tag{102}\]
with \(f_{\varrho}^{2}=f_{3}^{2}\) and \(f_{\omega}^{2}=f_{3}^{2}\varrho^{2}\). The AdS\({}_{7}\times\mathbb{S}^{4}\) region now lies in the \(\varrho\to\infty\) limit. We can compute the asymptotic expansions of the derivatives of the electrostatic potential in this region in terms of its moments as follows,
\[\dot{V} =\varrho s_{\omega}+m_{1}s_{\omega}-\frac{m_{3}c_{\omega}^{2}s_{ \omega}}{2\varrho^{2}}+\frac{m_{5}\left(7c_{2\omega}-1\right)c_{\omega}^{2}s_ {\omega}}{16\varrho^{4}}+\ldots\, \tag{103a}\] \[\ddot{V} =-m_{1}c_{\omega}^{2}s_{\omega}+\frac{m_{3}\left(5c_{2\omega}+1 \right)c_{\omega}^{2}s_{\omega}}{4\varrho^{2}}-\frac{m_{5}\left(28c_{2\omega}+ 63c_{4\omega}+29\right)c_{\omega}^{2}s_{\omega}}{64\varrho^{4}}+\ldots\,\] (103b) \[\dot{V}^{\prime} =1+\frac{m_{1}c_{\omega}^{2}}{\varrho}+\frac{m_{3}\left(3-5c_{2 \omega}\right)c_{\omega}^{2}}{4\varrho^{3}}+\frac{3m_{5}\left(21c_{4\omega}-28c _{2\omega}+15\right)c_{\omega}^{2}}{64\varrho^{5}}+\ldots\, \tag{103c}\]
\[V^{\prime\prime}=\frac{m_{1}s_{\omega}}{\varrho^{2}}-\frac{m_{3}\left(5c_{2\omega} +1\right)s_{\omega}}{4\varrho^{4}}+\frac{m_{5}\left(28c_{2\omega}+63c_{4\omega} +29\right)s_{\omega}}{64\varrho^{6}}+\ldots. \tag{111}\]
From these expressions, we can also find the asymptotic behavior of \(\sigma\) in terms of the moments of the electrostatic potential to be
\[\sigma=1+\frac{2m_{1}}{\varrho}-\frac{m_{1}^{2}\left(c_{2\omega} -3\right)}{2\varrho^{2}}+\frac{m_{3}\left(1-3c_{2\omega}\right)}{2\varrho^{3} }+\frac{m_{3}m_{1}\left(1-12c_{2\omega}+3c_{4\omega}\right)}{8\varrho^{4}}+ \ldots. \tag{112}\]
Together, these expansions can be inserted into the definitions of the metric functions in eq. (21) to give
\[\frac{(2m_{1})^{1/3}}{\kappa_{11}^{2/3}}f_{\rm AdS}^{2} =4\varrho+4m_{1}+\frac{5m_{3}c_{2\omega}+4m_{1}^{3}s_{\omega}^{2} +m_{3}}{3m_{1}\varrho}+\frac{4\left(m_{3}-m_{1}^{3}\right)s_{\omega}^{2}}{3 \varrho^{2}}+\ldots\, \tag{113a}\] \[\frac{(2m_{1})^{1/3}}{s_{\omega}^{2}\kappa_{11}^{2/3}}f_{\rm S^{2} }^{2} =2m_{1}-\frac{\left(1+5c_{2\omega}\right)m_{3}+4s_{\omega}^{2}m_{1 }^{3}}{3\varrho^{2}}+\frac{8m_{1}\left(m_{1}^{3}-m_{3}\right)s_{\omega}^{2}}{3 \varrho^{3}}+\ldots\,\] (113b) \[\frac{(2m_{1})^{1/3}}{\kappa_{11}^{2/3}}f_{\varrho}^{2} =\frac{2m_{1}}{\varrho^{2}}-\frac{\left(1+5c_{2\omega}\right)m_{3 }-2s_{\omega}^{2}m_{1}^{3}}{3\varrho^{4}}+\frac{4m_{1}\left(m_{3}-m_{1}^{3} \right)s_{\omega}^{2}}{3\varrho^{5}}+\ldots\,\] (113c) \[\frac{(2m_{1})^{1/3}}{\kappa_{11}^{2/3}}f_{\beta}^{2} =4\varrho+m_{1}\left(c_{2\omega}-3\right)+\frac{\left(1+5c_{2 \omega}\right)m_{3}+4m_{1}^{3}s_{\omega}^{2}}{3m_{1}\varrho}+\ldots\,\] (113d) \[\frac{(2m_{1})^{1/3}}{\kappa_{11}^{2/3}}f_{\chi}^{2} =4\varrho+4m_{1}c_{2\omega}+\frac{5m_{3}c_{2\omega}+4m_{1}^{3}s_ {\omega}^{2}+m_{3}}{3m_{1}\varrho}+\ldots\,\] (113e) \[\frac{(2m_{1})^{1/3}}{\kappa_{11}^{2/3}}f_{\beta\chi}^{2} =8\varrho-8m_{1}s_{\omega}^{2}+\frac{2\left(5m_{3}c_{2\omega}+4m _{1}^{3}s_{\omega}^{2}+m_{3}\right)}{3m_{1}\varrho}+\ldots. \tag{113f}\]
We again look for an asymptotic map to a set of coordinates \(\{u,\theta\}\) in terms of which the metric is in FG form. By taking \(\varrho=\varrho(u,\theta)\) and \(\omega=\omega(u,\theta)\), and expanding in small \(u\) to solve
\[f_{\varrho}^{2}d\varrho^{2}+f_{\omega}^{2}d\omega^{2}=\frac{L^{2} }{u^{2}}du^{2}+\frac{L^{2}}{4}\alpha_{\theta}d\theta^{2} \tag{114}\]
order by order, we find
\[\rho =\frac{2m_{1}}{u^{2}}+\frac{2m_{1}^{3}c_{\theta}^{2}+m_{3}\left(5 c_{2\theta}-1\right)}{48m_{1}^{2}}u^{2}+\frac{\left(m_{3}-m_{1}^{3}\right)c_{ \theta}^{2}}{36m_{1}^{2}}u^{4}+\ldots\, \tag{115a}\] \[\omega =\theta+\frac{\pi}{2}-\frac{\left(m_{1}^{3}+5m_{3}\right)s_{2 \theta}}{96m_{1}^{3}}u^{4}+\frac{\left(m_{1}^{3}-m_{3}\right)s_{2\theta}}{216m _{1}^{3}}u^{6}+\ldots. \tag{115b}\]
The asymptotic expansions of \(f_{\chi}^{2}\), \(f_{\beta}^{2}\), and \(f_{\beta\chi}^{2}\) under the above transformation reveal an ambiguity as to which of the angular coordinates should be identified as parametrizing the external \({\sf S}^{1}\subset{\rm AdS}_{7}\) and which as parametrizing the internal \({\sf S}^{1}\subset{\sf S}^{4}\) upon mapping to FG gauge. That is, both are characterized by \(1/u^{2}\) divergences at small-\(u\), so that the resulting asymptotic metric is not in FG gauge. To resolve this issue, we introduce
\[\chi=(1+{\cal C}_{z})z+a_{\varphi}\varphi,\qquad\beta=-{\cal C}_{ z}z+b_{\varphi}\varphi, \tag{116}\]
where \(\mathcal{C}_{z}\in\mathbb{Z}\) and \(a_{\varphi}\) and \(b_{\varphi}\) are arbitrary constants. Note that this transformation parallels the one taken in [18], where \(\mathcal{C}_{z}=1/\mathcal{C}\) is fixed by the ratio of four-form flux through two 4-cycles, which in turn fixes the mixing parameter between the \(U(1)\) symmetries leading to \(U(1)_{r}\) symmetry \(\partial_{\chi}=\partial_{z}+\frac{1}{\mathcal{C}}\partial_{\varphi}\) in the field theory. Here we are following the conventions of [14] where the corresponding \(\mathcal{C}\) is negative. We then find that the metric functions for the transformed coordinates display the following asymptotic behavior,
\[\frac{f_{\varphi}^{2}}{L^{2}} =\frac{(a_{\varphi}+b_{\varphi})^{2}}{u^{2}}-\frac{1}{8}\left((2a _{\varphi}+b_{\varphi})^{2}c_{2\theta}+b_{\varphi}(4a_{\varphi}+3b_{\varphi}) \right)+\ldots\, \tag{115a}\] \[\frac{f_{z\varphi}^{2}}{L^{2}} =\frac{2(a_{\varphi}+b_{\varphi})}{u^{2}}+\frac{1}{4}(2a_{ \varphi}\mathcal{C}_{z}+b_{\varphi}(\mathcal{C}_{z}-2)-(2a_{\varphi}+b_{ \varphi})(\mathcal{C}_{z}+2)c_{2\theta})+\ldots\,\] (115b) \[\frac{f_{z}^{2}}{L^{2}} =\frac{1}{u^{2}}+\frac{1}{8}(\mathcal{C}_{z}(\mathcal{C}_{z}+4)- (\mathcal{C}_{z}+2)^{2}c_{2\theta})+\ldots\, \tag{115c}\]
where we introduced the AdS\({}_{7}\) radius \(L=(16m_{1}\kappa_{11})^{1/3}\). Setting \(a_{\varphi}=-b_{\varphi}=-1\) removes the \(1/u^{2}\) divergences in the asymptotic expansions of \(f_{\varphi}^{2}\) and \(f_{z\varphi}^{2}\). In particular, \(f_{\varphi}^{2}=L^{2}s_{\theta}^{2}/4+\ldots\). This identifies the \(\varphi\)-circle as the internal \(\mathbb{S}^{1}\subset\mathbb{S}^{4}\). Furthermore, \(f_{z}^{2}=L^{2}/u^{2}+\ldots\), as required for the external \(\mathbb{S}^{1}\subset\text{AdS}_{7}\). The final requirement to achieve an FG parametrization is that \(f_{z\varphi}^{2}\sim O(u^{2})\). Eliminating the \(u^{0}\) behavior of \(f_{z\varphi}^{2}\) fixes \(\mathcal{C}_{z}\equiv-2\). Recalling the role of \(\mathcal{C}_{z}\), we see that the defect superconformal R-symmetry is \(\partial_{\chi}=\partial_{z}-2\partial_{\varphi}\).
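As a quick consistency check of this gauge fixing, the \(u^{0}\) coefficients quoted in the expansions above can be evaluated symbolically. The following is a minimal sympy sketch (not part of the original derivation; the coefficient expressions are copied directly from the expansions above): it confirms that \(a_{\varphi}=-b_{\varphi}=-1\) removes the \(1/u^{2}\) divergences and yields \(f_{\varphi}^{2}\to L^{2}s_{\theta}^{2}/4\), while \(\mathcal{C}_{z}=-2\) is the unique choice eliminating the \(u^{0}\) term of \(f_{z\varphi}^{2}\).

```python
import sympy as sp

a, b, C, th = sp.symbols('a b C theta', real=True)
c2t = sp.cos(2*th)

# u^0 and 1/u^2 coefficients of f_phi^2/L^2 and f_{z phi}^2/L^2, read off from the expansions above
f_phi_div  = (a + b)**2                                   # coefficient of 1/u^2 in f_phi^2/L^2
f_phi_0    = -sp.Rational(1, 8)*((2*a + b)**2*c2t + b*(4*a + 3*b))
f_zphi_div = 2*(a + b)                                    # coefficient of 1/u^2 in f_{z phi}^2/L^2
f_zphi_0   = sp.Rational(1, 4)*(2*a*C + b*(C - 2) - (2*a + b)*(C + 2)*c2t)

vals = {a: -1, b: 1}                                      # a_phi = -b_phi = -1
print(f_phi_div.subs(vals), f_zphi_div.subs(vals))        # -> 0 0 : the 1/u^2 terms are removed
print(sp.simplify(f_phi_0.subs(vals) - sp.sin(th)**2/4))  # -> 0   : f_phi^2 = L^2 s_theta^2/4 + ...
# the remaining u^0 piece of f_{z phi}^2 is proportional to (C+2)(cos 2theta - 1):
print(sp.factor(sp.simplify(f_zphi_0.subs(vals))))        # vanishes for all theta only if C = -2
```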
Having identified the correct combination of angular variables, we can at once express the metric in FG gauge as
\[\begin{split} ds_{\text{FG}}^{2}&=\frac{L^{2}}{u^{ 2}}(du^{2}+\alpha_{\text{AdS}}ds_{\text{AdS}_{5}}^{2}+\alpha_{z}dz^{2})+L^{2} s_{\theta}^{2}\alpha_{z\varphi}dzd\varphi\\ &\quad+\frac{L^{2}}{4}\big{(}s_{\theta}^{2}\alpha_{\varphi}d \varphi^{2}+c_{\theta}^{2}\alpha_{\mathbb{S}^{2}}d\Omega_{2}^{2}+\alpha_{ \theta}d\theta^{2}\big{)}\,\end{split} \tag{116}\]
where the metric functions have asymptotic behavior
\[\alpha_{\text{AdS}} =1+\frac{u^{2}}{2}+\frac{1}{96}\left(10c_{\theta}^{2}+\frac{m_{3} \left(1-5c_{2\theta}\right)}{m_{1}^{3}}\right)u^{4}+\frac{\left(m_{3}-m_{1}^{3} \right)c_{\theta}^{2}}{18m_{1}^{3}}u^{6}\ldots\, \tag{117a}\] \[\alpha_{z} =1-\frac{u^{2}}{2}+\frac{1}{96}\left(10c_{\theta}^{2}+\frac{m_{3} \left(1-5c_{2\theta}\right)}{m_{1}^{3}}\right)u^{4}+\frac{\left(m_{3}-m_{1}^{3 }\right)\left(5c_{2\theta}-13\right)}{72m_{1}^{3}}u^{6}+\ldots\,\] (117b) \[\alpha_{\varphi} =1+\frac{\left(m_{3}-m_{1}^{3}\right)\left(5c_{2\theta}-7\right) }{48m_{1}^{3}}u^{4}+\frac{\left(m_{1}^{3}-m_{3}\right)\left(10c_{2\theta}-17 \right)}{108m_{1}^{3}}u^{6}+\ldots\,\] (117c) \[\alpha_{\mathbb{S}^{2}} =1+\frac{\left(m_{3}-m_{1}^{3}\right)\left(5c_{2\theta}+3\right) }{48m_{1}^{3}}u^{4}+\frac{\left(m_{1}^{3}-m_{3}\right)\left(5c_{2\theta}+4 \right)}{54m_{1}^{3}}u^{6}+\ldots\,\] (117d) \[\alpha_{z\varphi} =\frac{m_{1}^{3}-m_{3}}{4m_{1}^{3}}u^{4}-\frac{m_{1}^{3}-m_{3}}{4m _{1}^{3}}u^{6}+\ldots\,\] (117e) \[\alpha_{\theta} =1+\frac{m_{1}^{3}-m_{3}}{24m_{1}^{3}}u^{4}+\frac{\left(m_{3}-m_{ 1}^{3}\right)\left(5c_{2\theta}+9\right)}{216m_{1}^{3}}u^{6}+\ldots. \tag{117f}\]
Note that, upon being evaluated on the single kink electrostatic profile in eq. (27), the asymptotic metric above recovers the \(q_{2}=0\) instance of eq. (104); in particular, the coordinate \(\varphi\) maps over to \(\varphi_{1}\), while \(\aleph\) and \(\varphi_{2}\) correspond to, respectively, the polar and azimuthal angles on the asymptotic internal \(\mathbb{S}^{2}\subset\mathbb{S}^{4}\) in the electrostatic description.
## Appendix B On-shell action
Given a solution to the SUGRA equations of motion, one of the most basic quantities that one can compute is the on-shell action. Holographically, the on-shell action is mapped to the free energy of the theory and so, with an even-dimensional spherical boundary, it has universal divergences that are related to anomalies. In this section, we will compute the on-shell action for the \(11d\) uplift of the two-charge solutions and compare its log-divergent part to the holographic defect anomalies computed in the preceding sections.
### Two-charge solutions
In this subsection, we consider the on-shell action for the two-charge solutions. The computation of the on-shell action for the two-charge solutions was originally carried out in their realization as \(7d\)\(\mathcal{N}=4\) gauged SUGRA domain wall solutions [13]. Here, we will work with the \(11d\) uplift in section 2.2 using the regulating scheme where we subtract off the on-shell action for the \(\text{AdS}_{7}\times\mathbb{S}^{4}\) vacuum computed in appendix C.1.
The starting point for computing the on-shell action for the electrostatic solution is the bosonic part of the \(11d\) SUGRA action
\[S=\frac{1}{16\pi G_{N}^{(11)}}\int_{\mathcal{M}}d^{11}x\ \sqrt{-g_{11}} \left(\mathcal{R}-\frac{1}{48}F_{MNPQ}F^{MNPQ}\right)+\frac{1}{8\pi G_{N}^{(1 1)}}\int_{\partial\mathcal{M}}K\Upsilon_{\partial\mathcal{M}}+S_{\text{CS}}, \tag{113}\]
where \(\Upsilon_{\partial\mathcal{M}}\) is the natural volume form associated to the metric induced on the boundary \(\partial\mathcal{M}\), while \(K\) is the trace of the boundary extrinsic curvature \(K_{MN}=-\frac{1}{2}(\nabla_{M}\nu_{N}+\nabla_{N}\nu_{M})\) with \(\nu_{M}\) denoting the components of the outward-pointing normal vector to \(\partial\mathcal{M}\) and where capital Latin indices \(M\), \(N\in\{0,\dots,10\}\). Using the equations of motion for the \(11d\) metric we can write the bulk term as
\[\sqrt{-g_{11}}\left(\mathcal{R}-\frac{1}{48}F_{MNPQ}F^{MNPQ}\right)d^{11}x=- \frac{1}{3}F_{4}\wedge\star F_{4}. \tag{114}\]
Note that for this particular solution, the four-form flux obeys the equation
\[d\star F_{4}=0, \tag{115}\]
and consequently the Chern-Simons term \(S_{\text{CS}}\) vanishes. As a further consequence of the equations of motion for the four-form flux, we can freely exchange \(\star F_{4}\) for \(dC_{6}\), which, since \(C_{6}\) is better behaved, makes the following computation a bit easier. Using this fact and the bulk equations of motion, the bulk integrand can be expressed as a total derivative. Thus, the on-shell action can be written as a boundary integral
\[S_{\text{OS}}=\frac{1}{16\pi G_{N}^{(11)}}\int_{\partial\mathcal{M}}\left(2K \Upsilon_{\partial\mathcal{M}}-\frac{1}{3}F_{4}\wedge C_{6}\right)=:S_{\text{ OS,GHY}}+S_{\text{OS,bulk}}. \tag{116}\]
The particular solutions we are interested in are asymptotically locally AdS\({}_{7}\times\mathds{S}^{4}\). So, in order to regularize the boundary integral, we first map the metric into FG form as in eq. (100) using the explicit asymptotic coordinate transformation derived in eq. (101). That is, we will define a regulating hypersurface at \(u=\epsilon_{u}\) that will become \(\partial\mathcal{M}\) as we take \(\epsilon_{u}\to 0\). Note that due to the presence of an AdS\({}_{5}\) factor, an additional regularization procedure will have to be applied, which we will address later.
Before beginning the computation in earnest, we will need the asymptotic \(u\ll 1\) expansions of \(F_{4}\) and \(C_{6}\). First, we compute \(C_{6}\) from eq. (19), which yields
\[C_{6}=L^{6}\Bigg{\{} \frac{1}{2}q_{2}c_{\zeta}^{2}c_{\psi}^{2}d\phi_{2}-\frac{1}{2}q_{1 }c_{\zeta}^{2}d\phi_{1}+\Bigg{[}y(y^{2}+q_{2})-\frac{c_{\zeta}^{2}}{2y}\left(q _{2}c_{2\psi}\left(y\left(y-a_{2}-1\right)+q_{1}\right)\right.\] \[\left.+2q_{1}y\left(a_{1}-y+1\right)+q_{2}y\left(y-a_{2}-1\right) -q_{2}q_{1}\right)\Biggr{]}dz\Bigg{\}}\wedge\Upsilon_{\text{AdS}_{5}}. \tag{102}\]
We can then use the residual gauge freedom to shift \(C_{6}\mapsto C_{6}+d\Lambda_{5}=:\tilde{C}_{6}\) such that \(\tilde{C}_{6}\) is regular at \(y=y_{+}\). At \(y=y_{+}\), we can use the values for \(a_{I}\) determined from \(A_{I}(y_{+})=0\) to show
\[C_{6}(y_{+})=L^{6}\left\{\frac{1}{2}q_{2}c_{\zeta}^{2}c_{\psi}^{2}d\phi_{2}- \frac{1}{2}q_{1}c_{\zeta}^{2}d\phi_{1}+y_{+}H_{2}(y_{+})dz\right\}\wedge\Upsilon _{\text{AdS}_{5}}, \tag{103}\]
where the terms in \(\Upsilon_{\text{AdS}_{5}}\wedge dz\) depending on the angular coordinates vanish due to a common factor of \(Q(y_{+})\) appearing in their coefficients. By demanding that the \(\Upsilon_{\text{AdS}_{5}}\wedge dz\) part of \(C_{6}\) vanishes at \(y=y_{+}\), we find the appropriate gauge transformation to be
\[\Lambda_{5}= -zL^{6}y_{+}H_{2}(y_{+})\Upsilon_{\text{AdS}_{5}}. \tag{104}\]
Using the gauge transformation by \(\Lambda_{5}\), we map \(\phi_{I}\rightarrow\varphi_{I}\) and find the asymptotic expansion of \(\tilde{C}_{6}\) to be
\[\tilde{C}_{6}=L^{6}\left[\left(\frac{1}{u^{6}}+\frac{3}{2u^{4}}- \frac{1}{16u^{2}}(2q_{1}-3(5+q_{2})+10q_{2}c_{2\aleph}c_{\theta}^{2}+5(q_{2}-2 q_{1})c_{2\theta})\right)dz\right]\wedge\Upsilon_{\text{AdS}_{5}}+\ldots. \tag{105}\]
Next, we need to find \(F_{4}\), which we can easily compute from eq. (19). We then map into FG coordinates, fix \(\hat{g}=2\), and expand in small \(u\). Keeping the most relevant singular terms, we find
\[F_{4}=\frac{L^{3}}{8}\Bigg{\{} \left[3c_{\theta}^{2}s_{\theta}d\varphi_{1}\wedge d\theta+\frac{ c_{\theta}^{3}}{2}\left(5s_{\theta}^{2}\left(2q_{1}-q_{2}c_{2\aleph}-q_{2} \right)du\wedge d\varphi_{1}+16q_{1}du\wedge dz\right)u^{3}\right]\wedge \Upsilon_{\text{S}^{2}}\] \[+\frac{s_{\theta}|c_{\aleph}|}{2}du\wedge d\varphi_{1}\wedge \left(5q_{2}c_{\theta}^{2}s_{2\aleph}d\theta\wedge d\varphi_{2}+8q_{2}dz \wedge\left(\frac{2s_{\aleph}}{c_{\aleph}}d\theta-s_{2\theta}d\aleph\right) \right)u^{3}\Bigg{\}}+\ldots. \tag{106}\]
Now that we have the asymptotics of the metric, \(\tilde{C}_{6}\), and \(F_{4}\), we are in a position to compute the on-shell action for the two-charge solutions. To begin, we first examine the Gibbons-Hawking-York (GHY) term. We note that after mapping to FG coordinates as in eq. (A.4), the volume form on the regulating cutoff slice at \(u=\epsilon_{u}\) has the small-\(\epsilon_{u}\) expansion
\[\Upsilon_{\partial\mathcal{M}}=\frac{L^{10}}{16}\left(\frac{1}{\epsilon_{u}^{6}}+\frac{1}{\epsilon_{u}^{4}}+\frac{5}{16\epsilon_{u}^{2}}+\frac{5\left(5c_{2\theta}(q_{2}-2q_{1})+2q_{1}+q_{2}(10c_{\theta}^{2}c_{2\aleph}-3)\right)}{432}\right)\Upsilon_{\text{AdS}_{5}}\wedge dz\wedge\Upsilon_{\mathbb{S}^{4}}+\ldots\,\]
where we denote \(\Upsilon_{\mathbb{S}^{4}}:=|c_{\aleph}|c_{\theta}^{2}s_{\theta}d\phi_{1}\wedge d\phi_{2}\wedge d\theta\wedge d\aleph\). A quick calculation also shows the trace of the extrinsic curvature on the cutoff slice to be
\[K=-\frac{6}{L}+\frac{2\epsilon_{u}^{2}}{L}-\frac{3\epsilon_{u}^{4}}{4L}+\frac{\left(25c_{2\theta}(q_{2}-2q_{1})+10q_{1}+50q_{2}c_{\theta}^{2}c_{2\aleph}-15q_{2}+9\right)\epsilon_{u}^{6}}{72L}+\ldots,\]
where we have dropped terms at \(O(\epsilon_{u}^{8})\) that depend on the charges but do not contribute to the final result as \(\epsilon_{u}\to 0\). Thus, we find
\[S_{\text{OS,GHY}}=-\text{vol}(\text{AdS}_{5})\ \frac{\pi^{2}L^{9}}{8G_{N}^{(11)}}\left(\frac{2}{\epsilon_{u}^{6}}+\frac{4}{3\epsilon_{u}^{4}}+\frac{5}{24\epsilon_{u}^{2}}\right)+\ldots. \tag{B.12}\]
Note that despite \(K\) and \(\Upsilon_{\partial\mathcal{M}}\) containing non-trivial dependence on the charges, the end result in eq. (B.12) is independent of the charges to \(O(\epsilon_{u}^{0})\), and the \(\epsilon_{u}^{0}\) part of the GHY term explicitly vanishes.
Moving on to find \(S_{\text{OS,bulk}}\), combining eqs. (B.9) and (B.8) and pulling back onto the \(u=\epsilon_{u}\) hypersurface, we arrive at
\[S_{\text{OS,bulk}}=-\text{vol}(\text{AdS}_{5})\ \frac{\pi^{2}L^{9}}{16G_{N}^{(11)}}\left(\frac{2}{3\epsilon_{u}^{6}}+\frac{1}{\epsilon_{u}^{4}}+\frac{5}{8\epsilon_{u}^{2}}-\frac{2q_{1}(q_{2}+y_{+}(2+3y_{+}))}{15y_{+}}-\frac{32q_{2}+48q_{2}y_{+}+80y_{+}^{3}-25}{120}\right)+\ldots. \tag{B.13}\]
Thus, combining eqs. (B.12) and (B.13) and subtracting off the on-shell action for the \(\text{AdS}_{7}\times\mathbb{S}^{4}\) vacuum in eq. (C.7), which is recovered by setting \(q_{I}=a_{I}=0\) and \(y_{+}=1\), the full regulated on-shell action is
\[S_{\text{OS}}-S_{\text{OS}}^{(\text{vac})}=\frac{\text{vol}(\text{AdS}_{5})\ \pi^{2}L^{9}}{120y_{+}G_{N}^{(11)}}\left(q_{1}q_{2}+(q_{1}+q_{2})y_{+}(2+3y_{+})+5y_{+}(y_{+}^{3}-1)\right). \tag{B.14}\]
Note that choosing a different \(\Lambda_{5}\) while maintaining regularity at \(y=y_{+}\) does not change the final result. Further, using the form of the regulated AdS\({}_{5}\) volume in appendix C.2, the log-divergent part of the on-shell action for the two-charge solution is given by
\[\begin{split}S_{\text{OS}}^{(\text{ren})}\big{|}_{\text{log}}&=-\frac{N^{3}}{1920y_{+}}\left(q_{1}q_{2}+(q_{1}+q_{2})y_{+}(2+3y_{+})+5y_{+}(y_{+}^{3}-1)\right)\\ &=\frac{N^{3}\left(4q_{1}q_{2}-2(q_{1}+q_{2})y_{+}(1-y_{+})+5y_{+}(1-y_{+}^{2})\right)}{1920y_{+}},\end{split} \tag{B.15}\]
where in the second line we used \(Q(y_{+})=0\). Taking the one-charge (\(q_{2}\to 0\)) limit, we find
\[S_{\text{OS}}^{\text{(ren)}}\big{|}_{\text{log,1-charge}} =\frac{N^{3}}{1920}\left(1-y_{+}\right)(5-2q_{1}+5y_{+})\] \[=\frac{N^{3}}{7680}(1-\sqrt{1-4q_{1}})(5(3+\sqrt{1-4q_{1}})-4q_{1 }). \tag{111}\]
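The algebra of this one-charge limit can be checked symbolically. Below is a minimal sympy sketch (not part of the original text); it assumes the root \(y_{+}=(1+\sqrt{1-4q_{1}})/2\) of \(y_{+}^{2}-y_{+}+q_{1}=0\), which is what \(Q(y_{+})=0\) reduces to at \(q_{2}=0\) (as can be seen by equating the two lines of the previous equation), and it reproduces the closed-form expression above.

```python
import sympy as sp

q1, y = sp.symbols('q1 y_p', positive=True)
N = sp.symbols('N', positive=True)

# second line of the two-charge log-divergent result, with q2 -> 0
S_log = N**3*(-2*q1*y*(1 - y) + 5*y*(1 - y**2))/(1920*y)

# first form quoted in the one-charge limit
S_line1 = N**3*(1 - y)*(5 - 2*q1 + 5*y)/1920
print(sp.simplify(S_log - S_line1))                      # -> 0

# substitute the assumed root y_+ = (1 + sqrt(1 - 4 q1))/2 of y^2 - y + q1 = 0
root = (1 + sp.sqrt(1 - 4*q1))/2
S_closed = N**3*(1 - sp.sqrt(1 - 4*q1))*(5*(3 + sp.sqrt(1 - 4*q1)) - 4*q1)/7680
print(sp.simplify(S_line1.subs(y, root) - S_closed))     # -> 0
```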
Finally, let us compare eq. (110) to the on-shell action of the domain wall solution in \(7d\) gauged SUGRA. In [13], the authors, using a background subtraction regulating scheme similar to the one above, found the coefficient of the log-divergent part of the regulated on-shell action for the domain wall solution to be given by
\[S_{\text{OS}}^{\text{(ren)}}\big{|}_{\text{log}}=-\frac{\pi L^{5}}{8G_{N}^{(7)} }(1-y_{+}^{2})=-\frac{N^{3}}{768\pi}(1-y_{+}^{2}), \tag{112}\]
where in the last equality we mapped to field theory variables using \(G_{N}^{(7)}=G_{N}^{(11)}/\text{vol}(\mathbb{S}^{4})\). We can immediately see a discrepancy with the on-shell action computed in the 11d uplift owing to the different dependence on the \(q_{I}\) and \(y_{+}\). An explanation for the mismatch is not entirely obvious, but it could be rooted in the background subtraction scheme in some way being inadequate for the purpose of this computation. This potential failure mode for such a simple regulating scheme could be interrogated if we had access to a full holographic renormalization scheme for defects in \(11d\) SUGRA.
## Appendix C Regulating the on-shell action
In this appendix, we collect some of the details of the regulating scheme for the computation of the on-shell action for both the two-charge and electrostatic solutions. Below we compute the vacuum AdS\({}_{7}\times\mathbb{S}^{4}\) on-shell action, which we will use in the background subtraction scheme. This value also serves as a useful diagnostic for the known limiting case \(q_{I}=a_{I}=0\) of the two-charge solutions, which recovers the vacuum geometry. We also briefly discuss computing the renormalized volume of the AdS\({}_{5}\) part of the geometry.
### AdS\({}_{7}\times\mathbb{S}^{4}\)
In this subsection, we compute the on-shell action for the vacuum AdS\({}_{7}\times\mathbb{S}^{4}\) geometry that we use in our background subtraction scheme. The data needed to specify this solution to the bosonic theory in eq. (109) are the metric
\[ds_{11}^{2}=L^{2}\left(dx^{2}+\cosh^{2}(x)ds_{\text{AdS}_{5}}^{2}+\sinh^{2}(x )dz^{2}\right)+\frac{L^{2}}{4}d\Omega_{4}^{2}, \tag{113}\]
with \(0\leq x<\infty\), and the four-form flux and its Hodge dual
\[F_{4} =-\frac{3L^{3}}{8}\Upsilon_{\mathbb{S}^{4}}, \tag{114a}\] \[\star_{11}F_{4} =6L^{6}\cosh^{5}(x)\sinh(x)dx\wedge dz\wedge\Upsilon_{\text{AdS}_ {5}}. \tag{114b}\]
Since we are working with vacuum AdS\({}_{7}\times\mathds{S}^{4}\), the transformation to FG gauge is simply
\[x=-\ln(u/2), \tag{110}\]
where the FG radial coordinate takes values \(0<u\leq 2\). In FG gauge, the metric takes the form
\[ds_{11}^{2}=\frac{L^{2}}{u^{2}}\left(du^{2}+\left(1+\frac{u^{2}}{2}+\frac{u^{4} }{16}\right)ds_{\text{AdS}_{5}}^{2}+\left(1-\frac{u^{2}}{2}+\frac{u^{4}}{16} \right)dz^{2}\right)+\frac{L^{2}}{4}d\Omega_{4}^{2}. \tag{111}\]
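As a simple check, one can verify symbolically that the substitution \(x=-\ln(u/2)\) turns the vacuum warp factors \(\cosh^{2}(x)\) and \(\sinh^{2}(x)\) into the FG form just quoted, with \(dx^{2}=du^{2}/u^{2}\). The following is a minimal sympy sketch (not part of the original text):

```python
import sympy as sp

u = sp.symbols('u', positive=True)
x = -sp.log(u/2)          # FG transformation x = -ln(u/2)

cosh2 = sp.expand(sp.cosh(x).rewrite(sp.exp)**2)
sinh2 = sp.expand(sp.sinh(x).rewrite(sp.exp)**2)

print(sp.simplify(cosh2 - (1 + u**2/2 + u**4/16)/u**2))   # -> 0
print(sp.simplify(sinh2 - (1 - u**2/2 + u**4/16)/u**2))   # -> 0
print(sp.simplify(sp.diff(x, u)**2 - 1/u**2))             # -> 0, i.e. dx^2 = du^2/u^2
```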
The four-form flux has no functional dependence on \(x\) and is unchanged in transforming to FG gauge, while the seven-form flux becomes
\[\star_{11}F_{4}=6L^{6}\left(\frac{1}{u^{7}}+\frac{1}{u^{5}}+\frac{5}{16u^{3}}- \frac{5u}{256}-\frac{u^{3}}{256}-\frac{u^{5}}{4096}\right)du\wedge dz\wedge \Upsilon_{\text{AdS}_{5}}+\ldots. \tag{112}\]
With the asymptotics of the metric and fluxes in hand, we can easily compute the on-shell action. Note that the GHY term for the vacuum AdS\({}_{7}\times\mathds{S}^{4}\) solution is trivially identical to the expression found in eq. (109), and so we will not reproduce it here. The bulk action is then computed from the \(F_{4}\wedge\star F_{4}\) term, which after inserting eqs. (100a) and (112), introducing a radial cutoff at \(u=\epsilon_{u}\ll 1\), and integrating over the AdS\({}_{7}\times\mathds{S}^{4}\) geometry gives
\[S_{\text{OS,bulk}}^{\text{(vac)}}=-\frac{L^{9}\pi^{2}}{8G_{N}^{(11)}}\,\text{ vol}(\text{AdS}_{5})\left(\frac{1}{3\epsilon_{u}^{6}}+\frac{1}{2\epsilon_{u}^{4}}+ \frac{5}{16\epsilon_{u}^{2}}-\frac{11}{48}\right)+\ldots. \tag{113}\]
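The divergence structure of this result can be checked directly from the expansion of \(\star_{11}F_{4}\) above: integrating the radial series from \(u=\epsilon_{u}\) to \(u=2\) reproduces one half of the bracket in the previous equation. The remaining normalization, involving \(\text{vol}(\mathbb{S}^{4})\), the \(z\)-circle, and \(1/16\pi G_{N}^{(11)}\), is taken from the text and not re-derived here. A minimal sympy sketch:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
eps = sp.symbols('epsilon_u', positive=True)

# radial series of the seven-form flux quoted above
series = 1/u**7 + 1/u**5 + sp.Rational(5, 16)/u**3 - 5*u/256 - u**3/256 - u**5/4096
I = sp.integrate(series, (u, eps, 2))

# one half of the bracket appearing in the vacuum bulk action
target = sp.Rational(1, 2)*(1/(3*eps**6) + 1/(2*eps**4)
                            + sp.Rational(5, 16)/eps**2 - sp.Rational(11, 48))
# only positive powers of epsilon_u remain, vanishing as epsilon_u -> 0
print(sp.expand(I - target))
```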
Combining with the GHY term, we find
\[S_{\text{OS}}^{\text{(vac)}}=-\frac{\pi^{2}L^{9}}{8G_{N}^{(11)}}\,\text{vol} (\text{AdS}_{5})\left(\frac{1}{3\epsilon_{u}^{6}}+\frac{5}{3\epsilon_{u}^{5} }+\frac{1}{2\epsilon_{u}^{4}}+\frac{1}{\epsilon_{u}^{3}}+\frac{5}{16\epsilon_ {u}^{2}}+\frac{5}{48\epsilon_{u}}-\frac{11}{48}\right)+\ldots. \tag{114}\]
Finally, we note that since \(d\star F_{4}=0\) we can introduce \(C_{6}\) so that \(dC_{6}=\star_{11}F_{4}\). We can then perform the boundary integral of \(F_{4}\wedge C_{6}\) over the radial cutoff slice at \(u=\epsilon_{u}\), with the pullback of the six-form potential given by
\[C_{6}=3L^{6}\left(\frac{1}{3\epsilon_{u}^{6}}+\frac{1}{2\epsilon_{u}^{4}}+ \frac{5}{16\epsilon_{u}^{2}}-\frac{11}{48}+\frac{5\epsilon_{u}^{2}}{256}+ \frac{\epsilon_{u}^{4}}{512}+\frac{\epsilon_{u}^{6}}{12288}\right)dz\wedge \Upsilon_{\text{AdS}_{5}}. \tag{115}\]
Crucially, we have used the residual gauge freedom to fix the six-form potential to be regular at the origin of AdS\({}_{7}\), i.e. we pick a gauge such that \(C_{6}\big{|}_{u=2}=0\). The computation using \(C_{6}\) then gives the same result as above.
### Renormalized AdS\({}_{5}\) volume
Even accounting for the removal of divergences coming from the asymptotically AdS\({}_{7}\) part of the geometry via background subtraction, we are still left to deal with the volume of the AdS\({}_{5}\) factor in the on-shell action. In order to regularize the remaining polynomial divergences and read off the universal log-divergent part of the on-shell action, we will simply treat the intrinsic part of the AdS\({}_{5}\) geometry using standard counterterms from holographic renormalization, neglecting any divergences associated with the embedding.
This renormalization scheme is admittedly simplistic, as it only includes the set of counterterms associated with the intrinsic geometry of the AdS\({}_{5}\) submanifold. However, since the background subtraction scheme leaves behind only divergences from the volume of the AdS\({}_{5}\) and we choose the boundary geometry to be \(\mathbb{S}^{4}\hookrightarrow\mathbb{R}^{6}\), only defect Weyl anomalies constructed purely from the intrinsic geometry should contribute, and these are accounted for in the scheme we have chosen. The caveat is that the full set of \(11d\) counterterms, which is difficult to construct, may contain structures we have not accounted for whose pullback to the AdS\({}_{5}\) submanifold contributes to the log divergence in a similar way. Absent a full holographic renormalization scheme for solutions to SUGRA dual to defects, which would replace the background subtraction scheme as well, this choice of constructing counterterms only for the intrinsic geometry of the AdS\({}_{5}\) submanifold is the best tool available.
Moving on, the volume of AdS\({}_{5}\) has well known divergences. In order to systematically remove them and reveal any universal log-divergent terms, we consider AdS\({}_{5}\) in global coordinates with an \(\mathbb{S}^{4}\) boundary:
\[ds^{2}_{\text{AdS}_{5}}=dx^{2}+\sinh^{2}(x)\ d\Omega_{4}^{2}. \tag{102}\]
For simplicity, we consider the round metric on \(\mathbb{S}^{4}\). Computing the AdS\({}_{5}\) volume requires regulating the large \(x\) behavior, and so we introduce a radial cutoff \(\Lambda_{x}\equiv-\log\frac{\epsilon_{x}}{2}\) for \(\epsilon_{x}\ll 1\). Then, expanding in small \(\epsilon_{x}\)
\[\text{vol}(\text{AdS}_{5})\ =\frac{8\pi^{2}}{3}\int_{0}^{\Lambda_{x}}dx \sinh^{4}(x)=\frac{2\pi^{2}}{3\epsilon_{x}^{4}}-\frac{4\pi^{2}}{3\epsilon_{x} ^{2}}-\pi^{2}\log\frac{\epsilon_{x}}{2}+\dots. \tag{103}\]
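This expansion is straightforward to verify symbolically. The following minimal sympy sketch (not part of the original text) evaluates the regulated integral in closed form and confirms the quoted power-law divergences and log term; the subleading pieces vanish as \(\epsilon_{x}\to 0\).

```python
import sympy as sp

x = sp.symbols('x', real=True)
eps = sp.symbols('epsilon_x', positive=True)
Lam = -sp.log(eps/2)                       # radial cutoff Lambda_x = -log(eps_x/2)

F = sp.integrate(sp.sinh(x)**4, x)
vol = sp.Rational(8, 3)*sp.pi**2*(F.subs(x, Lam) - F.subs(x, 0))
vol = sp.expand(sp.simplify(vol.rewrite(sp.exp)))

leading = 2*sp.pi**2/(3*eps**4) - 4*sp.pi**2/(3*eps**2) - sp.pi**2*sp.log(eps/2)
# remainder contains only positive powers of epsilon_x (it vanishes as epsilon_x -> 0)
print(sp.simplify(sp.expand(vol - leading)))
```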
We regulate the volume using covariant counterterms16 added on the radial cutoff slice that are standard in AdS\({}_{5}\) holographic renormalization [93; 60]
Footnote 16: To be complete, we should also fix finite counterterms to ensure that we are in a supersymmetry preserving scheme, but we will forego addressing this here as it is not germane to the problem at hand.
\[S_{\text{CT},1} =-\frac{1}{4}\int d\Omega_{4}\sqrt{|g_{\epsilon_{x}}|}=-\frac{2 \pi^{2}}{3\epsilon_{x}^{4}}+\frac{2\pi^{2}}{3\epsilon_{x}^{2}}-\frac{\pi^{2}} {4}+\dots, \tag{104a}\] \[S_{\text{CT},2} =\frac{1}{48}\int d\Omega_{4}\sqrt{|g_{\epsilon_{x}}|}\mathcal{R }_{\epsilon_{x}}=\frac{2\pi^{2}}{3\epsilon_{x}^{2}}-\frac{\pi^{2}}{3}+\dots, \tag{104b}\]
where \(\sqrt{|g_{\epsilon_{x}}|}=\sinh^{4}(\Lambda_{x})\sqrt{|g_{\mathbb{S}^{4}}|}=(1-\epsilon_{x}^{2}/4)^{4}\sqrt{|g_{\mathbb{S}^{4}}|}/\epsilon_{x}^{4}\) and \(\mathcal{R}_{\epsilon_{x}}=12\,\text{csch}^{2}(\Lambda_{x})\) are the volume form and the intrinsic Ricci scalar on the cutoff slice, respectively, built from the induced AdS\({}_{5}\) metric. Adding these counterterms to the bulk action, we see that the holographically renormalized volume of the unit AdS\({}_{5}\) takes the well-known form
\[\text{vol}(\text{AdS}_{5})\ =-\pi^{2}\log\frac{\epsilon_{x}}{2}+\dots. \tag{105}\]
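For completeness, the cancellation of the power-law divergences in this combination can be verified with the same ingredients. The sketch below is a minimal sympy check (not part of the original text); the finite remainder it leaves behind is scheme-dependent and is not fixed here, cf. the footnote above.

```python
import sympy as sp

eps = sp.symbols('epsilon_x', positive=True)
sinhL = (2/eps - eps/2)/2              # sinh(Lambda_x) with Lambda_x = -log(eps_x/2)
vol_S4 = sp.Rational(8, 3)*sp.pi**2    # volume of the unit S^4

# regulated AdS_5 volume (closed form of the integral above)
vol = vol_S4*sp.Rational(1, 16)*(4/eps**4 - 8/eps**2 - 6*sp.log(eps/2)
                                 + eps**2/2 - eps**4/64)

S_ct1 = -sp.Rational(1, 4)*vol_S4*sinhL**4                   # cosmological counterterm
S_ct2 = sp.Rational(1, 48)*vol_S4*sinhL**4*(12/sinhL**2)     # intrinsic-curvature counterterm

total = sp.expand(vol + S_ct1 + S_ct2)
# all 1/eps_x power divergences cancel; only -pi^2 log(eps_x/2) plus a finite remainder survives
print(sp.simplify(total + sp.pi**2*sp.log(eps/2)))
```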
To complete the regularization of the on-shell actions for the vacuum AdS\({}_{7}\times\mathbb{S}^{4}\) and two-charge solutions and extract the universal contributions to the defect free energy, we replace \(\text{vol}(\text{AdS}_{5})\ =-\pi^{2}\log(\epsilon_{x}/2)\) wherever it appears. |
2310.10328 | Is the M81 Fast Radio Burst Host Globular Cluster Special? | We use multiband archival HST observations to measure the photometric and
structural parameters of the M81 globular cluster that hosts the Fast Radio
Burst FRB 20200120E. Our best-fitting King model has an effective radius $r_h =
3.06$ pc with a moderate King model concentration of $c = 53$, and an inferred
core radius of 0.81 pc. We revisit the exact astrometric location of the FRB
within the cluster, and find that FRB 20200120E is located 1.92 pc from the
center, but within the projected half-light radius. We estimate the relative
encounter rate of the FRB host, along with the corresponding rates of 210 other
globular clusters in M81, and compare these values with the encounter rates of
Galactic globular clusters. The FRB resides in a globular cluster with an
encounter rate that is moderately higher than the median stellar encounter rate
in our two comparison samples. While the estimated encounter rate of the FRB
host cluster (e.g., $\sim50\%$ of a cluster like 47 Tuc) is sufficient to allow
the possibility that the FRB formed dynamically, our results do not place
strong constraints on this scenario due to the limitations of the available HST
data and the possible systematic uncertainties and selection effects in the
comparison data. | Kristen C. Dage, Arash Bahramian, Clancy W. James, Arunav Kundu, Katherine L. Rhode, Jay Strader, Enrico Vesperini, Stephen E. Zepf | 2023-10-16T12:10:23Z | http://arxiv.org/abs/2310.10328v1 | # Is the M81 Fast Radio Burst Host Globular Cluster Special?
###### Abstract
We use multiband archival HST observations to measure the photometric and structural parameters of the M81 globular cluster that hosts the Fast Radio Burst FRB 20200120E. Our best-fitting King model has an effective radius \(r_{h}=3.06\) pc with a moderate King model concentration of \(c=53\), and an inferred core radius of 0.81 pc. We revisit the exact astrometric location of the FRB within the cluster, and find that FRB 20200120E is located 1.92 pc from the center, but within the projected half-light radius. We estimate the relative encounter rate of the FRB host, along with the corresponding rates of 210 other globular clusters in M81, and compare these values with the encounter rates of Galactic globular clusters. The FRB resides in a globular cluster with an encounter rate that is moderately higher than the median stellar encounter rate in our two comparison samples. While the estimated encounter rate of the FRB host cluster (e.g., \(\sim 50\%\) of a cluster like 47 Tuc) is sufficient to allow the possibility that the FRB formed dynamically, our results do not place strong constraints on this scenario due to the limitations of the available HST data and the possible systematic uncertainties and selection effects in the comparison data.
Globular star clusters (656) -- Radio transient sources (2008) -- Low-mass X-ray binary stars (939)

Kristen C. Dage\({}^{1,*}\), Arash Bahramian\({}^{2}\), Clancy W. James\({}^{2}\), Arunav Kundu\({}^{3}\), Katherine L. Rhode\({}^{4}\), Jay Strader\({}^{5}\), Enrico Vesperini\({}^{4}\), and Stephen E. Zepf\({}^{5}\)
\({}^{1}\)Wayne State University, Department of Physics & Astronomy, 666 W Hancock St, Detroit, MI 48201, USA
\({}^{2}\)International Centre for Radio Astronomy Research Curtin University, GPO Box U1987, Perth, WA 6845, Australia
\({}^{3}\) Department of Physics, Birla Institute of Technology & Science, Pilani, K K Birla Oa Campus, NH17 2, Zuarinagar, Goa 403726, India
\({}^{4}\)Indiana University Department of Astronomy, 727 East Third Street, Bloomington, IN 47405, USA
\({}^{5}\)Center for Data Intensive and Time Domain Astronomy, Department of Physics and Astronomy, Michigan State University, East Lansing MI, USA
\({}^{*}\)NASA Einstein Fellow
## 1 Introduction
Fast radio bursts (FRBs) are millisecond-duration radio transient events of unknown origin (Lorimer et al., 2007; Thornton et al., 2013). Ever since the localisation of the first repeating fast radio burst, FRB 20121102A, to a dwarf star-forming galaxy (Spitler et al., 2016; Chatterjee et al., 2017), young magnetars (strongly magnetized neutron stars that formed less than a few decades ago from Type II supernovae) have been hypothesised as the progenitors of FRBs (Metzger et al., 2017). This view is backed up by the association of FRB 20121102A with a persistent radio source (PRS; Chatterjee et al., 2017) and by that source's extreme magneto-ionic properties (Michilli et al., 2018). On the other hand, the idea that FRBs have a common origin has been challenged by further localisations of FRBs (e.g. Bannister et al., 2019), which reveal that many come from galaxies with lower star-formation rates. The observed properties of the FRBs, such as the radial offset distributions, are inconsistent with the corresponding properties of most other classes of astrophysical transients (Bhandari et al., 2022; Gordon et al., 2023). Furthermore, while at least one other FRB (FRB 20190520B) appears similar to 20121102A in terms of its repetition rate, host galaxy, association with a persistent radio source, and magneto-ionic properties (Niu et al., 2022), some FRBs have shown no evidence of repetition despite significant follow-up campaigns (James et al., 2020; Lee-Waddell et al., 2023; Lin et al., 2023). Still other repeating FRBs show significant offsets from star-forming activity in their host galaxies (Marcote et al., 2020; Tendulkar et al., 2021). Overall these findings motivate the consideration of alternative progenitor scenarios. In particular, the pre-merger orbital interactions (Wang et al., 2016), merger (Totani, 2013), and/or post-merger collapse (Falcke and Rezzolla, 2014) of old stellar remnants such as neutron stars and white dwarfs have long been proposed as FRB progenitor pathways, though these tend to favour once-off FRBs, or those that repeat for only a very short duration.
The repeating FRB 20200120E was localized to a specific host system -- a globular cluster (GC) in the nearby spiral galaxy M81 (Kirsten et al., 2022), which is approximately 3.6 Mpc away. Globular clusters are extremely old (\(\sim\)10-13 Gyr) stellar systems, and provide valuable constraints on possible progenitors to the FRB, including limiting the possibility of a magnetar origin (Kirsten et al., 2022). Another potential theory is an origin from a hyperaccreting X-ray binary (Sridhar and Metzger, 2022). Although the FRB source was not detected in the X-ray by an off-axis archival _Chandra_ observation, nor in additional follow-up X-ray observations, (Kirsten et al., 2022; Pearlman et al., 2023), hyperaccreting X-ray binaries in globular clusters are demonstrated to show orders of magnitude X-ray variability on the scale of hours (Dage et al., 2020, and references therein). Young neutron stars formed as a result of the collapse or merger of white dwarfs present another plausible scenario (Kremer et al., 2021). Such mergers would be the result of dynamical interactions in the dense environment of GCs and are expected to occur mainly in clusters at the time of core collapse or in the post-core collapse phase (Kremer et al., 2023, 20); white dwarf mergers may also explain the origin of young pulsars and single millisecond pulsars observed in Galactic globular clusters (Kremer et al., 2023; Ye et al., 2023).
M81's globular cluster system has been extensively studied with photometry and spectroscopy in the optical and NIR (Perelmuter and Racine, 1995; Perelmuter et al., 1995; Ma et al., 2007; Nantais et al., 2011; Pan et al., 2022; Chies-Santos et al., 2022). The X-ray source population associated with the globular clusters has also been well characterized (Hunt et al., 2023).
Globular clusters hosted by galaxies beyond the Local Group often appear as point sources in ground-based images; in such cases, globular clusters can only be studied via their integrated light. On the other hand, HST is able to at least partially resolve globular clusters in distant galaxies (see Whitmore et al., 1993; Kundu et al., 1999; Jordan et al., 2005; Sivakoff et al., 2007; Strader et al., 2011; Peacock et al., 2012, among many others). The half-light radii of the clusters can be measured from HST images using routines such as baolab (Larsen, 1999) and assuming that the cluster light profile follows a King model. This improvement in resolution means that globular clusters are more accurately identified in HST studies than in ground-based data.
The ability to determine cluster size also impacts the estimates of stellar encounter rates in globular clusters. The stellar encounter rate (\(\Gamma\)) in a cluster is directly linked with the population of close binaries and compact objects in these dense stellar systems (Pooley et al., 2003; Heinke et al., 2003; Bahramian et al., 2013) and depends on cluster properties as \(\Gamma\propto\int\rho^{2}/\sigma~{}dV\), where \(\rho\) is cluster stellar density, \(\sigma\) is velocity dispersion of stars in the cluster and \(\Gamma\) is estimated over the volume of the cluster \(V\)(e.g., Hills, 1976; Verbunt and Hut, 1987). The stellar distributions of the large majority of globular clusters are well-described by King models (King, 1962, 1966). In these clusters the encounter rate is dominated by the contribution from the cluster core, leading to an approximation of \(\Gamma\) as \(\propto\rho_{c}^{3/2}r_{c}^{2}\), where \(\rho_{c}\) is the central density and \(r_{c}\) is cluster core radius. Given the large distances of extragalactic globular clusters and thus their small angular sizes, accurately measuring quantities such as the core radius is typically challenging. This leads to approximations of \(\Gamma\) through a proxy such as \(\Gamma_{h}\propto M^{3/2}r_{h}^{-5/2}\), where \(M\) is cluster mass and \(r_{h}\) is the cluster half-light radius, though this proxy measurement is much less sensitive to the encounter rate than the core-based measurements (Sivakoff et al., 2007).
The globular cluster that hosts FRB20200120E was labeled with ID number 30244 by Perelmuter and Racine (1995) and Perelmuter et al. (1995) in their ground-based imaging and spectroscopic studies. Perelmuter and Racine (1995) measured the apparent magnitude and colors for the globular cluster of V=19.76, B\(-\)V=0.77, and V\(-\)R=0.47. We refer to [PR95] 30244 as FRB GC in this work. Perelmuter and Racine (1995) acquired spectroscopy of FRB GC (albeit with what they characterize as "poor signal-to-noise") to confirm its association with M81, as well as to estimate a highly uncertain metallicity of [Fe/H]=\(-1.76\pm 1.78\) for the cluster. The FRB GC is located roughly \(19.6^{\prime}\) from the center of M81, which translates to \(\approx\)20.5 kpc at the relevant distance. Pan et al. (2022) used multiwavelength archival data and other information from the literature to assemble an accurate list of the globular clusters in M81 and included this object in their catalog. Based on an initial assessment of these various measurements, the FRB GC does not seem to be unusual, despite playing host to an extremely mysterious and energetic radio signal. However, its half-light radius and encounter rate may shed
further light on the nature of the cluster and the possible physical cause of the FRB emission. In their paper presenting the discovery of FRB20200120E, Kirsten et al. (2022) combined broadband ugriz photometry from the Sloan Digital Sky Survey (SDSS) of the FRB GC with a stellar population model and a few assumptions (e.g., a model for the star formation rate of the cluster since its formation) to come up with reasonable estimates of several fundamental properties of the cluster - e.g., metallicity, velocity dispersion, mass, and effective radius. They estimated [Fe/H] of -1.83, a cluster stellar mass of \(\log(M/M_{\odot})=5.77\), a velocity dispersion of 22 km/sec, an effective radius of 3.7pc, and an age of 9.1 Gyr.
In this work, we use archival HST images of the FRB GC to measure the optical photometric and structural properties of the cluster and compare them to work from Nantais et al. 2011 (hereafter NH11). Our objective is to examine the properties of this cluster relative to those of the other clusters in M81 and the Milky Way and look for clues that might help reveal why this cluster hosts an FRB. The data and analysis methods are described in Section 2, the results are presented in Section 3, and we summarize our conclusions in Section 4.
## 2 Analysis & Results
The M81 FRB GC was observed by the Hubble Space Telescope's Wide Field Camera 3 (Program 16664, PI: Tendulkar) on 2022-04-02 (Orbit 1: 1651 seconds F606W & F814W, 1810 seconds F438W), 2022-11-15 (Orbit 2: 1639 seconds F606W & F814W, 1801 seconds F438W), and 2023-02-22 (Orbit 3: 2708 seconds, F606W only) with a 3-point dither pattern.
### HST analysis & cluster half-light radius
The first two orbits of program 16664 suffered a guide-star failure and guided on gyros only. As a result, only a subset of exposures from this visit was usable. We examined the calibrated, flat fielded CTE-corrected individual exposure (FLC) files and retained only those images in which the FRB GC was detected. For F438W, these were iem701lxq, iem701lrq from Orbit 1 and iem751ibq and iem751i4q from Orbit 2 (2622 seconds total). For F606W, the usable images were iem701lvq from Orbit 1, iem751i9q from Orbit 2, and iem752u0q, iem752u1q, iem752u2q, iem752u4q from Orbit 3 (4020 seconds total), and for F814W iem701ltq in Orbit 1, iem751i7q in Orbit 2 (1312 seconds total). The HST data can be found in MAST at 10.17909/vysd-m633.
We manually redrizzled the images using the DrizzlePac software (Hoffmann et al., 2021, stwcs Version 1.7.2, photutils Version 1.7.0), aligning the frames with tweakreg and verifying the shifts manually with IRAF (Tody, 1986). In the F438W images, there were too few bright sources present in the field for the software to identify the offset, so we computed the shift manually with IRAF tasks and updated the headers. We drizzled the images with astrodrizzle, using the 'minmed' combine type. A composite-color image of the _HST_ data is presented in Figure 1.
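For readers who wish to reproduce this step, a minimal sketch of the DrizzlePac calls described here is given below. The file list and output name are illustrative placeholders built from the F606W exposure rootnames quoted above, and most parameters are left at their defaults (the exact settings used by the authors are not given in the text); only the alignment-plus-drizzle workflow and the 'minmed' combine type follow the description above.

```python
# Sketch of the alignment + drizzling workflow described in the text.
# File names and parameter values are illustrative placeholders.
from drizzlepac import tweakreg, astrodrizzle

f606w_flcs = ['iem701lvq_flc.fits', 'iem751i9q_flc.fits',
              'iem752u0q_flc.fits', 'iem752u1q_flc.fits',
              'iem752u2q_flc.fits', 'iem752u4q_flc.fits']

# align the frames to a common WCS (for F438W the shifts were computed and applied manually)
tweakreg.TweakReg(f606w_flcs, updatehdr=True)

# combine the aligned exposures; 'minmed' is appropriate for small numbers of images
astrodrizzle.AstroDrizzle(f606w_flcs, output='frbgc_f606w',
                          combine_type='minmed', build=True)
```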
To determine the structural parameters of the cluster, we began by constructing an empirical point spread function in \(F606W\) using three bright but unsaturated stars (confirmed as stars via Gaia DR3). We then subsampled the point spread function by a factor of 10. We carried out the King model fitting (King, 1962) using this subsampled point spread function and the ishape task in the baolab package 0.94.1 (Larsen, 1999). We tried fitting radii of both 50 and 60 pixels, finding very similar results in each case. The specific results quoted below are for the 50 pixel case. We used the WFC3 pixel
Figure 1: _Left_: Digital Sky Survey (DSS) R band image of M81. Positions of globular clusters from NH11 considered in this work are shown by cyan circles. The cyan rectangle indicates the area around the FRB GC, plotted in the middle and right panels. _Middle_ and _right_: A composite color image of the vicinity of the FRB GC (middle) and a zoom-in on the cluster (right) based on HST images in F814W (red), F606W (green), and F435W (blue).
scale of \(0.0396^{\prime\prime}\) per pixel 1, and the 3.6 Mpc distance to M81, resulting in a distance scale of 0.69 parsec per pixel.
Footnote 1: [https://esahubble.org/about/general/instruments/wfc3/](https://esahubble.org/about/general/instruments/wfc3/)
When we fit for the cluster effective radius, we varied the value of the King model concentration parameter (defined as the ratio of the tidal radius to the core radius) using a finely spaced grid that ranged from \(c=10-300\) in steps of one. The best fitting model has a concentration \(c=53\) and an effective radius \(r_{h}=3.06\) pc. As a King profile is completely specified by two parameters, such a fit implies a best-fitting projected core radius of \(r_{c}=0.81\) pc (Figure 2). All the models prefer slightly elliptical fits with a semi-minor to semi-major axis ratio of about 0.92. The best-fitting model has a reduced \(\chi^{2}=2.8\), partially due to the presence of many resolved red giants and perhaps partially due to a minor background mismatch in the outer regions of the cluster, relevant given the large fitting radius of 50 pixels = 34.6 pc. Somewhat lower concentrations are possible, the model at \(c=25\) having a \(\Delta\chi^{2}\sim 1\), with a corresponding \(r_{h}=2.84\) pc and \(r_{c}=1.08\) pc.
At the other end of the distribution, even very large concentrations of \(c=300\) or larger give formally reasonable fits with \(\Delta\chi^{2}<1\), but the best-fitting structural parameters for these models have large radii. For example, for \(c=300\), we find \(r_{h}=5.30\) pc which would imply \(r_{c}=0.62\) pc and a tidal radius of 185 pc. Essentially, these fits place more light at large radii where it is poorly constrained by our observations. Solely considering these data, such models cannot be ruled out, but we also note that a similar combination of high concentration and large radius is essentially absent in the Galactic globular cluster system (Djorgovski & Meylan, 1994), while clusters with parameters close to our best-fit values are common.
Formally, we find an implied \(r_{c}=0.81^{+0.27}_{-0.19}\) pc. An \(r_{h}<2.8\) pc is ruled out even in low-concentration models, but as discussed above, models with high concentrations yield formally reasonable fits and have correspondingly larger values of \(r_{h}\), in the range 5-6 pc. If the globular clusters around M81 are similar to those in the Galaxy, these larger sizes are disfavored.
As many extragalactic studies assume a fixed concentration index of \(c=30\) (e.g., NH11), we also report parameters for this assumed value: \(r_{h}=2.91\) pc and the inferred \(r_{c}=1.01\) pc. Further comparisons in this paper to M81 globular clusters or Milky Way globular clusters also use this fixed \(c=30\) radius measurement.
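To make the relation between \((c,r_{h})\) and the implied core radius explicit, the sketch below numerically inverts the projected King (1962) profile; it is a minimal illustration (not from the original analysis) and assumes, as stated above, that the concentration is the ratio \(c=r_{t}/r_{c}\). It reproduces the implied core radii quoted above: \(r_{c}\approx 0.81\) pc for \((c,r_{h})=(53,3.06\,\mathrm{pc})\) and \(r_{c}\approx 1.01\) pc for \((30,2.91\,\mathrm{pc})\).

```python
import math

def king_cumulative(X, c):
    """Projected luminosity (up to a constant) inside X = R/r_c for a King (1962) profile truncated at X = c."""
    a = 1.0/math.sqrt(1.0 + c*c)
    return 0.5*math.log(1.0 + X*X) - 2.0*a*(math.sqrt(1.0 + X*X) - 1.0) + 0.5*a*a*X*X

def half_light_over_core(c):
    """Solve for the projected half-light radius in units of r_c by bisection."""
    half = 0.5*king_cumulative(c, c)
    lo, hi = 1e-6, float(c)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if king_cumulative(mid, c) < half:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

for c, r_h in [(53, 3.06), (30, 2.91)]:
    ratio = half_light_over_core(c)
    print(f"c = {c}: r_h/r_c = {ratio:.2f}, implied r_c = {r_h/ratio:.2f} pc")
```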
#### 2.1.1 FRB offset from cluster center
We used the new HST data to revisit the inferred offset of the FRB from the center of the host cluster (Kirsten et al., 2022). We used 12 stars present in the combined F606W image with measured positions and proper motions from Gaia DR3, advanced to the mean epoch of the HST data, to correct the absolute astrometry of the image. Because this is a relatively small number of stars, the uncertainty in this transformation is \(\sim 6\) mas per coordinate. In this frame, the best-fit center determined by ishape is 09:57:54.71341, +68:49:00.7818. While this is likely more accurate than the previously published ground-based astrometric positions of the cluster, it is still inferior to the precision of the Gaia DR3 position of the cluster itself, which is listed as 1.6-1.7 mas per coordinate. Our new HST position is offset from the Gaia DR3 position by only 5.8 mas, but given the available information, the Gaia position is still the preferred one to use. The Gaia position implies a projected separation of the cluster center from the FRB of \(110\pm 2\) mas (\(1.92\pm 0.03\) pc). For our best-fit model this is within the half-light radius (\(\sim 0.63r_{h}\)).
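The conversion from angular to physical scales used in this section is simple to reproduce; a short numeric sketch (values taken from the text, using the small-angle approximation) is given below.

```python
import math

D_pc = 3.6e6                               # adopted distance to M81 in parsec
pc_per_arcsec = D_pc*math.pi/(180*3600)    # ~17.5 pc per arcsecond

print(0.0396*pc_per_arcsec)                # WFC3 pixel scale -> ~0.69 pc per pixel
offset_pc = 0.110*pc_per_arcsec            # 110 mas projected separation
print(offset_pc)                           # ~1.92 pc
print(offset_pc/3.06)                      # ~0.63 of the best-fit half-light radius
```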
### Optical Photometric Measurements from the HST Images
We also measured the integrated magnitude of the candidate in the WFC3 F438W, F606W, and F814W images obtained by the HST 16664 program following the photometric guidelines and calibrations recommended by the WFC3 data handbook (Sahu, 2021). Formally the uncertainty in the photometry is as low as 0.001 mag in some filters. However, the FRB GC is resolved in the WFC3 images, with individual luminous stars associated with the cluster clearly distinguishable at some wavelengths. We measure the photometry at various radii out to 50 pixels in order to optimize the S/N vs the flux, and estimate that the systematic uncertainty in the photometry is at least 0.02 mag. Using Vegamag zeropoints in order to be
Figure 2: Original (left) and residual (right) images with the best-fitting King model subtracted. The large number of resolved giants, including near the center of the cluster, are evident.
consistent with the ground based BVR observations we measure F438W=20.75, F606W=19.59, F814W=18.73 \(\pm\) 0.02 mag, values generally consistent with those from previous works. The photometry of the FRB GC places the cluster securely within the expected magnitude and color range for a globular cluster (e.g., Rhode & Zepf, 2001). We do not expect that a stellar population analysis using this photometry would produce a meaningfully different inferred stellar mass or metallicity than previous work (e.g., Kirsten et al., 2022), though it could be illuminating to construct a color-magnitude diagram from the HST data in the future, an effort which is outside the scope of the present paper.
## 3 Comparisons between M81 and Milky Way Globular Clusters
In this section we compare a few dynamical and structural parameters measured for the M81 FRB host GC with those measured for Galactic clusters and globular cluster candidates in M81 (with measurements from NH11; 85 classified as "confirmed" and 125 classified as "good" candidates by NH11). We note that Pan et al. (2022) suggest that the HST-selected candidates of NH11 have an estimated contamination rate of 8% for the bright clusters.
### Structural Properties
We compared the FRB GC optical absolute V magnitude (\(-\)8.19, this work), and half-light radius (2.91 parsec for a King concentration of 30, this work) to a sample of 160 Galactic globular clusters from Baumgardt et al. (2020), and 210 M81 globular clusters from NH11. As shown in Figure 3, the FRB GC does not appear to be structurally exceptional.
### Relative Stellar Encounter Rate
To estimate encounter rates for extragalactic globular clusters in M81, we use the proxy from equation (5) of Sivakoff et al. (2007):
\[\Gamma_{h}\equiv\left(\frac{M}{2\pi M_{\odot}}\right)^{\frac{3}{2}}\left( \frac{r_{h}}{1pc}\right)^{\frac{-5}{2}}, \tag{1}\]
where M is the optical mass of the cluster and \(r_{h}\) is the observed half-light radius.
Based on our updated measurements, and using the distance of 3.6 Mpc, the absolute V magnitude of the FRB GC is -8.19. We convert optical magnitude to cluster mass by adopting the median V-band mass-to-light ratio of Galactic globular clusters (1.83; Baumgardt et al., 2020). Assuming an absolute V magnitude of 4.81 for the Sun, we find the cluster mass of the FRB GC to be 2.9 \(\times 10^{5}\)\(M_{\odot}\). We apply the same conversion to the V magnitudes of the NH11 clusters.
We perform the same conversion to optical mass and encounter rate calculation using the Baumgardt et al. (2020) V magnitudes and half-light radii of Galactic globular clusters. We caution that this exercise is meant to demonstrate the encounter rates of Galactic globular clusters if they were observed in the M81 system. For ease of comparison, we normalize all estimated encounter rates to that of 47 Tuc, assuming \(\Gamma_{\mbox{47Tuc}}=1000\). As demonstrated in Figure 4, the FRB GC has an encounter rate that is \(\sim 50\%\) of that of 47 Tuc. This indicates that in principle, a dynamical formation may be a plausible formation channel for the progenitor of FRB20200120E. However, we note that the comparison data that we are using - i.e. the measurements of Galactic globular clusters from a range of studies and data from extragalactic globular clusters that are derived from HST observations - have very different observational limits, selection biases, uncertainties, and contamination rates. Coupled with the differences in how cluster radii are estimated in these different data sets, comparisons of the absolute interaction rates of M81 and Milky Way GCs are subject to systematic uncertainties and should be considered indicative at best.
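The numbers quoted in this subsection can be reproduced with a few lines. In the sketch below the FRB GC values are taken from the text, while the 47 Tuc mass and half-light radius are illustrative values assumed here purely for the normalization (the exact ratio depends on the adopted catalog values, but it comes out near the \(\sim 50\%\) quoted above).

```python
import math

def gamma_h(mass_msun, r_h_pc):
    """Encounter-rate proxy of eq. (1): (M / 2 pi M_sun)^(3/2) * (r_h / 1 pc)^(-5/2)."""
    return (mass_msun/(2.0*math.pi))**1.5 * r_h_pc**-2.5

# FRB GC: absolute V magnitude -8.19, M/L_V = 1.83, M_V(Sun) = 4.81
L_V = 10**(0.4*(4.81 - (-8.19)))           # ~1.6e5 L_sun
M_frb = 1.83*L_V                           # ~2.9e5 M_sun, as quoted in the text
print(f"M_FRB ~ {M_frb:.1e} M_sun")

# 47 Tuc values below are assumed here for illustration only
M_47tuc, rh_47tuc = 8.0e5, 4.0
ratio = gamma_h(M_frb, 2.91)/gamma_h(M_47tuc, rh_47tuc)
print(f"Gamma_h(FRB GC)/Gamma_h(47 Tuc) ~ {ratio:.2f}")   # ~0.5
```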
Considering only the M81 GC candidates, we note that the effective radius of the FRB host is indistinguishable from the median of the sample, but the luminosity (mass) is \(\approx\)0.7 mag brighter than the peak of the globular cluster luminosity function (Fig 3). It is this latter difference that manifests itself in \(\Gamma\) and suggests that the FRB resides in a GC that has a moderately higher interaction rate than the median (Fig 4). This is a consequence of an intriguing characteristic of GCs: the lack of a mass-radius relationship. As a result, the stellar density and dynamical interaction rate in globular clusters are strongly correlated with cluster mass. A corollary to this feature is that the vast majority of dynamical interactions in globular cluster systems occur in the most massive GCs. Therefore, the offset of the FRB GC from the median interaction rate hints at the possibility of the FRB being dynamically formed, but it is not conclusive.
We note that Kremer et al. 2021, 2023a predict that white dwarf mergers should occur mainly in clusters undergoing core collapse or in the post-core collapse phase. While our analysis does not allow us to determine whether the M81 FRB GC has reached core collapse, it is worth noting that the vast majority of post-core collapse clusters in the Milky Way are located at small galactocentric distances (see e.g., Chernoff & Djorgovski, 1989; Djorgovski & Meylan, 1994) and they are all closer to
the Galactic center than the distance of the M81 FRB GC from the center of M81. Note that for the M81 FRB GC, 20.5 kpc is the projected galactocentric distance so the actual 3D distance is likely to be larger.
We also note that the FRB GC is located at a much larger galactocentric distance than the rest of the M81 sample, and studies of the globular cluster systems of the Milky Way and other galaxies suggest a radial trend in GC sizes. A high-resolution survey of other clusters at the galactocentric distance of the FRB GC will help place the relative \(\Gamma\) of this GC, and the importance of dynamical effects, in better context.
### Comparison to X-ray Binary Hosting Globular Clusters in M81
While the FRB GC itself has been observed to show no evidence of X-ray emission (Pearlman et al., 2023), it is nevertheless worth comparing the cluster properties to those of other GCs in M81 which show X-ray emission, as bright X-ray activity from X-ray binaries can be transient. Several globular clusters in M81 are known to host X-ray binaries (Hunt et al., 2023). We cross-matched the NH11 globular clusters with the Chandra Source Catalog Version 2.1 (Evans et al., 2020), and found that ten of the NH11 globular clusters have significant (at \(\geq\) 3\(\sigma\)) X-ray counterparts detected by _Chandra_, with X-ray luminosities spanning from 2.0 \(\times\) 10\({}^{37}\) erg/s to 5.2 \(\times\)10\({}^{38}\) erg/s. We note that the majority (9) of the X-ray hosting clusters are at projected distances within roughly 3 kpc of the galaxy center, and one is 11.1 kpc away.
The mean V magnitude for the entire NH11 sample is \(-\)7.0 mag, with a standard deviation of 1.4 mag. The mean half-light radius is 3.5 pc with a standard deviation of 2.5 pc. For the sample of NH11 globular cluster candidates with X-ray counterparts, the mean V magnitude is \(-\)9.0 mag, with a 1.1 mag standard deviation. The mean half-light radius of the X-ray-detected NH11 globular clusters is 1.7 pc, with a standard deviation
Figure 4: Relative encounter rates for globular clusters as they would be observed in the M81 system, normalized assuming 47Tuc has an encounter rate of 1000.
Figure 3: Scatter plot of half-light radii versus absolute magnitude for globular clusters in the Milky Way and M81, in the context of star clusters and dwarf galaxies (e.g., see Tolstoy et al., 2009). Core-collapsed globular clusters in the Milky Way are denoted by hollow circles, and globular clusters in M81 with bright X-ray sources are denoted with black margins. The FRB GC in M81 does not appear to be exceptional in its properties compared to these other cluster populations.
of 0.6 pc. While there is not a clear trend in absolute magnitude, beyond the FRB GC being on the faint end of the M81 X-ray binary hosting sample, it is clear that the X-ray detected globular clusters are on average more compact than the FRB GC.
## 4 Summary and Discussion
The M81 globular cluster that hosts FRB 20200120E offers a rare opportunity to study the environment that produced such an extreme emission source. Our analysis of the cluster structural parameters in HST F606W constrains the effective radius to \(r_{h}=3.06\) pc with a moderate King model concentration of \(c=53\). This implies a core radius of 0.81 pc. Our new photometric measurements of the cluster are F438W=20.75, F606W=19.59, and F814W=18.73 \(\pm\) 0.02 mag.
We compared the optical properties of the host cluster to other clusters in M81, and to Galactic globular clusters. We find that the FRB GC is not observed to be unique or extreme in comparison to other globular clusters in M81, or to Galactic globular clusters. The FRB resides in a globular cluster host that has a moderately higher than median stellar encounter rate. This implies that dynamical interactions are a plausible formation path for the FRB progenitor, but given the uncertainties in the available measurements and comparison data we cannot confirm this formation channel.
_Facilities: HST_, _Chandra_
astropy (Astropy Collaboration et al., 2013), baolab (Larsen, 1999, 2014), DrizzlePAC (Hoffmann et al., 2021), IRAF (Tody, 1986, 1993), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), Pandas (McKinney, 2010)
We thank the referee for helpful comments which improved the manuscript. The authors thank Soren Larsen and Andy Fruchter for helpful discussions, and the referee for useful suggestions that helped improve this paper. KCD acknowledges support for this work was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51528 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. JS acknowledges support from NASA grant 80NSSC21K0628 and the Packard Foundation. We acknowledge extensive use of NASA's Astrophysics Data System Bibliographic Services, Arxiv, and SIMBAD (Wenger et al., 2000).
|
2308.12775 | A Distributed Linear Quadratic Discrete-Time Game Approach to Formation
Control with Collision Avoidance | Formation control problems can be expressed as linear quadratic discrete-time
games (LQDTG) for which Nash equilibrium solutions are sought. However, solving
such problems requires solving coupled Riccati equations, which cannot be done
in a distributed manner. A recent study showed that a distributed
implementation is possible for a consensus problem when fictitious agents are
associated with edges in the network graph rather than nodes. This paper
proposes an extension of this approach to formation control with collision
avoidance, where collision is precluded by including appropriate penalty terms
on the edges. To address the problem, a state-dependent Riccati equation needs
to be solved since the collision avoidance term in the cost function leads to a
state-dependent weight matrix. This solution provides relative control inputs
associated with the edges of the network graph. These relative inputs then need
to be mapped to the physical control inputs applied at the nodes; this can be
done in a distributed manner by iterating over a gradient descent search
between neighbors in each sampling interval. Unlike inter-sample iteration
frequently used in distributed MPC, only a matrix-vector multiplication is
needed for each iteration step here, instead of an optimization problem to be
solved. This approach can be implemented in a receding horizon manner, this is
demonstrated through a numerical example. | Prima Aditya, Herbert Werner | 2023-08-24T13:27:07Z | http://arxiv.org/abs/2308.12775v2 | # A Distributed Linear Quadratic Discrete-Time Game Approach to Formation Control with Collision Avoidance
###### Abstract
Formation control problems can be expressed as linear quadratic discrete-time games (LQDTG) for which Nash equilibrium solutions are sought. However, solving such problems requires solving coupled Riccati equations, which cannot be done in a distributed manner. A recent study showed that a distributed implementation is possible for a consensus problem when fictitious agents are associated with edges in the network graph rather than nodes. This paper proposes an extension of this approach to formation control with collision avoidance, where collision is precluded by including appropriate penalty terms on the edges. To address the problem, a state-dependent Riccati equation needs to be solved since the collision avoidance term in the cost function leads to a state-dependent weight matrix. This solution provides relative control inputs associated with the edges of the network graph. These relative inputs then need to be mapped to the physical control inputs applied at the nodes; this can be done in a distributed manner by iterating over a gradient descent search between neighbors in each sampling interval. Unlike inter-sample iteration frequently used in distributed MPC, only a matrix-vector multiplication is needed for each iteration step here, instead of an optimization problem to be solved. This approach can be implemented in a receding horizon manner, this is demonstrated through a numerical example.
## I Introduction
Distributed control of multi-agent (in the sense of multi-vehicle) systems has been extensively studied over the last two decades with potential applications in many areas. Formation control is one such problem that has received significant attention. In formation control, all agents in a multi-agent system must move from arbitrary initial states to attain a pre-determined geometric shape [1]. To attain and maintain the formation, the agents in the team exchange information about their positions and velocities.
When formation control schemes are implemented in a distributed manner, then in situations that involve e.g. collision avoidance, agents may have conflicting interests, and achieving their individual objectives may take precedence over cooperation. Such situations reflect non-cooperative game behavior, as agents strive to meet their goals without collaboration. The solution to this type of game is to find a Nash equilibrium, where individual agents cannot improve their payoff by changing their strategy unilaterally. Linear quadratic differential games (LQDG) have been proposed as a means of addressing this problem, where the cost of each agent is quadratic, and agent dynamics are assumed to be linear.
A formation control problem modeled as LQDG has been discussed in [2]. There, a coupled Riccati differential equation is solved to find a Nash equilibrium. A discrete-time version of LQDG, referred to as linear quadratic discrete-time game (LQDTG), is more appropriate for receding horizon implementations, and has been proposed in [3]. However, solving coupled Riccati differential or difference equations is likely to be intractable for large networks; moreover, the solution cannot be implemented in a distributed manner.
Based on an idea proposed in [4], it was shown in [5] that one can avoid solving coupled Riccati difference equations by relocating the coupling terms that initially appear in the cost function to the system dynamics. Consequently, the modified problem can be reformulated as a fictitious multi-agent system evolving on the edges of the network graph instead of the nodes, allowing for a distributed solution to the decoupled Riccati difference equations. The resulting relative control inputs associated with each edge can then be mapped back to the physical control inputs in a distributed manner by employing a distributed steepest descent iteration between agents over two sampling instants. Such intersample iteration is frequently used in distributed MPC [6]. However, unlike in distributed MPC, the approach proposed in [5] does not require solving an optimization problem at each iteration step but only involves performing a matrix-vector multiplication.
Whereas the distributed scheme proposed in [5] considers an unconstrained consensus control problem, our contribution in this article is to extend this approach to a formation control problem that includes collision avoidance among agents. We begin by formulating the problem on the graph nodes, considering the desired formation displacements together with relative constraints for collision avoidance which are represented as soft constraints in the cost function. These state-dependent collision avoidance terms in the cost lead to coupled state-dependent Riccati difference equations (SDRDE). To decouple these, we use the same idea as in [5] by relocating the coupling term from the cost function to the system dynamics on the edges of the graph. This results in a decoupled cost that still incorporates the collision avoidance term. The reformulated problem involves solving a set of decoupled SDRDEs. This can be achieved using a receding horizon technique proposed in [7], and can be implemented in a distributed manner.
The paper is organized as follows: Section II provides a review of graph theory and the formation control problem with collision avoidance. Our proposed distributed solution is outlined in Section III. Section IV showcases simulation results, and finally, Section V concludes this article.
## II Preliminaries
### _Graph Theory_
A graph \(\mathcal{G}:=(\mathcal{V},\mathcal{E})\) consists of a set of nodes \(\mathcal{V}=\{\nu^{1},...,\nu^{N}\}\), and a set of edges \(\mathcal{E}=\{(\nu^{i},\nu^{j})\in\mathcal{V}\times\mathcal{V},\nu^{j}\neq\nu^{i}\}\) which contains ordered pairs of distinct nodes. \(N\) is the number of nodes, and \(M\) is the number of edges. \(\mathcal{G}\) is called undirected if \((\nu^{i},\nu^{j})\in\mathcal{E}\iff(\nu^{j},\nu^{i})\in\mathcal{E}\). An edge, denoted as \(e^{m}:=(\nu^{i},\nu^{j})\), indicates that agent \(i\) receives information from agent \(j\), where \(m\) is the index of the edge \((\nu^{i},\nu^{j})\). Let us enumerate the edge set as \(\mathcal{E}=\{e^{1},...,e^{M}\}\), where \(e^{m}\in\mathcal{E}\) represents the \(m\)-th edge. For \(m\in\{1,...,M\}\), let \(\alpha^{m}\in\mathbb{R}\) be a positive scalar denoting the edge weight corresponding to the \(m\)-th edge.
The set of neighbors of agent \(i\) is denoted by \(\mathcal{N}^{i}\). The (oriented) incidence matrix \(D\in\mathbb{R}^{N\times M}\) of the graph \(\mathcal{G}\) is defined component-wise by
\[D_{im}=\begin{cases}+1,&\text{if node $i$ is the source node of edge $e^{m}$},\\ -1,&\text{if node $i$ is the sink node of edge $e^{m}$},\\ 0,&\text{otherwise},\end{cases}\]
where for undirected graphs the orientation in the incidence matrix can be chosen arbitrarily.
The weighted Laplacian of a graph \(\mathcal{G}\) can be defined as
\[L=DWD^{T},\]
where \(W=\text{diag}(\alpha^{1},...,\alpha^{M})\in\mathbb{R}^{M\times M}\) is a diagonal matrix of edge weights. The Laplacian matrix is symmetric and positive semi-definite. In the context of game theory, we define a local Laplacian for agent / player \(i\) as
\[L^{i}=DW^{i}D^{T},\]
and let \(W^{i}\in\mathbb{R}^{M\times M}\) be a diagonal matrix such that the \(m\)-th diagonal entry of \(W^{i}\) is equal to \(\alpha^{m}\) if \(e^{m}\in\mathcal{E}^{i}\) and zero otherwise, where \(\mathcal{E}^{i}=\{e^{i}_{1},...,e^{i}_{\deg_{i}}\}\subset\mathcal{E}\) is the set of edges incident at node \(\nu^{i}\in\mathcal{V}\). In this paper we assume \(\alpha^{m}=1\) for all \(m\in\{1,...,M\}\).
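For concreteness, the following sketch (Python/NumPy, using the \(N=4\), \(M=5\) graph that later appears in the numerical example of Section IV) assembles the incidence matrix \(D\), the weighted Laplacian \(L=DWD^{T}\), and a local Laplacian \(L^{i}\); the edge orientation is chosen arbitrarily, as allowed for undirected graphs.
```python
import numpy as np

# Edges of the example graph (0-indexed nodes), orientation chosen arbitrarily.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
N, M = 4, len(edges)

# Oriented incidence matrix D: +1 at the source node, -1 at the sink node of each edge.
D = np.zeros((N, M))
for m, (src, snk) in enumerate(edges):
    D[src, m], D[snk, m] = 1.0, -1.0

W = np.eye(M)            # unit edge weights alpha^m = 1, as assumed in the paper
L = D @ W @ D.T          # weighted graph Laplacian (symmetric, positive semi-definite)

# Local Laplacian of agent i: only the weights of edges incident to node i are kept.
agent = 1
W_i = np.diag([1.0 if agent in e else 0.0 for e in edges])
L_i = D @ W_i @ D.T

print(L)
print(L_i)
```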
**Assumption 1**: _Graph \(\mathcal{G}\) is connected, i.e. there exists an undirected path between every two vertices \(\nu^{i},\ \nu^{j}\in\mathcal{V}\), \(j\neq i\)._
From now on, we assume that the graph used in this paper is undirected.
### \(\sigma\)_-Norms_
The \(\sigma\)-norm of a vector is a map \(\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}\) _(not a norm)_ defined as [8]
\[||y||_{\sigma}=\frac{1}{\epsilon}\left[\sqrt{1+\epsilon||y||^{2}}-1\right], \tag{1}\]
where \(||\cdot||\) is an Euclidian norm in \(\mathbb{R}^{n}\), \(\epsilon>0\) is a small scalar value, and the gradient \(\sigma_{\epsilon}(y)=\nabla||y||_{\sigma}\) is
\[\sigma_{\epsilon}(y)=\frac{y}{\sqrt{1+\epsilon||y||^{2}}}=\frac{y}{1+\epsilon ||y||_{\sigma}}.\]
The map \(||y||_{\sigma}\) is differentiable everywhere. This property of the \(\sigma\)-norm will be used when dealing with the norm in the state-dependent weight matrix.
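A small sketch of the \(\sigma\)-norm and its gradient (Python/NumPy; the value \(\epsilon=0.1\) is an arbitrary choice for illustration):
```python
import numpy as np

def sigma_norm(y, eps=0.1):
    """sigma-norm ||y||_sigma = (sqrt(1 + eps*||y||^2) - 1)/eps, per (1); differentiable everywhere."""
    return (np.sqrt(1.0 + eps * np.dot(y, y)) - 1.0) / eps

def sigma_grad(y, eps=0.1):
    """Gradient sigma_eps(y) = y / sqrt(1 + eps*||y||^2) = y / (1 + eps*||y||_sigma)."""
    return y / np.sqrt(1.0 + eps * np.dot(y, y))

y = np.array([3.0, 4.0])
print(sigma_norm(y), sigma_grad(y))
# Sanity check: for small eps, ||y||_sigma approaches ||y||^2 / 2.
print(sigma_norm(y, eps=1e-8), np.dot(y, y) / 2)
```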
### _Agent Dynamics_
In this article, we consider a homogeneous multi-agent system where each agent is modeled as a zero-order hold discretisation of a double integrator. Each agent is assumed to be moving in an \(n\)-dimensional plane. In the context of game theory, each agent acts as a player in the game. The single-agent discrete-time dynamics is
\[x_{k+1}^{i}=fx_{k}^{i}+gu_{k}^{i},\qquad\qquad\text{ for }i=1,...,N, \tag{2}\]
where the state vector for agent \(i\) is \(x_{k}^{i}=\left[p_{k}^{i},v_{k}^{i}\right]^{T}\in\mathbb{R}^{2n}\), and contains position \(p_{k}^{i}\) and velocity \(v_{k}^{i}\) at time \(k\), with
\[f=\begin{bmatrix}1&\delta\\ 0&1\end{bmatrix}\otimes I_{n}\in\mathbb{R}^{2n\times 2n},\ \ g=\begin{bmatrix}\frac{\delta^{2}}{2}\\ \delta\end{bmatrix}\otimes I_{n}\in\mathbb{R}^{2n\times n}.\]
Here, \(u_{k}^{i}\) is the (acceleration) control input of agent \(i\), and \(\delta\) is the sampling time. To define the state vector for the multi-agent system, we select \(x_{k}=\left[p_{k}^{1},\cdots,p_{k}^{N},1,v_{k}^{1},\cdots,v_{k}^{N}\right]^{T} \in\mathbb{R}^{2Nn+1}\). Having an entry with value \(1\) between the positions and velocities allows the inclusion of a formation offset term, as explained below. The multi-agent dynamics can then be represented as
\[x_{k+1}=Fx_{k}+\sum_{i=1}^{N}G^{i}u_{k}^{i}, \tag{3}\]
where
\[F =\begin{bmatrix}I_{Nn}&0_{Nn\times 1}&\delta I_{Nn}\\ 0_{1\times Nn}&1&0_{1\times Nn}\\ 0_{n\times Nn}&0_{Nn\times 1}&I_{Nn}\end{bmatrix}\in\mathbb{R}^{(2Nn+1)\times(2Nn+1)},\] \[G^{i} =\begin{bmatrix}\frac{\delta^{2}}{2}\hat{g}^{i}\\ 0_{1\times n}\\ \delta\hat{g}^{i}\end{bmatrix}\in\mathbb{R}^{(2Nn+1)\times n},\]
with \(\hat{g}^{i}=\hat{c}^{i}\otimes I_{n}\in\mathbb{R}^{Nn\times n}\), where \(\hat{c}^{i}\) is the \(i\)-th column of the identity matrix of size \(N\). \(I_{Nn}\in\mathbb{R}^{Nn\times Nn}\) is an identity matrix. The scalar value of 1 in the matrix \(F\) corresponds to a formation offset term, which will be explained in the next subsection.
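As a hedged illustration of the dynamics above (Python/NumPy; the values of \(\delta\), \(N\) and \(n\) are arbitrary), the single-agent matrices of (2) and the stacked matrices of (3) can be assembled as:
```python
import numpy as np

delta, N, n = 0.1, 4, 2   # sampling time, number of agents, spatial dimension (illustrative values)

# Single-agent double integrator (2): f in R^{2n x 2n}, g in R^{2n x n}.
f = np.kron(np.array([[1.0, delta], [0.0, 1.0]]), np.eye(n))
g = np.kron(np.array([[delta**2 / 2.0], [delta]]), np.eye(n))

# Stacked multi-agent form (3): x_k = [p^1..p^N, 1, v^1..v^N] in R^{2Nn+1}.
F = np.eye(2 * N * n + 1)
F[:N * n, N * n + 1:] = delta * np.eye(N * n)    # positions integrate the velocities

def G(i):
    """Input matrix G^i of agent i (0-indexed): acts on its own position and velocity blocks."""
    Gi = np.zeros((2 * N * n + 1, n))
    Gi[i * n:(i + 1) * n, :] = (delta**2 / 2.0) * np.eye(n)
    Gi[N * n + 1 + i * n:N * n + 1 + (i + 1) * n, :] = delta * np.eye(n)
    return Gi

print(f.shape, g.shape, F.shape, G(0).shape)     # (2n,2n) (2n,n) (2Nn+1,2Nn+1) (2Nn+1,n)
```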
### _Formation with Collision Avoidance on the Nodes System_
The problem considered in this article is formation control, i.e. all agents in a multi-agent system are supposed to move from arbitrary initial states to attain a formation (specified in terms of desired displacements \(d^{ij}\) between agents \(i\) and \(j\)), while minimizing a performance index over a finite time horizon \([0,T]\)
\[J^{i}(U^{i})=\frac{1}{2}\Big{(}X_{k}^{T}\mathcal{Q}^{i}(x_{k})X_{k}+{U_{k}^{i}} ^{T}\mathcal{R}^{ii}U_{k}^{i}\Big{)}, \tag{4}\]
with the stacked state vector for the whole horizon \(X_{k}=\left[x_{k+1},x_{k+2},...,x_{k+T}\right]^{T}\in\mathbb{R}^{(2Nn+1)T}\) and the stacked control inputs vector \(U_{k}^{i}=\left[u_{k}^{i},u_{k+1}^{i},...,u_{k+T-1}^{i}\right]^{T}\in\mathbb{R} ^{NnT}\). The state weighting matrix for each agent \(i\) is given by \(\mathcal{Q}^{i}(x_{k})=\text{blkdiag}(Q^{i}(x_{k}),...,Q^{i}(x_{k}),Q_{T}^{i}(x_{T} ))\in\mathbb{R}^{(2Nn+1)T\times(2Nn+1)T}\), where \(Q^{i}(x_{k})=(Q_{\alpha}^{i}+Q_{\beta}^{i}(x_{k}))\in\mathbb{R}^{(2Nn+1)\times(2 Nn+1)}\) is a positive semi definite matrix, with
\(Q^{i}_{\alpha}\) and \(Q^{i}_{\beta}(x_{k})\) denoting the weighting matrices for the formation and collision avoidance terms, respectively. The terminal weighting matrix \(Q^{i}_{T}(x_{T})\) has the same pattern as \(Q^{i}(x_{k})\) and can be defined by choosing arbitrary scalar weights \(\beta^{i}>0\).
The control weighting matrix is \(\mathcal{R}^{ii}=\text{blkdiag}(R^{ii})\in\mathbb{R}^{NnT\times NnT}\), where \(R^{ii}\in\mathbb{R}^{Nn\times Nn}\) is a positive definite matrix. Here, we assume there is no cross coupling in the input, i.e., \(\mathcal{R}^{ij}=0\), where \(j\neq i\). Next, the rest of this subsection is dedicated to discussing the formulation of the first term of the cost in (4). The formation error of each agent \(i\) with collision avoidance can be expressed as
\[\Psi^{i}_{k}=\sum_{j\in\mathcal{N}^{i}}\Big{\{}\big{(}||p^{i}_{k}- p^{j}_{k}-d^{ij}||^{2}+||v^{i}_{k}-v^{j}_{k}||^{2}\big{)}\] \[\qquad\qquad+\beta^{i}\Big{(}\frac{||p^{i}_{k}-p^{j}_{k}-d^{ij}|| ^{2}+||v^{i}_{k}-v^{j}_{k}||^{2}}{||p^{i}_{k}-p^{j}_{k}||^{2}-r^{i^{2}}}\Big{)} \Big{\}}, \tag{5}\]
where \(r^{i}\) is the safety radius of agent \(i\), assumed to be the same for all \(N\) homogeneous agents, i.e., \(r^{i}=r\), and \(\beta^{i}>0\) is a tuning parameter for agent \(i\). By expanding the squared norms, (5) can be transformed into the matrix form
\[\sum_{j\in\mathcal{N}^{i}}\Big{\{}||p^{i}_{k}-p^{j}_{k}||^{2}-2(p^{i}_{k}-p^{j}_{k})^{T}d^{ij}+||d^{ij}||^{2}+||v^{i}_{k}-v^{j}_{k}||^{2}\] \[\qquad+\frac{\beta^{i}||p^{i}_{k}-p^{j}_{k}||^{2}}{||p^{i}_{k}-p^{j}_{k}||^{2}-r^{2}}-\frac{2\beta^{i}(p^{i}_{k}-p^{j}_{k})^{T}d^{ij}}{||p^{i}_{k}-p^{j}_{k}||^{2}-r^{2}}\] \[\qquad\qquad+\frac{\beta^{i}||d^{ij}||^{2}}{||p^{i}_{k}-p^{j}_{k}||^{2}-r^{2}}+\frac{\beta^{i}||v^{i}_{k}-v^{j}_{k}||^{2}}{||p^{i}_{k}-p^{j}_{k}||^{2}-r^{2}}\Big{\}}=\] \[p^{T}_{k}\mathcal{L}^{i}_{\alpha}p_{k}-2p^{T}_{k}\mathcal{D}\mathcal{W}^{i}_{\alpha}d+d^{T}\mathcal{W}^{i}_{\alpha}d+v^{T}_{k}\mathcal{L}^{i}_{\alpha}v_{k}+p^{T}_{k}\mathcal{L}^{i}_{\beta}(x_{k})p_{k}\] \[\qquad-2p^{T}_{k}\mathcal{D}\mathcal{W}^{i}_{\beta}(x_{k})d+d^{T}\mathcal{W}^{i}_{\beta}(x_{k})d+v^{T}_{k}\mathcal{L}^{i}_{\beta}(x_{k})v_{k}\] \[\qquad=x^{T}_{k}\big{(}Q^{i}_{\alpha}+Q^{i}_{\beta}(x_{k})\big{)}x_{k}=x^{T}_{k}Q^{i}(x_{k})x_{k}\]
where
\[Q^{i}_{\alpha}=\delta\begin{bmatrix}\mathcal{L}^{i}_{\alpha}&- \mathcal{D}\mathcal{W}^{i}_{\alpha}d&0\\ -(\mathcal{D}\mathcal{W}^{i}_{\alpha}d)^{T}&d^{T}\mathcal{W}^{i}_{\alpha}d&0\\ 0&0&\mathcal{L}^{i}_{\alpha}\end{bmatrix}\]
has size \(\mathbb{R}^{(2Nn+1)\times(2Nn+1)}\), where \(\mathcal{W}^{i}_{\alpha}=W^{i}\otimes I_{n}\in\mathbb{R}^{Mn\times Mn}\) is the lifted diagonal matrix of edge weights. A lifted local Laplacian matrix is defined as \(\mathcal{L}^{i}_{\alpha}=\mathcal{D}\mathcal{W}^{i}_{\alpha}\mathcal{D}^{T}\in\mathbb{R}^{Nn\times Nn}\), with \(\mathcal{D}=D\otimes I_{n}\in\mathbb{R}^{Nn\times Mn}\) being the incidence matrix lifted to the dimension \(n\) of the space in which the agents move, and \(d=\text{col}(d^{ij})\in\mathbb{R}^{Mn}\) the column vector stacking the desired displacement vectors \(d^{ij}\in\mathbb{R}^{n}\). The state-dependent weighting matrix is then
\[Q^{i}_{\beta}(x_{k})=\delta\begin{bmatrix}\mathcal{L}^{i}_{\beta}(x_{k})&- \mathcal{D}\mathcal{W}^{i}_{\beta}(x_{k})d&0\\ -(\mathcal{D}\mathcal{W}^{i}_{\beta}(x_{k})d)^{T}&d^{T}\mathcal{W}^{i}_{\beta}(x _{k})d&0\\ 0&0&\mathcal{L}^{i}_{\beta}(x_{k})\end{bmatrix}\]
of size \(\mathbb{R}^{(2Nn+1)\times(2Nn+1)}\), with the state-dependent Laplacian matrix defined as \(\mathcal{L}^{i}_{\beta}(x_{k})=\mathcal{D}\mathcal{W}^{i}_{\beta}(x_{k})\mathcal{D}^{T}\in\mathbb{R}^{Nn\times Nn}\). The state-dependent edge weight matrix is \(\mathcal{W}^{i}_{\beta}(x_{k})=W^{i}_{\beta}(x_{k})\otimes I_{n}\in\mathbb{R}^{Mn\times Mn}\), where now the \(m\)-th diagonal entry of \(W^{i}_{\beta}(x_{k})\in\mathbb{R}^{M\times M}\) is equal to \(\frac{\beta^{i}}{||p^{i}_{k}-p^{j}_{k}||^{2}-r^{2}}\) if \(e^{m}\in\mathcal{E}^{i}\) and zero otherwise. The Laplacian matrix \(\mathcal{L}^{i}_{\beta}(x_{k})\) depends on the state since the diagonal edge matrix \(\mathcal{W}^{i}_{\beta}(x_{k})\) contains collision terms between agents \(i\) and \(j\).
The formulation of the state vector \(x_{k}\in\mathbb{R}^{2Nn+1}\) has been confirmed, and as a result, the state matrix \(F\) matches the dimensions of the state weighting matrix \(Q^{i}(x_{k})\in\mathbb{R}^{(2Nn+1)\times(2Nn+1)}\).
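A minimal sketch of how \(Q^{i}_{\alpha}\) could be assembled from these ingredients (Python/NumPy; the graph is the one used later in Section IV and the stacked displacement vector \(d\) is filled with placeholder values):
```python
import numpy as np

delta, N, M, n = 0.1, 4, 5, 2
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
D = np.zeros((N, M))
for m, (src, snk) in enumerate(edges):
    D[src, m], D[snk, m] = 1.0, -1.0

agent = 0
W_i = np.diag([1.0 if agent in e else 0.0 for e in edges])   # local edge weights of agent i

Dn  = np.kron(D, np.eye(n))        # lifted incidence matrix, Nn x Mn
Wia = np.kron(W_i, np.eye(n))      # lifted local edge weights, Mn x Mn
Lia = Dn @ Wia @ Dn.T              # lifted local Laplacian, Nn x Nn
d   = np.ones(M * n)               # stacked desired displacements (placeholder values)

# Q_alpha^i = delta * [[L, -D*W*d, 0], [-(D*W*d)^T, d^T*W*d, 0], [0, 0, L]]
DWd = Dn @ Wia @ d
Q_alpha = delta * np.block([
    [Lia,                      -DWd[:, None],               np.zeros((N * n, N * n))],
    [-DWd[None, :],            np.array([[d @ Wia @ d]]),   np.zeros((1, N * n))],
    [np.zeros((N * n, N * n)), np.zeros((N * n, 1)),        Lia],
])
print(Q_alpha.shape)               # (2Nn+1, 2Nn+1)
```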
**Assumption 2**: _The initial positions of the agents satisfy \(\|p^{i}_{0}-p^{j}_{0}\|>r^{i}+r^{j}\), for all \(i,j\in N\), \(j\neq i\)._
By adopting the same reasoning as outlined in [9], Assumption 2 ensures that the term \(x^{T}_{0}Q^{i}(x_{0})x_{0}\) in (4), for all \(i\in N\), remains bounded. It follows that the agents operate without entering the avoidance region.
### _Nash Equilibrium and Coupled State-Dependent Riccati Equation (CSDRDE)_
The formulation of the formation control problem with dynamics (3) and cost functions (4) as a game reflects the non-cooperative behavior, where each player is searching for a Nash equilibrium corresponding to its own local cost function.
**Definition 1**: _A collection of strategies \(U^{i\star}\) constitutes a Nash equilibrium if and only if the inequalities_
\[J^{i}(U^{1\star},...,U^{N\star})\leq J^{i}(U^{1\star},...,U^{i-1\star},U^{i},U^{i+ 1\star},...,U^{N\star})\]
_hold for \(i=1,...,N\)._
We now formulate the first problem (for the multi-agent system running on the nodes) as follows.
**Problem 1**: _Find local control sequences that achieve a Nash equilibrium corresponding to the local cost functions (4) over the control input sequences \(u^{i}\) subject to (3)._
**Theorem 1**: _An open-loop Nash equilibrium for the game defined by Problem 1 is achieved by the control sequences_
\[u^{i\star}_{k}(x_{k})=K^{i}_{k}(x_{k})x_{k}, \tag{6}\]
_where_
\[K^{i}_{k}(x_{k})=-R^{i^{-1}}G^{i^{T}}P^{i}_{k+1}(x_{k+1})\Lambda^{-1}_{k}F, \tag{7}\]
_and \(P^{i}_{k+1}(x_{k+1})\) is the solution to the coupled state-dependent Riccati difference equation_
\[P^{i}_{k}(x_{k})=F^{T}P^{i}_{k+1}(x_{k+1})\Lambda^{-1}_{k+1}F+Q^{i}(x_{k})\\ +\big{(}I_{N}\otimes x^{T}_{k}\big{)}\left[x^{T}_{k}\frac{\partial Q ^{i}(x_{k})}{\partial x^{i}_{k}},\ \ \...,\ \ x^{T}_{k}\frac{\partial Q^{i}(x_{k})}{\partial x^{i}_{k}}\right]^{T}, \tag{8}\]
_which can be solved backward with \(P^{i}_{T}(x_{T})=Q^{i}_{T}(x_{T})\). The corresponding closed-loop state trajectory is_
\[x^{\star}_{k+1}=\Lambda^{-1}_{k}Fx^{\star}_{k}, \tag{9}\]
_where_
\[\Lambda_{k}=\Big{(}I+\sum_{j=1}^{N}G^{j}R^{jj^{-1}}G^{j^{T}}P^{j}_{k+1}(x_{k+1})\Big{)}. \tag{10}\]
## III Distributed Framework
Building upon the method from [5], this section outlines a distributed strategy to address the issue. The strategy involves an associated fictitious multi-agent system that evolves on the edges of the communication graph, departing from the conventional node-based approach.
### _The Edge System_
Inspired by [4], we associate a fictitious agent with each edge (\(\nu^{i},\nu^{j}\)) of the communication graph with dynamics
\[\begin{bmatrix}q_{k}^{m}\\ w_{k}^{m}\end{bmatrix}=\begin{bmatrix}p_{k}^{i}-p_{k}^{j}-d^{ij}\\ v_{k}^{i}-v_{k}^{j}\end{bmatrix}\quad\text{and}\;\;a_{k}^{m}=u_{k}^{i}-u_{k}^{j}, \tag{11}\]
for \(m=1,...,M\). The state vector for edge agent \(m\) is \(z_{k}^{m}=\begin{bmatrix}q_{k}^{m},w_{k}^{m}\end{bmatrix}\in\mathbb{R}^{2n}\). Then, the relative dynamics for edge agent \(m\) is
\[z_{k+1}^{m}=fz_{k}^{m}+ga_{k}^{m},\quad\text{for }m=1,\dots,M.\]
The state vector for the whole edge system can be arranged as \(\tilde{z}_{k}=[z_{k}^{1},...,z_{k}^{M}]^{T}\in\mathbb{R}^{2Mn}\). We rearrange the states by a permutation
\[z_{k}=\Pi\tilde{z}_{k},\]
with permutation matrix
\[\Pi=\begin{bmatrix}I_{M}\otimes\begin{bmatrix}1&0\end{bmatrix}\\ I_{M}\otimes\begin{bmatrix}0&1\end{bmatrix}\end{bmatrix}\otimes I_{n}\in\mathbb{R}^{2Mn\times 2Mn}.\]
Therefore, the whole edge dynamics can be written as
\[z_{k+1}=\bar{F}z_{k}+\sum_{m=1}^{M}\bar{G}^{m}a_{k}^{m}, \tag{12}\]
where \(z_{k}=[q_{k}^{1},...,q_{k}^{M},w_{k}^{1},...,w_{k}^{M}]^{T}\in\mathbb{R}^{2Mn}\) and
\[\bar{F} =\begin{pmatrix}\begin{bmatrix}1&\delta\\ 0&1\end{bmatrix}\otimes I_{M}\otimes I_{n}\end{pmatrix}\in\mathbb{R}^{2Mn\times 2Mn},\] \[\bar{G}^{m} =\begin{bmatrix}\frac{\delta^{2}}{2}\bar{g}^{m}\\ \delta\bar{g}^{m}\end{bmatrix}\in\mathbb{R}^{2Mn\times n},\]
with \(\bar{g}^{m}=\bar{c}^{m}\otimes I_{n}\in\mathbb{R}^{Mn\times n}\), where \(\bar{c}^{m}\) is the \(m\)-th column of identity matrix of size \(M\).
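A sketch of the corresponding edge-system construction (Python/NumPy; node states and displacements are random placeholders), mapping node quantities to the relative states of (11) and building \(\bar{F}\) and \(\bar{G}^{m}\) of (12):
```python
import numpy as np

delta, N, M, n = 0.1, 4, 5, 2
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

# Edge dynamics (12): z_k = [q^1..q^M, w^1..w^M] follows the same double-integrator structure.
F_bar = np.kron(np.array([[1.0, delta], [0.0, 1.0]]), np.eye(M * n))

def G_bar(m):
    """Input matrix of edge agent m (0-indexed) for the relative input a^m = u^i - u^j."""
    Gm = np.zeros((2 * M * n, n))
    Gm[m * n:(m + 1) * n, :] = (delta**2 / 2.0) * np.eye(n)
    Gm[M * n + m * n:M * n + (m + 1) * n, :] = delta * np.eye(n)
    return Gm

# Relative states per (11): q^m = p^i - p^j - d^ij and w^m = v^i - v^j.
p, v = np.random.randn(N, n), np.random.randn(N, n)
d = np.random.randn(M, n)                                  # desired displacements (placeholders)
q = np.array([p[i] - p[j] - d[m] for m, (i, j) in enumerate(edges)])
w = np.array([v[i] - v[j] for (i, j) in edges])
z = np.concatenate([q.ravel(), w.ravel()])                 # stacked edge state in R^{2Mn}
print(F_bar.shape, G_bar(0).shape, z.shape)
```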
### _Formation with Collision Avoidance on the Edge System_
Since we relocated the coupling terms that were initially in the cost function to the system dynamics, the local error for an edge agent \(m\) at time instance \(k\) to be minimized is
\[\bar{\Psi}_{k}^{m} =\alpha^{m}\big{(}||q_{k}^{m}||^{2}+||w_{k}^{m}||^{2}\big{)}+ \beta^{m}\Big{(}\frac{||q_{k}^{m}||^{2}+||w_{k}^{m}||^{2}}{||q_{k}^{m}+d^{ij}|| ^{2}-r^{2}}\Big{)}\] \[=z_{k}^{T}\big{(}\bar{Q}_{\alpha}^{m}+\bar{Q}_{\beta}^{m}(z_{k}^{ m})\big{)}z_{k}\] \[=z_{k}^{T}\bar{Q}^{m}(z_{k}^{m})z_{k}\]
where
\[\bar{Q}_{\alpha}^{m} =\delta(I_{2}\otimes\bar{W}_{\alpha}^{m}\otimes I_{n})\in\mathbb{ R}^{2Mn\times 2Mn},\] \[\bar{Q}_{\beta}^{m}(z_{k}^{m}) =\delta(I_{2}\otimes\bar{W}_{\beta}^{m}(z_{k}^{m})\otimes I_{n}) \in\mathbb{R}^{2Mn\times 2Mn},\]
where \(\bar{W}_{\alpha}^{m}\in\mathbb{R}^{M\times M}\) is a diagonal matrix whose \(m\)-th diagonal entry is equal to \(\alpha^{m}\) and zero otherwise, and \(\bar{W}_{\beta}^{m}(z_{k}^{m})\in\mathbb{R}^{M\times M}\) is a diagonal matrix whose \(m\)-th diagonal entry is equal to \(\frac{\beta^{m}}{||q_{k}^{m}+d^{ij}||^{2}-r^{2}}\) and zero otherwise.
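As a sketch of this state-dependent weight (Python/NumPy; the plain Euclidean norm is used for brevity, whereas the paper employs the \(\sigma\)-norm to keep the expression differentiable), the per-edge matrix \(\bar{Q}^{m}(z_{k}^{m})=\bar{Q}^{m}_{\alpha}+\bar{Q}^{m}_{\beta}(z_{k}^{m})\) can be evaluated as:
```python
import numpy as np

def Q_bar_m(z, m, d_m, M, n, delta=0.1, alpha=1.0, beta=1.0, r=0.5):
    """Per-edge weight delta*(I_2 kron W kron I_n), where only the m-th diagonal entry of W is nonzero."""
    q_m = z[m * n:(m + 1) * n]                  # relative-position block of edge m
    w = np.zeros(M)
    # alpha^m plus the collision term beta^m / (||q^m + d^ij||^2 - r^2), which grows near the safety radius.
    w[m] = alpha + beta / (np.dot(q_m + d_m, q_m + d_m) - r**2)
    return delta * np.kron(np.eye(2), np.kron(np.diag(w), np.eye(n)))

# Example: M = 5 edges in the plane (n = 2), edge m = 0 with desired displacement d^ij = [1.5, 1].
z = np.random.randn(2 * 5 * 2)
print(Q_bar_m(z, 0, np.array([1.5, 1.0]), M=5, n=2).shape)   # (2Mn, 2Mn) = (20, 20)
```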
Note that in contrast to \(Q^{i}(x_{k})\) from the first problem, \(\bar{Q}^{m}(z_{k}^{m})\) here is a block diagonal matrix where edge dynamics are decoupled. Therefore, we can arrange the decoupled cost function for the \(m\)-th edge as
\[\bar{J}^{m}(A^{m})=\frac{1}{2}\Big{(}Z_{k}^{T}\bar{Q}^{m}(z_{k}^{m})Z_{k}+{A_{ k}^{m}}^{T}\bar{\mathcal{R}}^{mm}A_{k}^{m}\Big{)}, \tag{13}\]
where the stacked edge state vector now is arranged as \(Z_{k}=[z_{k+1},z_{k+2},...,z_{k+T}]^{T}\in\mathbb{R}^{2MnT}\) and the stacked relative control inputs vector is \(A_{k}^{m}=[a_{k}^{m},a_{k+1}^{m},...,a_{k+T-1}^{m}]^{T}\in\mathbb{R}^{MnT}\).
The state weighting matrix for the new cost evolving on edges is defined as \(\bar{Q}^{m}(z_{k}^{m})=\text{blkdiag}\big{(}\bar{Q}^{m}(z_{k}^{m}),...,\bar{Q}^{m}(z_{k}^{m}),\bar{Q}_{T}^{m}(z_{T}^{m})\big{)}\in\mathbb{R}^{2MnT\times 2MnT}\), where the terminal cost \(\bar{Q}_{T}^{m}(z_{T}^{m})\in\mathbb{R}^{2Mn\times 2Mn}\) has the same pattern as \(\bar{Q}^{m}(z_{k}^{m})\) with arbitrary choices of scalar weights instead of \(\alpha^{m},\beta^{m}>0\). The control weight is \(\bar{\mathcal{R}}^{mm}=\text{blkdiag}(\bar{R}^{mm})\in\mathbb{R}^{MnT\times MnT}\) with a positive definite matrix \(\bar{R}^{mm}\in\mathbb{R}^{Mn\times Mn}\). Finally, we formulate the new problem for the edge dynamics (12) as follows.
**Problem 2**: _Minimize the local cost function (13) over the relative acceleration control input sequences \(a^{i}\) subject to dynamics (12)._
**Theorem 2**: _The optimal solution to Problem 2 is_
\[a_{k}^{m*}(z_{k}^{m})=\bar{K}_{k}^{m}(z_{k}^{m})z_{k},\quad\text{ for }m=1,...,M, \tag{14}\]
_where_
\[\bar{K}_{k}^{m}(z_{k}^{m})=-(\bar{R}^{mm}+\bar{G}^{m^{T}}\bar{P}_{ k+1}^{m}(z_{k+1}^{m})\bar{G}^{m})^{-1}\times\] \[\bar{G}^{m^{T}}\bar{P}_{k+1}^{m}(z_{k+1}^{m})\bar{F}, \tag{15}\]
_and \(\bar{P}_{k+1}^{m}(z_{k+1}^{m})\) is the solution to the decoupled state-dependent Riccati difference equation_
\[\bar{P}_{k}^{m}(z_{k}^{m})=\bar{F}^{T}\bar{P}_{k+1}^{m}(z_{k+1}^{m}) \bar{F}+\bar{F}^{T}\bar{P}_{k+1}^{m}(z_{k+1}^{m})\bar{G}^{m}\bar{K}_{k}^{m}(z_{ k}^{m})\] \[+\bar{Q}^{m}(z_{k}^{m})+(I_{M}\otimes z_{k}^{m^{T}})\left[z_{k}^{m^ {T}}\frac{\partial\bar{Q}^{m}(z_{k}^{m})}{\partial z_{k}^{m}}\;...\;z_{k}^{m^{T}} \frac{\partial\bar{Q}^{m}(z_{k}^{m})}{\partial z_{k}^{m}}\right]^{T} \tag{16}\]
_with \(\bar{P}_{T}^{m}(z_{T}^{m})=\bar{Q}_{T}^{m}(z_{T}^{m})\)._
The proof can be shown similarly to Section 3 in [10].
Note that because both \(\bar{Q}^{m}(z_{k}^{m})\) and its derivative in (16) involve the norm of a variable, the \(\sigma\)-norm defined in (1) is employed to ensure differentiability throughout. The feedback gains \(\bar{K}_{k}^{m}(z_{k}^{m})\) in (15) are now decoupled from each other. This decoupling permits, in principle, a distributed implementation, in contrast to \(K_{k}^{i}(x_{k})\) in (7).
However, solving the decoupled SDRDE in (16) is challenging due to its state-dependency. To address this challenge, we embrace the receding horizon technique for solving SDRDE presented in [7]. The approach involving the decoupled SDRDE entails the following steps:
1. Utilize the state-feedback gains \(\bar{K}_{k}^{m}\) from (15) computed in the previous iteration. Let \(z_{k}^{p}\) be the prediction of the dynamics, commencing from the current state \(z_{k}\).
2. Work backwards in time to compute the Riccati solution, yielding \(\bar{P}_{k+T}^{m},...,\bar{P}_{k+1}^{m}\) along the predicted state trajectory.
3. Employ this information to update the state feedback gains \(\bar{K}_{k}^{m},...,\bar{K}_{k+T-1}^{m}\). Implement the first gain \(\bar{K}_{k}^{m}\) for control purposes.
4. At the subsequent sampling instant, repeat this process, and make use of the remaining gains \(\bar{K}_{k+1}^{m},...,\bar{K}_{k+T-1}^{m}\).
5. Determine the terminal gain \(\bar{K}_{k+T}^{m}\) required for the next iteration by solving the decoupled SDRDE along the predicted states. This approach facilitates a receding horizon strategy.
The detailed steps to evaluate the decoupled SDRDE approach are provided in Algorithm 1.
```
0:\(\bar{Q}_{\alpha}^{m}\), \(\bar{Q}_{\beta}^{m}(z_{k}^{m})\) and its derivative \(\frac{\partial\bar{Q}_{\beta}^{m}(z_{k}^{m})}{\partial z^{m}}\), horizon \(T\), number of edges \(M\), tolerance \(\varepsilon\), and \(t_{max}\)
0: The relative control inputs \(a_{k}^{m\star}\)
1: At time \(k\) do the following
2:\(t=1\)
3: Initialize \(z_{k=1}^{p}=z_{k}\)
4: Initialize \(\bar{K}^{m}=\text{zeros}(M,2M,T)\)
5:while\(t\leq t_{max}\) and \(||\bar{K}_{old}^{m}-\bar{K}^{m}||>\varepsilon\)do
6:\(\bar{K}_{old}^{m}=\bar{K}^{m}\)
7:for\(j=1:T\)do
8:\(z_{j+1}^{p}=\left(\bar{F}+\sum_{m=1}^{M}\bar{G}^{m}\bar{K}^{m}\right)\!z_{j}^ {p}\)
9:endfor
10: Set \(\bar{P}_{T}^{m}=\bar{Q}_{\beta}^{m}(z_{T}^{p})\)
11:for\(j=T:-1:2\)do
12: Calculate \(\bar{P}_{j-1}^{m}\) from (16)
13: Calculate \(\bar{K}^{m}(:,:,j-1)\) from (15)
14:endfor
15:\(t\gets t+1\)
16:endwhile
17: Set \(a_{k}^{m\star}=\bar{K}^{m}(:,:,j)z_{k}\)
```
**Algorithm 1** DSDRDE [7]
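To make the structure of Algorithm 1 concrete, the following is a much-simplified sketch (Python/NumPy). It iterates a forward prediction and a backward Riccati sweep per edge; for brevity the derivative term of (16) is omitted and the convergence test on the gains is replaced by a fixed number of iterations, so this is only a structural illustration, not the full DSDRDE of [7].
```python
import numpy as np

def dsdrde_gains(z_k, F_bar, G_bars, Q_of_z, Q_T_of_z, R_bars, T, iters=5):
    """Receding-horizon gains for the decoupled edge SDRDEs (structural sketch only).

    Q_of_z(m, z) and Q_T_of_z(m, z) return the running/terminal state weight of edge m at state z.
    """
    M, nz = len(G_bars), F_bar.shape[0]
    K = [[np.zeros((G_bars[m].shape[1], nz)) for _ in range(T)] for m in range(M)]
    for _ in range(iters):
        # Forward prediction of the edge trajectory with the current gains.
        z_pred = [z_k]
        for j in range(T):
            z_next = F_bar @ z_pred[-1]
            for m in range(M):
                z_next = z_next + G_bars[m] @ (K[m][j] @ z_pred[-1])
            z_pred.append(z_next)
        # Backward sweep of the decoupled Riccati recursions (15)-(16), per edge.
        for m in range(M):
            Gm, Rm = G_bars[m], R_bars[m]
            P = Q_T_of_z(m, z_pred[T])
            for j in range(T - 1, -1, -1):
                K[m][j] = -np.linalg.solve(Rm + Gm.T @ P @ Gm, Gm.T @ P @ F_bar)
                P = F_bar.T @ P @ F_bar + F_bar.T @ P @ Gm @ K[m][j] + Q_of_z(m, z_pred[j])
    # Apply only the first gain of each edge; the remaining gains warm-start the next sampling instant.
    a_star = [K[m][0] @ z_k for m in range(M)]
    return a_star, K
```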
### _Distributed Implementation_
In this subsection, we show how to obtain the optimal control inputs of the physical vertex agents from the relative control inputs \(a_{k}^{m\star}\) of the fictitious (edge) agents in a distributed fashion. We will use the symbol \(\hat{u}_{k}^{i\star}\) to denote the physical control inputs corresponding to the fictitious relative control inputs \(a_{k}^{m\star}\). Recall that from (11), we can express the relation between \(a_{k}^{\star}\) and \(\hat{u}_{k}^{\star}\) as
\[\Phi\hat{u}_{k}^{\star}=a_{k}^{\star},\]
where \(\Phi=D^{T}\otimes I_{n}\) and \(a_{k}^{\star}=[a_{k}^{1\star T},\dots,a_{k}^{M\star T}]^{T}\).
We consider minimizing the residual \(f(u)=||\Phi\hat{u}_{k}^{\star}-a_{k}^{\star}||^{2}\). Since the undirected graph \(\mathcal{G}\) is assumed to be connected, there exists a unique solution to minimizing the residual \(f(u)\), given by
\[\hat{u}_{k}^{\star}=\Phi^{\dagger}a_{k}^{\star}, \tag{17}\]
where \(\Phi^{\dagger}\) is the pseudo-inverse of \(\Phi\). Since \(\Phi^{\dagger}\) is a fully populated matrix, this will lead to a centralized solution. To compute (17) in a distributed way, a distributed steepest descent algorithm is employed, which updates the local control input at iteration step \(l\) according to
\[\hat{u}_{l+1}^{\star}=(I-2\gamma\Phi^{T}\Phi)\hat{u}_{l}^{\star}+2\gamma\Phi^{ T}a_{k}^{\star}, \tag{18}\]
with \(\gamma\) as a learning rate that satisfies
\[2\gamma\leq\frac{2}{\lambda_{\max}(\Phi^{T}\Phi)}.\]
It was demonstrated in [11] that this algorithm converges to a solution \(\hat{u}_{k}^{\star}\) in (17) which is unique. The key fact is that the two matrices on the right-hand side of (18) are sparse and allow a distributed computation of the updates \(\hat{u}_{l+1}^{\star}\). The detailed steps to evaluate this approach are provided in Algorithm 2.
```
0: Tuning parameter \(\gamma\), tolerance \(\varepsilon\), and \(l_{max}\)
1: At time \(k\) do the following
2:\(l=1\)
3: Initialize \(\hat{u}_{l=1}^{i}\), for \(i=1,...,N\) with "warm start"
4:while\(l\leq l_{max}\) and \(||\Phi\hat{u}_{l}-a_{k}^{\star}||>\varepsilon\)do
5: Receive \(\hat{u}_{l}^{j}\) from agent(s) \(j\in\mathcal{N}^{i}\)
6: Calculate \(\hat{u}_{l+1}^{i}=\hat{u}_{l}^{i}-2\gamma\sum_{j\in\mathcal{N}^{i}}\left(\hat{u}_{l}^{i}-\hat{u}_{l}^{j}-a_{(ij)}^{\star}\right)\)
7: Broadcast\(\hat{u}_{l+1}^{i}\) to agent(s) \(j\in\mathcal{N}^{i}\)
8:\(l\gets l+1\)
9:endwhile
```
**Algorithm 2** Iterative Distributed Steepest Descent on Updating Local Control Inputs [5]
The sixth step in Algorithm 2 can be interpreted as follows: during each iteration, the estimate at node \(i\) is updated based on the errors in the relative control inputs, and each error in the edge (i.e., the difference between the estimated and measured edge difference) contributes to a correction in the node value.
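A compact sketch of this distributed mapping (Python/NumPy), iterating (18) and comparing against the centralized pseudo-inverse solution (17) on the graph of the numerical example in Section IV; the step size is taken strictly inside the bound for convergence:
```python
import numpy as np

def map_edge_to_node_inputs(D, a_star, n, iters=10, u_init=None):
    """Distributed steepest descent (18): recover node inputs u from relative inputs a* with Phi*u = a*."""
    N, M = D.shape
    Phi = np.kron(D.T, np.eye(n))                                 # Phi = D^T kron I_n, size Mn x Nn
    gamma = 0.9 / np.max(np.linalg.eigvalsh(Phi.T @ Phi))         # strictly inside 2*gamma <= 2/lambda_max
    u = np.zeros(N * n) if u_init is None else u_init.copy()      # a "warm start" would be used in practice
    for _ in range(iters):
        # Only neighbour-to-neighbour information is needed, since Phi^T*Phi and Phi^T are sparse.
        u = (np.eye(N * n) - 2 * gamma * Phi.T @ Phi) @ u + 2 * gamma * Phi.T @ a_star
    return u

# Graph of Section IV (N = 4, M = 5) and a random stack of relative inputs a* in R^{Mn}.
D = np.array([[ 1,  1,  0,  0,  0],
              [-1,  0,  1,  1,  0],
              [ 0, -1, -1,  0,  1],
              [ 0,  0,  0, -1, -1]], dtype=float)
n = 2
a_star = np.random.randn(D.shape[1] * n)
u_dist = map_edge_to_node_inputs(D, a_star, n, iters=200)
u_cent = np.linalg.pinv(np.kron(D.T, np.eye(n))) @ a_star        # centralized solution (17)
print(np.linalg.norm(u_dist - u_cent))                           # close to zero after enough iterations
```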
## IV Illustrative Example
This section illustrates the proposed approach with a formation control problem where double integrator agents are moving in \(n=2\) dimensional space. We consider \(N=4\) agents and \(M=5\) edges with an undirected communication graph, as displayed in Figure 1.
The incidence matrix is
\[D=\begin{bmatrix}1&1&0&0&0\\ -1&0&1&1&0\\ 0&-1&-1&0&1\\ 0&0&0&-1&-1\end{bmatrix}\in\mathbb{R}^{4\times 5}.\]
We assume that all agents have zero initial velocities, except agent 1 that has \(v_{0}^{1}=\left[0.5,1\right]^{T}\). The agents have initial
positions
\[p_{0}^{1}=\begin{bmatrix}3.5\\ 1\end{bmatrix},p_{0}^{2}=\begin{bmatrix}12\\ 1\end{bmatrix},p_{0}^{3}=\begin{bmatrix}0\\ 5\end{bmatrix},p_{0}^{4}=\begin{bmatrix}15\\ 3.5\end{bmatrix},\]
with the desired displacements vectors and safety radius
\[d^{12} =\begin{bmatrix}1.5\\ 1\end{bmatrix}, d^{13}=\begin{bmatrix}0\\ 2\end{bmatrix}, d^{23}=\begin{bmatrix}-1.5\\ 1\end{bmatrix},\] \[d^{24} =\begin{bmatrix}-3\\ 0\end{bmatrix}, d^{34}=\begin{bmatrix}-1.5\\ -1\end{bmatrix}, r=0.5.\]
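As a quick sanity check on this specification (Python/NumPy, not part of the original setup), one can verify that the desired displacements are cycle-consistent, i.e., realizable by a set of absolute positions:
```python
import numpy as np

D = np.array([[ 1,  1,  0,  0,  0],
              [-1,  0,  1,  1,  0],
              [ 0, -1, -1,  0,  1],
              [ 0,  0,  0, -1, -1]], dtype=float)
n = 2
Phi = np.kron(D.T, np.eye(n))           # maps node positions to edge differences p^i - p^j

# Stacked desired displacements d = col(d^12, d^13, d^23, d^24, d^34), in the edge order of D.
d = np.array([1.5, 1, 0, 2, -1.5, 1, -3, 0, -1.5, -1], dtype=float)

# The formation is realizable iff d lies in the range of Phi (no cycle inconsistency).
p_star, *_ = np.linalg.lstsq(Phi, d, rcond=None)
print(np.linalg.norm(Phi @ p_star - d))  # ~0: the displacement specification is consistent
```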
### _Simulation Results_
We first show the evolution of agents' positions if collision avoidance is ignored, by taking \(\bar{Q}_{\beta}^{m}(z_{k}^{m})=0\), for all \(m\in M\).
Fig. 2 depicts the four agents moving in the \(x-y\)-plane. A sampling time of 100\(ms\) is used, and the dashed lines display the final trajectories over a period of 20\(s\). The solid lines represent the intermediate progression run over 4\(s\). The figure illustrates that collisions between agents occur within 4\(s\).
For simulating formation control with collision avoidance, we construct \(\bar{Q}_{\beta}^{m}(z_{k}^{m})\) individually for each edge, wherein \(\beta^{m}\) are set to 1 for all \(m\in M\). Initially, we execute the steps associated with the DSDRDE using a horizon of \(T=10\), yielding the relative control inputs \(a_{k}^{m\star}\). Once these relative inputs are acquired, we proceed with running the distributed steepest descent method to compute the physical control inputs \(\hat{u}_{k}^{i}\) and to simulate the actual dynamics.
Formation with collision avoidance is visually presented in Figs. 3 and 4. It is run for 4 and 7\(s\), respectively. As depicted in Fig. 3, agents one, two, and three successfully avoid collisions, in contrast to the scenario shown in Fig. 2. When we extend the simulation time to 7\(s\), a noteworthy observation emerges: agent three closely follows agent two, who, in turn, tracks agent one, resulting in their alignment within the desired formation.
The last two plots compare control input progression achieved through the centralized solution in (17) and the distributed approach in (18) with 10 iterations per sampling interval. Each plot displays the \(x\)-direction evolution, with blue stars indicating the distributed solution and orange diamonds representing the centralized solution. In Fig. 6, the distributed scheme quickly converges to the centralized solution, despite a small initial gap. Meanwhile, Fig. 5 reveals that agent one's control input convergence is slower due to limited interaction with only two neighbors. All visualizations were generated using the code in [12].
### _Cost Comparison_
Furthermore, it is interesting to compare the costs between the original problem solved on the nodes that result in the Nash equilibrium and the reformulated problem directly solved on the edge of a network graph. Although the first problem cannot be implemented in a distributed way, we simulated it solely for cost comparison purposes, as follows. After obtaining the solution \(u_{k}^{i*}\) for the Nash equilibrium in (6) and its corresponding closed-loop trajectory \(x_{k+1}^{*}\) in (9), we can derive the relative state \(\bar{z}_{k}\) and the relative inputs \(\bar{a}_{k}^{m}\) from the coupled problem, as shown in (11). These values are then used to compute the cost in (13) and are denoted by \(\bar{J}_{ Nash}^{m}\). In contrast, for the decoupled problem, the values of
Fig. 1: Arbitrary orientation of the \(M=5\) edges of an undirected graph with \(N=4\) nodes.
Fig. 5: Control input of agent 1; centralised and distributed solution with 10 iterations/sampling interval. Fig. 6: Control input of agent 3; centralised and distributed solution with 10 iterations/sampling interval.
Fig. 3: Progression of four agents’ position on \(x,y\)-axes with collision avoidance in 4 s. Fig. 4: Progression of four agents’ position on \(x,y\)-axes with collision avoidance in 7 s.
Fig. 2: Progression of four agents’ position on \(x,y\)-axes in 4 seconds, without collision avoidance.
\(z_{k}^{m}\) and \(a_{k}^{m}\) are used to compute the cost in (13), resulting in \(\bar{J}^{m}\).
Table I shows that the global cost achieved by directly running the problem on the edge, \(\sum_{m}^{M}\bar{J}^{m}=1654\), is significantly lower than the total cost obtained by running the problem on the nodes and then transforming it, i.e., \(\sum_{m}^{M}\bar{J}^{m}_{Nash}=2256\). As expected, the solution attained on the edge system is Pareto optimal.
## V Conclusions
This article addresses the challenge of guiding a group of \(N\) agents from their initial position to a desired formation while avoiding collisions with neighboring agents. The original problem is formulated as an LQDTG with a coupled SDRDE, which cannot be solved in a distributed fashion. To address this issue, a distributed approach is proposed. This approach is based on a fictitious MAS that operates on the edges of the graph rather than the nodes. The technique incorporates relative soft constraints on the edges to prevent collisions and requires the solution of a decoupled SDRDE, using a receding horizon technique. The proposed method leverages a distributed steepest descent algorithm to map the relative control inputs to the actual physical control inputs, resulting in a simple vector-matrix multiplication per iteration, in contrast to an iterative approach often used in distributed MPC, which requires solving an optimization problem in each sampling interval. The efficacy of the proposed method is demonstrated through simulation results.
## Proof of Theorem 1
This proof extends the methodology presented in [13] to the case of a state-dependent weighting matrix. Referring back to the dynamics (2) and the cost (4), assuming \(Q_{T}^{i}(x_{T})=Q^{i}(x_{T})\), we can now rewrite it as follows:
\[J^{i}=\frac{1}{2}\sum_{k=0}^{T}\Big{(}x_{k}^{T}Q^{i}(x_{k})x_{k}+\sum_{j=1}^{N }u_{k}^{i^{T}}R^{ij}u_{k}^{i}\Big{)}\]
To maintain consistency with the indexing used in [13], we will index the state costs related to the next time step.
\[J^{i}=\frac{1}{2}\sum_{k=0}^{T}\Big{(}x_{k+1}^{T}Q^{i}(x_{k+1})x_{k+1}+\sum_{j =1}^{N}u_{k}^{i^{T}}R^{ij}u_{k}^{i}\Big{)} \tag{19}\]
To begin, it is important to observe that the Hamiltonian function for the provided equations (2) and (19) is as follows:
\[H_{k}^{i}=\frac{1}{2} \Big{(}x_{k+1}^{T}Q^{i}(x_{k+1})x_{k+1}+\sum_{j=1}^{N}u_{k}^{i^{T }}R^{ij}u_{k}^{i}\Big{)}\] \[+\lambda_{k+1}^{i^{T}}\Big{(}Fx_{k}+\sum_{j=1}^{N}G^{j}u_{k}^{j} \Big{)} \tag{20}\]
Considering that \(Q^{i}(x_{k+1})\geq 0\) and \(R^{ii}>0\), setting the derivative (in control) of (20) to zero and assuming that states and costates are fixed to their optimal values lead to the following
\[u_{k}^{i\star}=-R^{ii^{-1}}G^{i^{T}}\Big{(}\lambda_{k+1}^{i}+Q^{ i}(x_{k+1})x_{k+1}^{\star}+(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star^{T}}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{i}}x_{k+1}^{\star},\ \...,\ \ x_{k+1}^{\star^{T}}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{i}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}. \tag{21}\]
Moreover, the equation for the costate (difference) is as follows
\[\lambda_{k}^{i}=F^{T}\Big{(}\lambda_{k+1}^{i}+Q^{i}(x_{k+1})x_{k+ 1}^{\star}+(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star^{T}}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{i}}x_{k+1}^{\star},\ \...,\ \ x_{k+1}^{\star^{T}}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{i}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}, \tag{22}\]
with \(\lambda_{T+1}^{i}=0\), where
\[x_{k+1}^{\star}=Fx_{k}^{\star}+\sum_{j=1}^{N}G^{j}u_{k}^{j\star},\ \ \ \ x_{1}^{\star}=x_{1}. \tag{23}\]
Starting with \(k=T\), the expression for (21) simplifies to:
\[u_{k}^{i\star}=-R^{ii^{-1}}G^{i^{T}}P^{i}(x_{T+1})x_{T+1}^{\star}, \tag{24}\]
by first premultiplying both sides with \(G^{i}\) and then summing over \(i\in N\), and also utilizing (23) and (10), we arrive at the following
\[x_{T+1}^{\star}-Fx_{T}^{\star}=(I-\Lambda_{T})x_{T+1}^{\star}\]
which further yields the unique relation
\[x_{T+1}^{\star}=\Lambda_{T}^{-1}Fx_{T}^{\star}\]
which is precisely (9) for \(k=T\). After substituting this relation into (24), we obtain (6) for \(k=T\).
Let's proceed to prove by induction that the unique solution set of (21)-(23) is given by (6)-(9) and
\[\lambda_{k}^{i\star}=F^{T}P_{k+1}^{i}(x_{k+1})x_{k+1}^{\star}. \tag{25}\]
To achieve that, we will proceed step by step from the final time towards the initial time. For the inductive step, we start
with (21) and substitute (25), followed by (9).
\[u_{k}^{i\star} =-R^{ii^{-1}}G^{i^{T}}\Big{(}\lambda_{k+1}^{i\star}+Q^{i}(x_{k+1})x_ {k+1}^{\star}+(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\ \ \...\ \ x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}\] \[=-R^{ii^{-1}}G^{i^{T}}\Big{(}F^{T}P_{k+2}^{i}(x_{k+2})x_{k+2}^{ \star}+Q^{i}(x_{k+1})x_{k+1}^{\star}\] \[\quad+(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\ \ \...\ \ x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}\] \[=-R^{ii^{-1}}G^{i^{T}}\Big{(}F^{T}P_{k+2}^{i}(x_{k+2})\Lambda_{k+ 1}^{-1}Fx_{k+1}^{\star}\] \[\quad+Q^{i}(x_{k+1})x_{k+1}^{\star}+(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\ \ \...\ \ x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}\] \[=-R^{ii^{-1}}G^{i^{T}}P_{k+1}^{i}(x_{k+1})x_{k+1}^{\star},\]
this process results in the same expression as (6). Next, we examine (23) and substitute the previous result obtained.
\[x_{k+1}^{\star} =Fx_{k}+\sum_{j=1}^{N}G^{j}u_{k}^{j\star}\] \[=Fx_{k}-\sum_{j=1}^{N}G^{j}R^{ii^{-1}}G^{i^{T}}P_{k+1}^{i}(x_{k+1 })x_{k+1}^{\star}\] \[\Big{[}I+\sum_{j=1}^{N}G^{j}R^{ii^{-1}}G^{i^{T}}P_{k+1}^{i}(x_{k+1 })\Big{]}x_{k+1}^{\star}=Fx_{k}\] \[x_{k+1}^{\star} =\Lambda_{k}^{-1}Fx_{k}.\]
We have now confirmed the validity of (6) and (9). To derive (25), we assume it holds up to \(k+1\) and verify its applicability by substitution into (22) for time step \(k\).
\[\lambda_{k}^{i\star} =F^{T}\Big{(}\lambda_{k+1}^{i\star}+Q^{i}(x_{k+1})x_{k+1}^{\star} +(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star},\ \ \...,\ \ \ x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}\] \[=F^{T}\Big{(}F^{T}P_{k+2}^{i}(x_{k+2})x_{k+2}^{\star}+Q^{i}(x_{k+1 })x_{k+1}^{\star}+\] \[\quad(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star},\ \ \...,\ \ \ x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}\] \[=F^{T}\Big{(}F^{T}P_{k+2}^{i}(x_{k+2})\Lambda_{k+1}^{-1}Fx_{k+1}^{ \star}+Q^{i}(x_{k+1})x_{k+1}^{\star}+\] \[\quad(I_{N}\otimes x_{k+1}^{T})\] \[\quad\Big{[}x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star},\ \ \...,\ \ x_{k+1}^{\star T}\frac{\partial Q^{i}(x_{k+1})}{ \partial x_{k+1}^{\star}}x_{k+1}^{\star}\Big{]}^{T}\Big{)}\] \[=F^{T}P_{k+1}^{i}(x_{k+1})x_{k+1}^{\star},\]
which agrees with (25), thereby completing the induction process.
|
2310.16981 | Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A
Comprehensive Benchmark | Synthetic data serves as an alternative in training machine learning models,
particularly when real-world data is limited or inaccessible. However, ensuring
that synthetic data mirrors the complex nuances of real-world data is a
challenging task. This paper addresses this issue by exploring the potential of
integrating data-centric AI techniques which profile the data to guide the
synthetic data generation process. Moreover, we shed light on the often ignored
consequences of neglecting these data profiles during synthetic data generation
-- despite seemingly high statistical fidelity. Subsequently, we propose a
novel framework to evaluate the integration of data profiles to guide the
creation of more representative synthetic data. In an empirical study, we
evaluate the performance of five state-of-the-art models for tabular data
generation on eleven distinct tabular datasets. The findings offer critical
insights into the successes and limitations of current synthetic data
generation techniques. Finally, we provide practical recommendations for
integrating data-centric insights into the synthetic data generation process,
with a specific focus on classification performance, model selection, and
feature selection. This study aims to reevaluate conventional approaches to
synthetic data generation and promote the application of data-centric AI
techniques in improving the quality and effectiveness of synthetic data. | Lasse Hansen, Nabeel Seedat, Mihaela van der Schaar, Andrija Petrovic | 2023-10-25T20:32:02Z | http://arxiv.org/abs/2310.16981v1 | # Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark
###### Abstract
Synthetic data serves as an alternative in training machine learning models, particularly when real-world data is limited or inaccessible. However, ensuring that synthetic data mirrors the complex nuances of real-world data is a challenging task. This paper addresses this issue by exploring the potential of integrating data-centric AI techniques which profile the data to guide the synthetic data generation process. Moreover, we shed light on the often ignored consequences of neglecting these data profiles during synthetic data generation -- despite seemingly high statistical fidelity. Subsequently, we propose a novel framework to evaluate the integration of data profiles to guide the creation of more representative synthetic data. In an empirical study, we evaluate the performance of five state-of-the-art models for tabular data generation on eleven distinct tabular datasets. The findings offer critical insights into the successes and limitations of current synthetic data generation techniques. Finally, we provide practical recommendations for integrating data-centric insights into the synthetic data generation process, with a specific focus on classification performance, model selection, and feature selection. This study aims to reevaluate conventional approaches to synthetic data generation and promote the application of data-centric AI techniques in improving the quality and effectiveness of synthetic data.
## 1 Introduction
Machine learning has become an essential tool across various industries, with high-quality data representative of the real world being a crucial component for training accurate models that generalize [1; 2; 3]. In cases where data access is restricted or insufficient synthetic data has emerged as a viable alternative [4; 5]. The purpose of synthetic data is to generate training data that closely mirrors real-world data, enabling the effective use of models trained on synthetic data on real data. Moreover, synthetic data is used for a variety of different uses, including privacy (i.e. to enable data sharing, [6; 7]), competitions [8] fairness [9; 10], and improving downstream models [11; 12; 13; 14].
However, generating high-quality synthetic data that adequately captures the nuances of real-world data, remains a challenging task. Despite significant strides in synthetic data with generative models, they sometimes fall short in replicating the complex subtleties of real-world data, particularly when
dealing with messy, mislabeled or biased data. For instance, regarding fairness, [15] have shown that such gaps can lead to flawed conclusions and unreliable predictions on subpopulations, thereby restricting the practical usage of synthetic data.
The ability of synthetic data to capture the subtle complexities of real-world data is crucial, particularly in contexts where these issues might surface during deployment. Inaccurate synthetic data can not only hamper predictive performance but also result in improper model selection and distorted assessments of feature importance, thereby undermining the overall analysis. These challenges underscore the need to improve the synthetic data generation process.
One might wonder, surely, assessing fidelity via statistical divergence metrics [5; 17] such as MMD or KL-divergence is sufficient? We argue that such high-level metrics tell one aspect of the story. An overlooked dimension is the characterization of data profiles. In this approach, samples are assigned to profiles that reflect their usefulness for an ML task. Specifically, samples are typically categorized as easy to learn, ambiguous, or hard, which are proxies for data issues like mislabeling, data shift, or under-represented samples. In methods such as Data-IQ [18]and Data Maps [19] this is referred to as "groups of the data", however, we use "data profiles" for clarity.
While this issue has been well-studied for supervised tasks, it has not been explored in the generative setting. We highlight the issues of overlooking such data profiling in Figure 1, where despite near-perfect statistical fidelity (inverse KLD), we show the differing proportion of 'easy' examples identified in synthetic data generated by different generative models trained on the Adult dataset [16]. On the other hand, this data profile correlates with downstream classification performance.
To address this challenge of the representativeness of synthetic data, we explore the potential of integrating data-centric AI techniques and their insights to improve synthetic data generation. Specifically, we propose characterizing individual samples in the data and subsequently using the different data profiles to guide synthetic data generation in a way that better reflects the real world. While our work is applicable across modalities, our primary focus is tabular data given the ubiquity of tabular data in real-world applications [20; 21], with approximately 79% of data scientists working with it on a daily basis, vastly surpassing other modalities [22].
**Contributions:**
1 _Conceptually_, we delve into the understanding of fundamental properties of data with respect to synthetic data generation, casting light on the impact of overlooking data characteristics and profiles when generating synthetic data.
2 _Technically_, we bring the idea of data profiles in data-centric AI to the generative setting and explore its role in guiding synthetic data generation. We introduce a comprehensive framework to facilitate this evaluation across various generative models.
3 _Empirically_, we benchmark the performance of five state-of-the-art models for tabular data generation on eleven distinct tabular datasets and investigate the practical integration of data-centric profiles to guide synthetic data generation. We provide practical recommendations for enhancing synthetic data generation, particularly with respect to the 3 categories of synthetic data utility (i) predictive performance, (ii) model selection and (iii) feature selection.
Figure 1: Measures of data-centric profiling (A) better reflect the downstream performance of generative models (B) than measures of statistical fidelity (C). Assessed on the Adult dataset [16] using five different generative models A) Proportion easy examples in the generated datasets identified by Cleanlab, B) Supervised classification performance when training on synthetic, testing on real data, C) Inverse KL-divergence. (bn=bayesian_network)
We hope the insights of this paper spur the reconsideration of the conventional approaches to synthetic data generation and encourage experimentation on how data-centric AI could help synthetic data generation deliver on its promises.
## 2 Related work
This work engages with synthetic data generation and data characterization in data-centric AI.
**Synthetic Tabular Data Generation** uses generative models to create artificial data that mimics the structure and statistical properties of real data, and is particularly useful when real data is scarce or inaccessible [4; 23; 24]. In the following, we describe the broad classes of synthetic data generators applicable to the tabular domain. _Bayesian networks_[25] are a traditional approach for synthetic data generation, that represent probabilistic relationships using graphical models. _Conditional Tabular Generative Adversarial Network_ (CTGAN) [26] is a deep learning method for modeling tabular data. It uses a conditional GAN to capture complex non-linear relationships. _The Tabular Variational Autoencoder_ (TVAE) is a specialized Variational Autoencoder, designed for the tabular setting [26].
_Normalizing flow models_[27; 28] provide an invertible mapping between data and a known distribution, and offer a flexible approach for generative modeling. Diffusion models, which have gained recent popularity, offer a different paradigm for generative modeling. _TabDDPM_[29] is a diffusion model proposed for the tabular data domain. In this work, we evaluate these classes of generative models, considering various aspects of synthetic data evaluation.
**Evaluation of Synthetic Data** is a multifaceted task [17; 30], involving various dimensions such as data utility with respect to a downstream task, statistical fidelity, and privacy preservation [17; 30]. In this work, we focus on dimensions that impact model performance and hence, while important, we do not consider privacy aspects.
_(1) Data Utility:_ refers to how well the synthetic data can be used in place of the real data for a given task. Typically, utility is assessed by training predictive models on synthetic data and testing them on real data [4; 5; 17; 31; 32]. We posit that beyond matching predictive performance, we also desire to retain both _model ranking_ and _feature importance_ rankings. We empirically assess these aspects in Sec. 5.
_(2) Statistical Fidelity:_ measures the degree of similarity between synthetic data and the original data in terms of statistical properties, including the marginal and joint distributions of variables [17]. Statistical tests like the Kolmogorov-Smirnov test or divergence measures like Maximum Mean Discrepancy, KL-divergence or Wasserstein distance are commonly used for evaluation[5; 17].
Beyond statistical measures, the concept of data characterization and profiles of easy and hard examples has emerged in data-centric AI. These profiles serve as proxies for understanding real-world data, which is often not "perfect" due to mislabeling, noise, etc.The impact of these profiles on supervised models has been demonstrated in the data-centric literature [18; 33; 34]. In Figure 1, we show that data profiles are similarly important in the generative setting. Despite having almost perfect statistical fidelity, different generative models capture different data profiles (e.g. proportion of easy examples), leading to varying data utility as reflected in different performances. Consequently, we propose considering data profiles as an important dimension when creating synthetic data. We describe current data-centric methods that can facilitate this next.
**Data profiling** is a growing field in Data-Centric AI that aims to evaluate the characteristics of data samples for specific tasks [35; 36]. In the supervised learning setting, various methods have been developed to assign samples to groups, which we refer to as data profiles. These profiles, such as easy, ambiguous, or hard, often reveal issues such as mislabeling, data shifts, or under-represented groups [18; 19; 33; 34; 37; 38]. Various mechanisms are used in different methods for data characterization. For example, Cleanlab [34] models relationships between instances based on confidence, while Data Maps and Data-IQ [18] assess uncertainty through training dynamics. However, many existing methods are designed for neural networks and are unsuitable for non-differentiable models like XGBoost, which are commonly used in tabular data settings. Consequently, we focus on data characterization approaches such as Cleanlab, Data-IQ, and Data Maps which are more applicable to tabular data.
## 3 Framework
We propose a unified framework that enables a thorough assessment of generative models and the synthetic data they produce. The framework encompasses the evaluation of the synthetic data based on established statistical fidelity metrics as well as three distinct tasks encompassing _data utility_.
At a high level, the framework proceeds as visualized in Figure 2. The dataset is first divided into a training set, denoted as \(\mathbb{D}_{\text{train}}\), and a testing set, denoted as \(\mathbb{D}_{\text{test}}\). A duplicate of the training set (\(\mathbb{D}_{\text{train}}\)) undergoes a data-centric preprocessing approach to produce a preprocessed version of the training set, referred to as \(\mathbb{D}_{\text{train}}^{\text{pre}}\). A generative model is then trained on \(\mathbb{D}_{\text{train}}^{\text{pre}}\). This model is used to synthesize a new dataset, denoted as \(\mathbb{D}_{\text{synth}}\). The synthetic dataset is further processed using a data-centric postprocessing method to create the final synthetic dataset, denoted as \(\mathbb{D}_{\text{synth}}^{\text{post}}\). Various classification models \(\mathcal{M}\) are then trained separately on the original training set \(\mathbb{D}_{\text{train}}\) and the synthetic dataset \(\mathbb{D}_{\text{synth}}^{\text{post}}\). These models are then applied to the testing set \(\mathbb{D}_{\text{test}}\) for evaluation. The generative and supervised models are evaluated for their statistical fidelity and data utility. The focus is on classification performance, model selection, and feature selection. Further details on each process within the framework can be found in the following subsections.
### Data profiling
Assume we have a dataset \(\mathcal{D}=\{(x^{n},y^{n})\mid n\in[N]\}\). Data profiling aims to assign a score \(S\) to samples in \(\mathcal{D}\). On the basis of the score, a threshold \(\tau\) is typically used to assign a specific profile group \(p^{n}\in\mathcal{P}\), where \(\mathcal{P}=\{Easy,Ambiguous,Hard\}\), to each sample \(x^{n}\).
Our framework supports three recent data characterization methods applicable to tabular data: Cleanlab [34], Data-IQ [18], and Data Maps [33]. They primarily differ based on their scoring mechanism \(S\). For instance, Cleanlab [34] uses the predicted probabilities as \(S\) to estimate a noise matrix, Data-IQ [18] uses confidence and aleatoric uncertainty as \(S\), and Data Maps uses confidence and variability (epistemic uncertainty) as \(S\). Moreover, they differ in the categories in the data profiles derived from their scores. Data-IQ and Data Maps provide three categories of data profiles: _easy_; samples that are easy for the model to predict, _ambiguous_; samples with high uncertainty, and _hard_;
Figure 2: Illustration of the framework’s process flow. _Data partitioning_: the dataset is divided into a training set, \(\mathbb{D}_{\text{train}}\), and a testing set, \(\mathbb{D}_{\text{test}}\). _Data profiling_: a data-centric preprocessing approach is employed on a duplicate of \(\mathbb{D}_{\text{train}}\) to produce \(\mathbb{D}_{\text{train}}^{\text{pre}}\). A generative model, trained on \(\mathbb{D}_{\text{train}}^{\text{pre}}\), is then utilized to synthesize a dataset, \(\mathbb{D}_{\text{synth}}\), which is further processed using a data-centric postprocessing method to achieve the final synthetic dataset, \(\mathbb{D}_{\text{synth}}^{\text{post}}\). _Classification model training_: various classification models are separately trained on \(\mathbb{D}_{\text{train}}\) and \(\mathbb{D}_{\text{synth}}^{\text{post}}\) and applied to \(\mathbb{D}_{\text{test}}\). _Evaluation_: the generative and supervised models are appraised for their statistical fidelity and utility, focusing on classification accuracy, model selection, and feature selection.
samples that are wrongly predicted with high certainty. Cleanlab provides two profiles: _easy_ and _hard_ examples.
We create data profiles with these three data-centric methods to evaluate their value for improving synthetic data generation, both _ex-ante_ and _post hoc_. We use the profiles in multiple preprocessing and postprocessing strategies applied to the original and synthetic data.
#### 3.1.1 Preprocessing
Preprocessing strategies are applied to the original data \(\mathbb{D}_{\text{train}}\) i.e., before feeding to a generative model. We investigate three preprocessing strategies: (1) baseline, which applies no processing, and simply feeds the \(\mathbb{D}_{\text{train}}\) to the generative model. (2) easy_hard: Let \(S_{c}:\mathbb{D}_{\text{train}}\rightarrow[0,1]\) denote the scoring function for data-centric method \(c\). We partition \(\mathbb{D}_{\text{train}}\) into \(\mathbb{D}_{\text{train}}^{\text{easy}}\) and \(\mathbb{D}_{\text{train}}^{\text{hard}}\) data profiles using a threshold \(\tau\), such that \(\mathbb{D}_{\text{train}}^{\text{easy}}=\{x^{n}\mid S_{c}(x^{n})\leq\tau\}\) and \(\mathbb{D}_{\text{train}}^{\text{hard}}=\{x^{n}\mid S_{c}(x^{n})>\tau\}\). (3) Analogously, easy_ambiguous_hard 2 splits the \(\mathbb{D}_{\text{train}}\) on the easy, ambiguous, and hard examples. Further details are provided in Appendix A.
Footnote 2: Only defined for data-centric methods that identify ambiguous examples, i.e. Data-IQ and Data Maps.
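As a concrete illustration, the easy_hard split amounts to thresholding the per-sample scores. The sketch below is ours (not taken from the framework's codebase) and assumes that `scores` holds the scores \(S_{c}\) produced by the chosen data-centric method and that `tau` is the profile threshold \(\tau\):

```python
import numpy as np
import pandas as pd

def easy_hard_split(d_train: pd.DataFrame, scores: np.ndarray, tau: float) -> dict:
    """Partition the training data into 'easy' and 'hard' profiles by score."""
    easy = d_train[scores <= tau].reset_index(drop=True)   # S_c(x) <= tau
    hard = d_train[scores > tau].reset_index(drop=True)    # S_c(x) > tau
    return {"easy": easy, "hard": hard}
```

The easy_ambiguous_hard variant is analogous, with an additional ambiguous group.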
#### 3.1.2 Generative model
We utilize the data profiles identified in the preprocessing step to train a specific generative model for each data segment, e.g. easy and hard examples separately. Let \(G:\mathbb{D}_{\text{train}}\rightarrow\mathbb{D}_{\text{synth}}\) denote the generative model trained on a dataset \(\mathbb{D}_{\text{train}}\), which produces synthetic dataset \(\mathbb{D}_{\text{synth}}\). In our framework, for each data profile in preprocessed dataset \(\mathbb{D}_{\text{train}}^{\text{pre}}\), we train a separate generative model. We generate data using each generative model and the combined synthetic data is then \(\mathbb{D}_{\text{synth}}=G_{\text{easy}}(\mathbb{D}_{\text{train}}^{\text{ easy}})\cup G_{\text{hard}}(\mathbb{D}_{\text{train}}^{\text{hard}})\), with generation preserving the ratio of the data segments, to reflect their distribution in the initial dataset.
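A minimal sketch of this per-profile generation step is given below. It uses scikit-learn's `GaussianMixture` purely as a stand-in generator (the framework itself uses the tabular generative models reviewed in Sec. 2), assumes purely numeric features, and samples from each profile in proportion to its share of the training data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def generate_per_profile(profiles: dict, n_synth: int, seed: int = 0) -> np.ndarray:
    """Fit one generator per data profile and sample proportionally to profile size."""
    total = sum(len(x) for x in profiles.values())
    parts = []
    for name, x in profiles.items():
        # stand-in generator; any tabular generative model could be plugged in here
        gen = GaussianMixture(n_components=min(5, len(x)), random_state=seed).fit(x)
        n_profile = int(round(n_synth * len(x) / total))
        parts.append(gen.sample(n_profile)[0])  # sample() returns (samples, component_ids)
    return np.vstack(parts)
```

In practice the class label would be generated as well, e.g. by treating it as an additional column or by conditioning the generator on it.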
#### 3.1.3 Postprocessing
We define postprocessing strategies as processing applied to the synthetic data after data generation but before supervised model training and task evaluation. We denote the set of postprocessing strategies as \(\mathcal{H}\). Given the synthetic dataset \(\mathbb{D}_{\text{synth}}\), each postprocessing strategy \(h\in\mathcal{H}\) maps \(\mathbb{D}_{\text{synth}}\) to a processed dataset \(\mathbb{D}_{\text{synth}}^{\text{post}}=h(\mathbb{D}_{\text{synth}})\). Two different postprocessing strategies were used: baseline: This is the identity function \(h_{\text{baseline}}(\mathbb{D}_{\text{synth}})=\mathbb{D}_{\text{synth}}\). no_hard: We remove the hard examples from the synthetic data, \(\mathbb{D}_{\text{synth}}^{\text{post}}=\mathbb{D}_{\text{synth}}\setminus\{x _{\text{synth}}^{n}\mid S_{c}(x_{\text{synth}}^{n})>\tau\}\), where \(x_{\text{synth}}^{n}\) is generated synthetic data.
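To make no_hard concrete, the sketch below scores synthetic samples by their out-of-fold self-confidence, a rough stand-in for a Cleanlab-style label-quality score (the actual data-centric methods are more involved), and drops the lowest-scoring rows; the classifier and the value of `tau` are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def drop_hard_synthetic(x_synth: np.ndarray, y_synth: np.ndarray,
                        tau: float = 0.25, seed: int = 0):
    """Remove synthetic rows whose assigned label gets low out-of-fold confidence."""
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    proba = cross_val_predict(clf, x_synth, y_synth, cv=5, method="predict_proba")
    self_confidence = proba[np.arange(len(y_synth)), y_synth]  # assumes labels coded 0..K-1
    keep = self_confidence > tau
    return x_synth[keep], y_synth[keep]
```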
### Classification model training
The training procedure of the supervised classification models \(\mathcal{M}\) comprises two steps, each minimizing a cost function \(\mathcal{L}\). (1) Train on the real data, i.e., \(\mathcal{M}_{\text{real}}=\arg\min\,\mathcal{L}(\mathcal{M}(\mathbb{D}_{\text{train}}))\). (2) Train on synthetic data, i.e., \(\mathcal{M}_{\text{syn}}=\arg\min\,\mathcal{L}(\mathcal{M}(\mathbb{D}_{\text{synth}}^{\text{post}}))\). We then compare the utility of \(\mathcal{M}_{\text{real}}\) and \(\mathcal{M}_{\text{syn}}\) in the evaluation procedure. Our framework supports any machine learning model \(\mathcal{M}\) compatible with the Scikit-Learn API.
### Evaluation
Finally, the framework includes automated evaluation tools for the generated synthetic data to evaluate the effect of pre- and postprocessing strategies, across datasets, random seeds, and generative models. To thoroughly assess our framework, we establish evaluation metrics that extend beyond statistical fidelity, encapsulating data utility through the inclusion of three tasks.
#### 3.3.1 Statistical fidelity
The quality of synthetic data is commonly assessed using divergence measures between the real and synthetic data [5; 30]. Our framework allows for this assessment using widely adopted methods including inverse KL-Divergence [5], Maximum Mean Discrepancy [39], Wasserstein distance, as
well as Alpha-precision and Beta-Recall [30]. However, as shown in Figure 1, such measures can only tell one aspect of the story. Indeed, despite all generative models providing near-perfect statistical fidelity based on divergence measures, the synthetic data captures the nuances of real data differently, as reflected in the varying data profiles (e.g. proportion easy examples). This motivates us to also assess the data utility and the potential implications of this variability.
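As a simplified, self-contained illustration of such marginal fidelity measures (the framework itself relies on established implementations of the metrics listed above), the sketch below computes per-feature Wasserstein distances and a histogram-based inverse-KL score; the binning scheme and the \(1/(1+\text{KL})\) mapping are our assumptions:

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance

def marginal_fidelity(real: np.ndarray, synth: np.ndarray, bins: int = 20):
    """Mean per-feature Wasserstein distance and inverse-KL score (higher = more similar)."""
    w_dists, inv_kls = [], []
    for j in range(real.shape[1]):
        lo = min(real[:, j].min(), synth[:, j].min())
        hi = max(real[:, j].max(), synth[:, j].max())
        p, _ = np.histogram(real[:, j], bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(synth[:, j], bins=bins, range=(lo, hi), density=True)
        kl = entropy(p + 1e-9, q + 1e-9)  # KL(real || synth) over binned marginals
        inv_kls.append(1.0 / (1.0 + kl))
        w_dists.append(wasserstein_distance(real[:, j], synth[:, j]))
    return float(np.mean(w_dists)), float(np.mean(inv_kls))
```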
#### 3.3.2 Data utility
Three specific metrics were employed to assess data utility: classification performance, model selection, and feature selection.
**Classification performance** To explore the usefulness of the generated synthetic data for model training, we use the train-on-synthetic, test-on-real paradigm to fit a set of machine learning models \(\mathcal{M}\) on the synthetic data, \(\mathbb{D}_{\text{synth}}\), and subsequently evaluate their performance on a real, held-out test dataset, \(\mathbb{D}_{\text{test}}\). By using \(\mathbb{D}_{\text{test}}\) we avoid potential issues from data leakage that might occur from an evaluation on the real training sets, \(\mathbb{D}_{\text{train}}\).
**Model selection** When using synthetic data for model selection, it is imperative that the ranking of classification models \(\mathcal{M}\) trained on synthetic data aligns closely with the ranking of classification models trained on the original data. To evaluate this, we first train a set of \(\mathcal{M}_{\text{real}}\) on \(\mathbb{D}_{\text{train}}\) and evaluate their classification performance on \(\mathbb{D}_{\text{test}}\). Next, we fit the same set of \(\mathcal{M}_{\text{synth}}\) on \(\mathbb{D}_{\text{synth}}^{\text{post}}\) and evaluate their classification performance on \(\mathbb{D}_{\text{test}}\). The rank-ordering of the \(\mathcal{M}_{\text{real}}\) in terms of a performance metric (e.g. AUROC) is compared with the rank-ordering of the \(\mathcal{M}_{\text{synth}}\) using Spearman's Rank Correlation.
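Once the per-model AUROCs on \(\mathbb{D}_{\text{test}}\) have been collected, the ranking comparison itself is a one-liner; the dictionaries in the usage example below are placeholders:

```python
from scipy.stats import spearmanr

def model_selection_agreement(auroc_real: dict, auroc_synth: dict) -> float:
    """Spearman correlation between model rankings induced by real vs. synthetic training data."""
    names = sorted(auroc_real)  # fix a common model ordering
    return spearmanr([auroc_real[m] for m in names],
                     [auroc_synth[m] for m in names])[0]  # [0] = correlation coefficient

# e.g. model_selection_agreement({"xgboost": 0.87, "rf": 0.85, "logreg": 0.78},
#                                {"xgboost": 0.82, "rf": 0.84, "logreg": 0.75})
```

The same comparison applies unchanged to feature-importance rankings for the feature selection task described next.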
**Feature selection** Feature selection is a crucial task in data analysis and machine learning, aiming to identify the most relevant and informative features that contribute to a model's predictive power. To evaluate the utility of using synthetic data for feature selection, a similar approach is followed as for model selection. First, a model \(\mathcal{M}_{\text{real}}\) with inherent feature importance (e.g. random forest) is trained on \(\mathbb{D}_{train}\) and the rank-ordering of the most important features is determined. This ranking is then compared to the rank ordering of the most important features obtained from the same model type \(\mathcal{M}_{\text{synth}}\) trained on \(\mathbb{D}_{\text{synth}}^{\text{post}}\) using Spearman's Rank Correlation.
### Extending the framework
The framework presented in this paper is intentionally designed to be modular and highly adaptable, allowing for seamless integration of various generative models, pre- and postprocessing strategies, and diverse tasks. This flexibility enables researchers and practitioners to explore and evaluate e.g. different combinations of generative models alongside various pre- and post-processing strategies. Further, the framework is extensible, allowing for the incorporation of additional generative models, novel processing methods, and emerging tasks, ensuring that it remains up-to-date and capable of accommodating future advancements in the field of synthetic data generation.
## 4 Experiments
To demonstrate the framework, we conduct multiple experiments aiming to answer the following subquestions, which together address the overarching question: **Can data-centric ML improve synthetic data generation?**
**Q1:** Is statistical fidelity sufficient to quantify the utility of synthetic data?
**Q2:** Can we trust results from supervised classification models trained on synthetic data to generalize to real data?
**Q3**: Can data-centric approaches be integrated with synthetic data generation to create more realistic synthetic data?
**Q4**: Does the level of label noise influence the effect of data-centric processing for synthetic data generation?
All code for running the analysis and creating tables and graphs can be found at the following links: [https://github.com/HLasse/data-centric-synthetic-data](https://github.com/HLasse/data-centric-synthetic-data) or [https://github.com/vanderschaarlab/data-centric-synthetic-data](https://github.com/vanderschaarlab/data-centric-synthetic-data).
### Data
We assess our framework on a filtered version of the Tabular Classification from Numerical features benchmark suite from [40]. To reduce computational costs, we filter the benchmark suite to only include datasets with fewer than 100,000 samples and fewer than 50 features, which reduced the number of datasets from 16 to 11. The datasets span several domains and contain a highly varied number of samples and features (see Appendix B for more details). Notably, the datasets have been preprocessed to meet a series of criteria to ensure their suitability for benchmarking tasks. For instance, the datasets have at least 5 features and 3000 samples, are not too easy to classify, have missing values removed, have balanced classes, and only contain low cardinality features.
### Generative models
To cover a representative sample of the space of generative models, we evaluate 5 different models with different architectures as reviewed in Sec. 2: Bayesian networks (bayesian_network), conditional tabular generative adversarial network (ctgan), tabular variational autoencoder (tvae), normalizing flow (nflow), diffusion model for tabular data (ddpm).
### Supervised classification model training
The variety of models employed in our study includes: extreme gradient boosting (xgboost), random forest, logistic regression, decision tree, k-nearest neighbors, support vector classifier, Gaussian naive Bayes, and multi-layer perceptron. It is the ranking of these models that is evaluated for the model selection task. Given the large number of models, we restrict the classification results in the main paper to be from the xgboost model. Feature selection results are reported for xgboost models. Classification and feature selection results for the other classifiers can be found in Appendix C.
### Experimental procedure
**Main study** The experimental process followed the structure outlined in Sec. 3 and Figure 2, repeated for each of the 11 datasets, 5 generative models, 10 random seeds, and all permutations of pre- and postprocessing methods for each of the three data-centric methods (Cleanlab, Data-IQ, and Data Maps). We comprehensively evaluate the results across classification performance, model selection, feature selection, and statistical fidelity. In total, we fit more than **8000** generative models.
**Impact of label noise** To assess the impact of label noise on the effect of data-centric pre- and postprocessing, we carried out an analogous experiment to the main study, on the Covid mortality dataset [41]. Here, we introduce label noise to \(\mathbb{D}_{\text{train}}\) before applying any processing. We study the impact of adding [0, 2, 4, 6, 8, 10] percent label noise.
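The corruption step itself is straightforward; the sketch below assumes binary 0/1 labels and flips a fixed, randomly chosen fraction of them:

```python
import numpy as np

def add_label_noise(y: np.ndarray, noise_pct: float, seed: int = 0) -> np.ndarray:
    """Flip the labels of `noise_pct` percent of the samples (binary 0/1 labels assumed)."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    n_flip = int(round(len(y) * noise_pct / 100))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_noisy[idx] = 1 - y_noisy[idx]
    return y_noisy

# noise levels studied: [add_label_noise(y, pct) for pct in (0, 2, 4, 6, 8, 10)]
```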
All results reported in the main paper use Cleanlab as the data-centric method for both pre- and postprocessing. This decision was made to ensure clarity in the reported results and because Cleanlab was found to outperform Data-IQ and Data Maps in a simulated benchmark. For the benchmark of the data-centric methods as well as results using Data-IQ and Data Maps, we refer to Appendix C.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Generative Model & Classification & Model Selection & Feature Selection & Statistical fidelity \\ \hline Real data & 0.866 (0.855, 0.877) & 1.0 & 1.0 & 1.0 \\ \hline bayesian_network & 0.622 (0.588, 0.656) & 0.155 (0.055, 0.264) & 0.091 (-0.001, 0.188) & 0.998 (0.998, 0.999) \\ ctgan & 0.797 (0.769, 0.823) & **0.519** (0.457, 0.579) & 0.63 (0.557, 0.691) & 0.979 (0.967, 0.987) \\ ddpm & **0.813** (0.781, 0.844) & 0.508 (0.446, 0.573) & 0.635 (0.546, 0.718) & 0.846 (0.668, 0.972) \\ nflow & 0.737 (0.713, 0.761) & 0.354 (0.288, 0.427) & 0.415 (0.34, 0.485) & 0.975 (0.968, 0.981) \\ tvae & 0.792 (0.764, 0.818) & 0.506 (0.436, 0.565) & **0.675** (0.63, 0.722) & 0.966 (0.953, 0.978) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summarised performance for the baseline condition (no data-centric processing) across all datasets. Classification is measured by AUROC, model selection and feature selection by Spearman’s Rank Correlation, and statistical fidelity by inverse KL divergence. Numbers show bootstrapped mean and 95% CI. The best-performing model by task is in bold.
#### 4.4.1 Evaluation metrics
The classification performance is evaluated in terms of the area under the receiver operating characteristic curve (AUROC), model selection performance as Spearman's Rank Correlation between the ranking of the supervised classification models trained on the original data and the supervised classification models trained on the synthetic data, and feature selection performance as Spearman's Rank Correlation between the ranking of features in an xgboost model trained on the original data and an xgboost model trained on the synthetic data.
## 5 Results
**Statistical fidelity is insufficient for evaluating generative models.** Measures of statistical fidelity fail to capture variability in performance on downstream tasks, as shown in Table 1. Surprisingly, the worst-performing model across all tasks (the Bayesian network) has the highest inverse KL-divergence of all synthetic datasets, which should indicate a strong resemblance to the original data. Conversely, the lowest inverse KL-divergence is found for ddpm, which is one of the consistently best-performing models.
**Practical guidance:** The benchmarking results illustrate that when selecting a generative model, even if the statistical fidelity appears similar, different generative models may perform differently on the 3 downstream tasks (classification, model selection, feature selection). Hence, beyond statistical fidelity, practitioners should understand which aspect is most crucial for their purpose to guide selection of the generative model.
**Different generative models for different tasks.** As shown in Table 1, training on synthetic data leads to a marked decline in classification performance compared to real data, as well as highly differing model and feature rankings. The effect differs largely by generative model, where CTGAN, TabDDPM, and TVAE most closely retain the characteristics of the real data. No one model is superior across all tasks. Specifically, TabDDPM achieves the highest classification performance, CTGAN performs best in model selection, and TVAE excels in feature selection. These findings indicate that one should test a range of generative models and consider the trade-offs in data utility before publishing synthetic data. Additionally, Appendix C reveals that although there are slight differences in performance based on the supervised model type, the overall pattern of results remains consistent across generative models.
**Practical guidance:** No generative model reigns supreme (highlighting the inherent challenge of synthetic tabular data). However, across the tabular datasets studied, we show that _CTGAN_ and _TVAE_ offer the best trade-off between high statistical fidelity and strong performance on the three downstream tasks.
**Data-centric methods can improve the utility of synthetic data.** The addition of data-centric pre- and postprocessing strategies has a generally positive effect across all tasks as seen in Table 2 and Figure 3, despite resulting in lower statistical fidelity. In terms of classification performance, 13 out of 15 evaluations showed a net improvement, with gains up to 1.64% better classification
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Generative Model}} & \multicolumn{1}{c}{Preprocessing} & \multicolumn{1}{c}{Poverseessing} & \multicolumn{1}{c}{Classification} & \multicolumn{1}{c}{Model Selection} & \multicolumn{1}{c}{Feature Selection} & \multicolumn{1}{c}{Statistical Fidelity} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{Sturges} & \multicolumn{1}{c}{Sturges} & \multicolumn{1}{c}{Sturges} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \\ \hline bayesian\_network & baseline & no\_hard & 0.31 (4.483,5.980) & \(\uparrow\) & 2.18 (38.237) & \(\uparrow\) & 5.22 (16.602,4.761) & \(\downarrow\) & 0.007 (0.054,0.046) \\ easy\_hard & baseline & no\_hard & 0.35 (4.586,6.00) & \(\uparrow\) & 2.18 (31.99,151.42) & \(\uparrow\) & 9.79 (10.17,29.61) & \(\uparrow\) & 0.013 (0.042,0.027) \\ & & no\_hard & 0.8 (4.44,6.34) & \(\uparrow\) & 2.25 (31.39,90.28) & \(\uparrow\) & 9.39 (13.22,20.179) & \(\downarrow\) & 0.023 (0.067,0.021) \\ \hline ctgan & baseline & no\_hard & 1.23 (2.03,4.44) & \(\uparrow\) & 1.27 (1.92, 142.28) & \(\uparrow\) & 3.66 (4.88, 2.61) & \(\uparrow\) & 0.054 (1.423,0.805) \\ easy\_hard & baseline & 0.78 (4.14, 2.88) & \(\uparrow\) & 1.17 (13.14, 1.47) & \(\uparrow\) & 0.18 (-10.66, 1.103) & \(\uparrow\) & 0.065 (1.061,0.084) \\ no\_hard & no\_hard & 0.37 (-29.36, 3.8) & \(\uparrow\) & 1.47 (23.13, 26.68) & \(\uparrow\) & 0.18 (-10.04, 9.43) & \(\uparrow\) & 0.119 (-1.218, 0.701) \\ \hline ddpm & baseline & no\_hard & 0.63 (3.55, 3.42) & \(\uparrow\) & 6.35 (2.87, 1.86) & \(\uparrow\) & 1.67 (1.607, 1.337) & \(\uparrow\) & 0.165 (1.942, 1.5964) \\ easy\_hard & baseline & no\_hard & 0.68 (2.84, 2.864) & \(\uparrow\) & 5.07 (5.87, 20.31) & \(\uparrow\) & 1.58 (-10.92, 1.329) & 0.366 (1.683, 1.8393) \\ no\_hard & no\_hard & 1.32 (-20.46, 4.68) & \(\uparrow\) & 5.98 (7.27, 18.63) & \(\uparrow\) & 4.19 (-6.95, 16.16) & \(\uparrow\) & 0.284 (-16.721, 14.567) \\ \hline nflow & baseline & no\_hard & 1.05 (-2.43, 3.97) & \(\uparrow\) & 1.22 (16.688, 1.841) & \(\uparrow\) & 5.82 (1.128, 2.268) & \(\uparrow\) & 0.058 (-0.743, 0.688) \\ easy\_hard & baseline & 0.76 (-2.64, 3.81) & \(\uparrow\) & 2.37 (15.26, 2.03) & \(\uparrow\) & 6.93 (3.01, 2.565) & \(\uparrow\) & 0.022 (-0.752, 0.625) \\ no\_hard & no\_hard & 1.64 (-1.76, 4.81) & \(\uparrow\) & 4.66 (-13.67, 22.84) & \(\uparrow\) & 7.28 (-11.54, 25.01) & \(\uparrow\) & 0.052 (-0.705, 0.654) \\ \hline tvae & baseline & no\_hard & 1.1 (-2.39, 3.83) & \(\uparrow\) & 3.58 (-18.44, 11.01) & \(\uparrow\) & 0.13 (4.64, 6.38) & \(\uparrow\) & 0.053 (-1.418, 1.113) \\ easy\_hard & baseline & 0.25 (-3.46, 3.6) & \(\uparrow\) & 3.53 (-19.45, 4.75) & \(\uparrow\) & 6.7 (6.03, 0.37) & \(\uparrow\) & 0.24 (-0.941, 1.296) \\ no\_hard & no\_hard & 0.71 (-2.59, 3.95) & \(\uparrow\) & 0.11 (-13.79, 14.38) & \(\uparrow\) & 3.41 (-3.38, 9.899) & \(\uparrow\) & 0.199 (-0.929, 1.285) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Percentage increase in performance from baseline, i.e., no data-centric pre- or postprocessing (as seen in Table 1), per generative model for each pre- and postprocessing strategy, averaged across all datasets and seeds.
performance compared to not processing the data. Model selection exhibited more pronounced effects, particularly for Bayesian networks, which demonstrated the greatest variability overall. While the model selection results for TVAE decreased following data-centric processing, the other generative models saw positive effects, with performance improvements ranging from 4.66% to 92%. Regarding feature selection, 12 out of 15 evaluations demonstrated a net benefit of data-centric processing, resulting in improvements of 3.41% to 9.79% in Spearman's rank correlation. The benefit of data-centric processing was found to be statistically significant for classification and feature selection (see Appendix D for details).
**Practical guidance:** Before releasing a synthetic dataset, practitioners are advised to apply the data-centric methods studied in this paper as an add-on. This will ensure enhanced utility of the synthetic data in terms of classification performance, model selection, and feature selection.
**Data-centric processing provides benefits across levels of label noise.** Data-centric pre- and postprocessing lead to consistently higher performance across tasks for all augmented datasets. As shown in Figure 4, the magnitude of the effect of data-centric processing decreases with higher levels of label noise, particularly above 8%, although this effect is not statistically significant. Even though the level of statistical fidelity decreased marginally by applying data-centric processing, data-centric processing led to statistically significant increases in performance on all three tasks.
**Practical guidance:** Fitting generative models on noisy "real-world" data can lead to sub-optimal downstream performance despite seemingly high statistical fidelity. Data-centric methods are especially useful at reasonable levels of label noise, typically below 8%. Therefore, we recommend their application when fitting generative models on real-world datasets.
### Limitations and future work
Our work delves into the performance-driven aspects of synthetic data generation, focusing primarily on data utility and statistical fidelity, particularly within the realm of tabular data. While tabular data is highly diverse and contains many intricacies, we also recognize several directions for further exploration. Our current framework, while rooted in tabular data, hints at the broader applicability to other data types such as text and images. Accommodating our framework to these modalities would require further work on modality-specific tasks. For instance, images or text do not possess a direct analog to feature selection. Such disparities underscore the need for a bespoke benchmarking methodology tailored to each specific data type.
Figure 3: Average performance across all datasets for each generative model by pre- and postprocessing method.
## 6 Conclusion
This research provides novel insights into integrating data-centric AI techniques into synthetic tabular data generation. First, we introduce a framework to evaluate the integration of data profiles for creating more representative synthetic data. Second, we confirm that statistical fidelity alone is insufficient for assessing synthetic data's utility, as it may overlook important nuances impacting downstream tasks. Third, the choice of generative model significantly influences synthetic data quality and utility. Last, incorporating data-centric methods consistently improves the utility of synthetic data across varying levels of label noise. Our study demonstrates the potential of data-centric AI techniques to enhance synthetic data's representation of real-world complexities, opening avenues for further exploration at their intersection.
## Acknowledgements
This work was partially supported by DeiC National HPC (g.a. 2022-H2-10). NS is supported by the Cystic Fibrosis Trust. LH was supported by a travel grant from A.P. Moller Fonden til Legevidenskabens Fremme.
Figure 4: Performance of a single generative model (TabDDPM) on the Covid mortality dataset with varying levels of label noise across the pre- and postprocessing conditions. |
2305.12951 | Cross-functional Analysis of Generalisation in Behavioural Learning | In behavioural testing, system functionalities underrepresented in the standard evaluation setting (with a held-out test set) are validated through controlled input-output pairs. Optimising performance on the behavioural tests during training (behavioural learning) would improve coverage of phenomena not sufficiently represented in the i.i.d. data and could lead to seemingly more robust models. However, there is the risk that the model narrowly captures spurious correlations from the behavioural test suite, leading to overestimation and misrepresentation of model performance -- one of the original pitfalls of traditional evaluation. In this work, we introduce BeLUGA, an analysis method for evaluating behavioural learning considering generalisation across dimensions of different granularity levels. We optimise behaviour-specific loss functions and evaluate models on several partitions of the behavioural test suite controlled to leave out specific phenomena. An aggregate score measures generalisation to unseen functionalities (or overfitting). We use BeLUGA to examine three representative NLP tasks (sentiment analysis, paraphrase identification and reading comprehension) and compare the impact of a diverse set of regularisation and domain generalisation methods on generalisation performance. | Pedro Henrique Luz de Araujo, Benjamin Roth | 2023-05-22T11:54:19Z | http://arxiv.org/abs/2305.12951v1 |
###### Abstract
In behavioural testing, system functionalities underrepresented in the standard evaluation setting (with a held-out test set) are validated through controlled input-output pairs. Optimising performance on the behavioural tests during training (_behavioural learning_) would improve coverage of phenomena not sufficiently represented in the i.i.d. data and could lead to seemingly more robust models. However, there is the risk that the model narrowly captures spurious correlations from the behavioural test suite, leading to overestimation and misrepresentation of model performance--one of the original pitfalls of traditional evaluation.
In this work, we introduce BeLUGA, an analysis method for evaluating behavioural learning considering generalisation across dimensions of different granularity levels. We optimise behaviour-specific loss functions and evaluate models on several partitions of the behavioural test suite controlled to leave out specific phenomena. An aggregate score measures generalisation to unseen functionalities (or overfitting). We use BeLUGA to examine three representative NLP tasks (sentiment analysis, paraphrase identification and reading comprehension) and compare the impact of a diverse set of regularisation and domain generalisation methods on generalisation performance.1
Footnote 1: Our code is available on [https://github.com/peluz/beluga](https://github.com/peluz/beluga).
## 1 Introduction
The standard paradigm for evaluating natural language processing (NLP) models is to compute correctness metrics on a held-out test set from the same distribution as the training set (Linzen, 2020). If the test set is large and diverse, this may be a good measure of average performance, but it fails to account for the worst-case performance (Sagawa et al., 2020). By exploiting correlations in the training data, models work well in most cases but fail in those where the correlations do not hold (Niven and Kao, 2019; McCoy et al., 2019; Zellers et al., 2019), leading to overestimation of model performance in the wild (Ribeiro et al., 2020). Furthermore, standard evaluation does not indicate the sources of model failure (Wu et al., 2019) and disregards important model properties such as fairness (Ma et al., 2021).
Behavioural testing (Rottger et al., 2021; Ribeiro et al., 2020) has been proposed as a complementary evaluation framework, where model capabilities are systematically validated by examining its responses to specific stimuli. This is done through test suites composed of input-output pairs where the input addresses specific linguistic or social phenomena and the output is the expected behaviour given the input. The suites can be seen as controlled challenge datasets (Belinkov and Glass, 2019) aligned with human intuitions about how the agent should perform the task (Linzen, 2020).
In this work, we understand test suites as a hierarchy of functionality classes, functionalities, and test cases (Rottger et al., 2021). _Functionality classes_ stand at the highest level, capturing system capabilities like fairness, robustness and negation. They are composed of _functionalities_ that target finer-grained facets of the capability. For example, a test suite for sentiment analysis can include the functionality "negation of positive statement should be negative" inside the Negation class. Finally, each functionality is composed of _test cases_, the input-output pairs used to validate model behaviour. For the functionality above, an example test case could be the input "The movie was not good" and the expected output "negative", under the assumption that the non-negated sentence is positive.
Though behavioural test suites identify model weaknesses, the question of what to do with such feedback is not trivial. While test suite creators
argue that these tools can aid the development of better models (Rottger et al., 2021) and lead to improvements in the tested tasks (Ribeiro et al., 2020), how to act on the feedback concretely is not discussed.
One common approach is fine-tuning on data targeting the failure cases, which previous work has shown can improve performance in these same cases (Malon et al., 2022; Liu et al., 2019; McCoy et al., 2019). But this practice overlooks the possibility of models overfitting to the covered tests and consequently overestimates model performance. Even if one takes care to split the behavioural test cases into disjoint sets for training and testing, models can still leverage data artifacts such as word-label co-occurrences to achieve seemingly good performance that is over-optimistic and does not align with out-of-distribution (OOD) performance.
This creates the following dilemma: either one does not use the feedback from test suites for model development and loses the chance to improve model trustworthiness; or one uses it to address model shortcomings (e.g. by training on similar data)--and run the risk of overfitting to the covered cases. Prior work (Luz de Araujo and Roth, 2022; Rozen et al., 2019) has addressed this in part by employing structured cross-validation, where a model is trained and evaluated on different sets of phenomena. However, the analyses have been so far restricted to limited settings where only one task, training configuration and test type is examined. Moreover, these studies have not examined how different regularisation and generalisation mechanisms influence generalisation.
In this paper, we introduce BeLUGA, a general method for _Be_havioural _L_earning _U_nified _G_eneralisation Analysis. By training and evaluating on several partitions of test suite and i.i.d. data, we measure model performance on unseen phenomena, such as held-out functionality and functionality classes. This structured cross-validation approach yields scores that better characterise model performance on uncovered behavioural tests than the ones obtained by over-optimistic i.i.d. evaluation.
Our main contributions are:
**(1)** We design BeLUGA, an analysis method to measure the effect of behavioural learning. It handles different kinds of behaviour measures, operationalised by labelled or perturbation-based tests. To that end we propose loss functions that optimise the expected behaviour of three test types: minimum functionality, invariance and directional expectation tests (Ribeiro et al., 2020).
**(2)** We extend previous work on behavioural learning by exploring two training configurations in addition to fine-tuning on suite data (Luz de Araujo and Roth, 2022; Liu et al., 2019): training on a mixture of i.i.d. and suite data; and training on i.i.d. data followed by fine-tuning on the data mixture.
**(3)** We design aggregate metrics that measure generalisation across axes of different levels of granularity. From finer to coarser: generalisation within functionalities, to different functionalities and to different functionality classes.
**(4)** We compare the generalisation capabilities of a range of regularisation techniques and domain generalisation algorithms for three representative NLP tasks (sentiment analysis, paraphrase identification and reading comprehension).
This work is not a recommendation to train on behavioural test data, but an exploration of what happens if data targeting the same set of phenomena as the tests is used for model training. We find that naive optimisation and evaluation do yield over-optimistic scenarios: fine-tuning on suite data results in large improvements for seen functionalities, though at the same time i.i.d. data and unseen functionalities performance can degrade, with some models adopting degenerate solutions that pass the tests but lead to catastrophic i.i.d. performance. Including i.i.d. as well as test suite samples was found to prevent this, mitigating i.i.d. performance degradation-- with even improvements in particular cases--and yielding higher scores for unseen functionalities as well.
## 2 Background
### Behavioural testing
We consider a joint distribution \(p\) over an input space \(\mathcal{X}\), corresponding label space \(\mathcal{Y}\) and assume access to an i.i.d. dataset \(\mathcal{D}\) composed of \(n\) examples \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\sim p\}_{i=1}^{n},\;\mathbf{x}_{i}\in \mathcal{X},y_{i}\in\mathcal{Y}\), split into disjoint train, validation and test sets \(\mathcal{D}_{\text{train}}\), \(\mathcal{D}_{\text{val}}\) and \(\mathcal{D}_{\text{test}}\). We also assume access to a behavioural test suite \(\mathcal{T}\), composed of \(m\) test cases \(\{t_{i}\}_{i=1}^{m}\) partitioned into \(n_{\text{func}}\) disjoint functionalities \(\{\mathcal{F}_{i}\}_{i=1}^{n_{\text{func}}}\). Each functionality belongs to one of \(n_{\text{class}}\) functionality classes \(\{\mathcal{C}_{i}\}_{i=1}^{n_{\text{class}}}\), such that \(n_{\text{class}}<n_{\text{func}}<m\).
Each test case belongs to a functionality, \(t\in\mathcal{F}_{i}\)
and is described by a pair \((X,b)\), where \(X\) is a list with \(|X|\) inputs. The expectation function \(b:\mathbb{R}^{|X|\times|\mathcal{Y}|}\to\{0,1\}\) takes a model's predictions for all \(|X|\) inputs and outputs \(1\) if the model behaves as expected and \(0\) otherwise.
The above taxonomy, by Rottger et al. (2021), describes the hierarchy of concepts in behavioural testing: functionality classes correspond to coarse properties (e.g., negation) and are composed of finer-grained functionalities; these assess facets of the coarse property (e.g., negation of positive sentiment should be negative) and are operationalised by individual input-output pairs, the test cases. These concepts align with two of the generalisation axes we explore in this work, functionality and functionality class generalisation (§ 3.3).
We additionally follow the terminology created by Ribeiro et al. (2020), which defines three test types, according to their evaluation mechanism: Minimum Functionality, Invariance and Directional Expectation tests. When used for model training, each of them requires a particular optimisation strategy (§ 3.2).
**Minimum Functionality test (MFT)**: MFTs are input-label pairs designed to check specific system behaviour: \(X\) has only one element, \(\mathbf{x}\), and the expectation function checks if the model output given \(\mathbf{x}\) is equal to some label \(y\). Thus, they have the same form as the i.i.d. examples.
**Invariance test (INV)**: INVs are designed to check for invariance to certain input transformations. The input list \(X\) consists of an original input \(\mathbf{x}_{o}\) and \(|X|-1\) perturbed inputs \((\mathbf{x}_{i})_{i=1}^{|X|-1}\) obtained by applying label-preserving transformations on \(\mathbf{x}_{o}\). Given model predictions \(\hat{Y}:=[\mathbf{\hat{y}}_{i}]_{i=0}^{|X|-1}\) for all inputs in \(X\), then \(b(\hat{Y})=1\) if:
\[\operatorname{argmax}\mathbf{\hat{y}}_{0}=\operatorname{argmax}\mathbf{\hat{y }}_{i}\,, \tag{1}\]
for all \(i\in\{1,\ldots,|X|-1\}\). That is, the expectation function checks if model predictions are invariant to the perturbations.
**Directional Expectation test (DIR)**: The form for input \(X\) is similar to the INV case, but instead of label-preserving transformations, \(\mathbf{x}_{o}\) is perturbed in a way that changes the prediction in a task-dependent predictable way, e.g. prediction confidence should not increase. Given a task-dependent comparison function \(\delta:\mathbb{R}^{|\mathcal{Y}|}\times\mathbb{R}^{|\mathcal{Y}|}\to\{0,1\}\), \(b(\hat{Y})=1\) if:
\[\delta\left(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{1}\right)\wedge\delta\left(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{2}\right)\wedge\cdots\wedge\delta\left(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{|X|-1}\right)\,. \tag{2}\]
For example, if the expectation is that prediction confidence should not increase, then \(\delta(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{i})=1\) if \(\mathbf{\hat{y}}_{i}[c*]\leq\mathbf{\hat{y}}_{0}[c*]\), where \(c*:=\operatorname{argmax}\mathbf{\hat{y}}_{0}\) and \(\mathbf{\hat{y}}[c*]\) denotes the predicted probability for class \(c*\).
**Evaluation**: Given a model family \(\Theta\) and a loss function \(\ell:\Theta\times(\mathcal{X}\times\mathcal{Y})\to\mathcal{R}_{+}\), the standard learning goal is to find the model \(\hat{\theta}\in\Theta\) that minimises the loss over the training examples:
\[\hat{\theta}:=\operatorname*{argmin}_{\theta\in\Theta}\frac{1}{|\mathcal{D}_{ \text{train}}|}\sum_{(\mathbf{x},y)\in\mathcal{D}_{\text{train}}}\ell(\theta, (\mathbf{x},y))\,. \tag{3}\]
Then, general model correctness is evaluated using one or more metrics over the examples in \(\mathcal{D}_{\text{test}}\). The model can be additionally evaluated using test suite \(\mathcal{T}\), which gives a finer-grained performance measure over each functionality.
### Behavioural learning
In behavioural learning, samples from \(\mathcal{T}\) are used for training in a two-step approach: a pre-trained language model (PLM) (Devlin et al., 2019) is first fine-tuned on examples from \(\mathcal{D}_{\text{train}}\), and then fine-tuned further on examples from \(\mathcal{T}\)(Luz de Araujo and Roth, 2022; Liu et al., 2019).
## 3 BeLUGA
BeLUGA is an analysis method to estimate how training on test suite data impacts generalisation to seen and unseen phenomena. Given an i.i.d. dataset \(\mathcal{D}\), a test suite \(\mathcal{T}\), and a training configuration \(\chi\) (§ 3.1), BeLUGA trains on several controlled splits of suite data and outputs scores that use performance on unseen phenomena as a proxy measure (§ 3.3) for generalisation.
That is, BeLUGA can be formalised as a function \(f\) parametrised by \(\mathcal{D}\), \(\mathcal{T}\), and \(\chi\) that returns a set of metrics \(M\):
\[M=f(\mathcal{D},\mathcal{T},\chi)\,. \tag{4}\]
By including measures of performance on i.i.d. data and on seen and unseen sets of phenomena, these metrics offer a more comprehensive and realistic view of how the training data affected model capabilities and shed light on failure cases that would be obfuscated by other evaluation schemes.
Below we describe the examined training configurations (§ 3.1), how BeLUGA optimises the expected behaviours encoded in \(\mathcal{T}\) (§ 3.2), how it estimates generalisation (§ 3.3), and the metrics it outputs (§ 3.4).
### Training configurations
We split \(\mathcal{T}\) into three disjoint splits \(\mathcal{T}_{\text{train}}\), \(\mathcal{T}_{\text{val}}\) and \(\mathcal{T}_{\text{test}}\), such that each split contains cases from all functionalities, and define four training configurations regarding whether and how we use \(\mathcal{T}_{\text{train}}\).
**IID**: The standard training approach that uses only i.i.d. data for training (\(\mathcal{D}_{\text{train}}\)). It serves as a baseline to contrast performance of the three following _suite-augmented_ configurations.
**IID\(\rightarrow\)T**: A two-step approach where first the PLM is fine-tuned on \(\mathcal{D}_{\text{train}}\) and then on \(\mathcal{T}_{\text{train}}\). This is the setting examined in prior work on behavioural learning (§ 2.2), which has been shown to lead to deterioration of i.i.d. dataset (\(\mathcal{D}_{\text{test}}\)) performance (Luz de Araujo and Roth, 2022).
To assess the impact of including i.i.d. samples in the behavioural learning procedure, we define two additional configurations:
**IID\(+\)T**: The PLM is fine-tuned on a mixture of suite and i.i.d. data (\(\mathcal{D}_{\text{train}}\cup\mathcal{T}_{\text{train}}\)).
**IID\(\rightarrow\)(IID\(+\)T)**: The PLM is first fine-tuned on \(\mathcal{D}_{\text{train}}\) and then on \(\mathcal{D}_{\text{train}}\cup\mathcal{T}_{\text{train}}\).
By contrasting the performance on \(\mathcal{D}_{\text{test}}\) and \(\mathcal{T}_{\text{test}}\) of these configurations, we assess the impact of behavioural learning on both i.i.d. and test suite data distributions.
### Behaviour optimisation
Since each test type describes and expects different behaviour, BeLUGA optimises type-specific loss functions:
**MFT**: As MFTs are formally equivalent to i.i.d. data (input-label pairs), they are treated as such: we randomly divide them into mini-batches and optimise the cross-entropy between model predictions and labels.
**INV**: We randomly divide INVs into mini-batches composed of unperturbed-perturbed input pairs. For each training update, we randomly select one perturbed version (of several possible) for each original input.2 We enforce invariance by minimising the cross-entropy between model predictions over perturbed-unperturbed input pairs:
Footnote 2: Note that any amount of perturbed inputs could be used, but using only one allows fitting more test cases in a mini-batch if its size is kept constant.
\[\ell(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{i}):=-\sum_{k=1}^{c}\mathbf{\hat{ y}}_{0}[k]\cdot\log\left(\mathbf{\hat{y}}_{i}[k]\right)\,, \tag{5}\]
where \(c\) is the number of classes. This penalises models that are not invariant to the perturbations (Eq. 1), since the global minimum of the loss is the point where the predictions are the same.
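A minimal PyTorch sketch of this consistency loss is given below. Whether the prediction on the original input receives gradients or is treated as a fixed soft target is not specified by Eq. 5 alone, so the `detach` here is our assumption:

```python
import torch
import torch.nn.functional as F

def inv_loss(logits_orig: torch.Tensor, logits_pert: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between predictions on the original and the perturbed input (Eq. 5)."""
    p_orig = F.softmax(logits_orig, dim=-1).detach()   # soft target from the unperturbed input
    log_p_pert = F.log_softmax(logits_pert, dim=-1)
    # minimised when the perturbed prediction matches the original prediction
    return -(p_orig * log_p_pert).sum(dim=-1).mean()
```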
**DIR**: Batch construction follows the INV procedure: the DIRs are randomly divided into mini-batches of unperturbed-perturbed input pairs, the unperturbed input is randomly sampled during training.
The optimisation objective depends on the comparison function \(\delta\). For a given \(\delta\), we define a corresponding error measure \(\epsilon_{\delta}:\mathbb{R}^{|\mathcal{Y}|}\times\mathbb{R}^{|\mathcal{Y}|} \rightarrow[0,1]\). For example, if the expectation is that prediction confidence should not increase, then \(\epsilon_{\delta}(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{i})=\max\left(0, \mathbf{\hat{y}}_{i}[c*]-\mathbf{\hat{y}}_{0}[c*]\right)\). This way, \(\epsilon_{\delta}\) increases with confidence increase and is zero otherwise.
We minimise the following loss:
\[\ell(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{i},\delta):=-\log\left(1-\epsilon _{\delta}(\mathbf{\hat{y}}_{0},\mathbf{\hat{y}}_{i})\right)\,. \tag{6}\]
Intuitively, if \(\epsilon_{\delta}=0\), the loss is zero. Conversely, the loss increases with the error measure (as \(\epsilon_{\delta}\) gets closer to 1).
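For the expectation that prediction confidence should not increase, Eq. 6 can be sketched in PyTorch as follows; the small constant inside the logarithm, which keeps the loss finite as \(\epsilon_{\delta}\) approaches 1, is our addition:

```python
import torch
import torch.nn.functional as F

def dir_loss_conf_not_up(logits_orig: torch.Tensor, logits_pert: torch.Tensor) -> torch.Tensor:
    """DIR loss (Eq. 6) for the expectation 'prediction confidence should not increase'."""
    p_orig = F.softmax(logits_orig, dim=-1)
    p_pert = F.softmax(logits_pert, dim=-1)
    c_star = p_orig.argmax(dim=-1, keepdim=True)         # class predicted on the original input
    eps = torch.clamp(p_pert.gather(-1, c_star) - p_orig.gather(-1, c_star), min=0.0)
    return -torch.log(1.0 - eps + 1e-9).mean()           # zero when the expectation is satisfied
```

Other comparison functions only change how the error measure \(\epsilon_{\delta}\) is computed.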
### Cross-functional analysis
Test suites have limited coverage: the set of covered functionalities is only a subset of the phenomena of interest: \(\mathcal{T}\subset\mathcal{P}\), where \(\mathcal{P}\) is the hypothetical set of all functionalities. For example, the test suite for sentiment analysis provided by Ribeiro et al. (2020) has a functionality that tests for invariance to people's names--the sentiment of the sentence "I do not like Mary's favourite movie" should not change if "Mary" is changed to "Maria". However, the equally valid functionality that tests for invariance to organisations' names is not in the suite. Training and evaluating on the same set of functionalities can lead to overestimating the performance: models that overfit to covered functionalities but fail catastrophically on non-covered ones.
BeLUGA computes several measures of model performance that address generalisation from \(\mathcal{T}_{\text{train}}\) to \(\mathcal{T}_{\text{test}}\) and from \(\mathcal{T}_{\text{train}}\) to \(\mathcal{P}\). We do not assume access to test cases for non-covered phenomena, so we use held-out sets of functionalities as proxies for generalisation to \(\mathcal{P}\).
**I.i.d. data**: To score performance on \(\mathcal{D}_{\text{test}}\), we use the canonical evaluation metric for the specific _dataset_. We detail the metrics used for each examined _task3_ in Section 4.1. We denote the i.i.d. score as \(s_{iid}\).
**Test suite data**: We compute the pass rate \(s_{\mathcal{F}_{i}}\) of each functionality \(\mathcal{F}_{i}\in\mathcal{T}\):
\[s_{\mathcal{F}_{i}}:=\frac{1}{|\mathcal{F}_{\text{test}_{i}}|}\sum_{(X,b)\in \mathcal{F}_{\text{test}_{i}}}b(\hat{Y})\,, \tag{7}\]
where \(\hat{Y}\) are the model prediction given the inputs in \(X\). In other words, the pass rate is simply the proportion of successful test cases.
We vary the set of functionalities used for training and testing to construct different evaluation scenarios:
**Unseen evaluation**: No test cases are seen during training. This is equivalent to the use of behavioural test suites without behavioural learning: we compute the pass rates using the predictions of an IID model.
**Seen evaluation**: \(\mathcal{T}_{\text{train}}\) is used for training. We compute the pass rate on \(\mathcal{T}_{\text{test}}\) using the predictions of suite-augmented models. This score measures how well the fine-tuning procedure generalises to test cases of _covered_ functionalities: even though all functionalities are seen during training, the particular test cases evaluated (\(\{t|t\in\mathcal{T}_{\text{test}}\}\)) are not the same as the ones used for training (\(\mathcal{T}_{\text{train}}\cap\mathcal{T}_{\text{test}}=\emptyset\)).
**Generalisation to non-covered phenomena**: To estimate performance on non-covered phenomena, we construct an \(l\)-subset partition of the set of functionalities \(U:=\{U_{i}\}_{i=1}^{l}\). For each \(U_{i}\), we use \(\mathcal{T}_{\text{train}}\setminus U_{i}\) for training and then compute the pass rates for \(\mathcal{T}_{\text{test}}\cap U_{i}\): \(\{s_{\mathcal{F}\text{unseen}}|\mathcal{F}\in U_{i}\}\). That is, we fine-tune the model on a set of functionalities and evaluate it on the remaining (unseen) functionalities. Since \(U\) is a partition of \(\mathcal{T}\), by the end of the procedure there will be a pass rate for each functionality (a construction sketch for these partitions follows the list below).
We consider three different partitions, depending on the considered generalisation proxy:
**(1)** Functionality generalisation: a partition with \(n_{\text{func}}\) subsets, each corresponding to a held-out functionality: \(U_{i}=\{\mathcal{F}_{i}\},\ i\in\{1,\dots,n_{\text{func}}\}\). We consider this a proxy of performance on non-covered functionalities: \(\mathcal{F}\in\mathcal{P}\setminus\mathcal{T}\).
**(2)** Functionality class generalisation: a partition with \(n_{\text{class}}\) subsets, each corresponding to a held-out functionality class: \(U_{i}=\{\mathcal{C}_{i}\},\ i\in\{1,\dots,n_{\text{class}}\}\). We consider this to be a proxy of performance on non-covered functionality classes: \(\mathcal{C}\subset\mathcal{P}\setminus\mathcal{T}\).
**(3)** Test type generalisation: a partition with three subsets, each corresponding to a held-out test type: \(U_{i}=\{\mathcal{F}|\mathcal{F}\ \text{has\ type}\,i\},\ i\in\{\text{MFT}, \text{INV},\text{DIR}\}\). We use this measure to examine generalisation across different test types.
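The sketch below enumerates the first two partitions (the test-type partition follows the same pattern); `func_to_class` is a hypothetical mapping from each functionality name to its functionality class:

```python
def generalisation_splits(func_to_class: dict) -> list:
    """Leave-one-functionality-out and leave-one-class-out splits over the suite."""
    splits = []
    all_funcs = set(func_to_class)
    for func in all_funcs:                      # functionality generalisation
        splits.append({"held_out": {func}, "train_on": all_funcs - {func}})
    for cls in set(func_to_class.values()):     # functionality class generalisation
        held = {f for f, c in func_to_class.items() if c == cls}
        splits.append({"held_out": held, "train_on": all_funcs - held})
    return splits
```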
### Metrics
For model comparison purposes, BeLUGA outputs the average pass rate (the arithmetic mean of the \(n_{\text{func}}\) pass rates) as the aggregated metric for test suite correctness. Since one of the motivations for behavioural testing is its fine-grained results, BeLUGA also reports the individual pass rates.
In total, BeLUGA computes five aggregated suite scores, each corresponding to an evaluation scenario:
\(s_{\mathcal{T}\text{standard}}\): The baseline score of a model only trained on i.i.d. data: if the other scores are lower, then fine-tuning on test suite data degraded overall model performance.
\(s_{\mathcal{T}\text{seen}}\): Performance on seen functionalities. This score can give a false sense of model performance since it does not account for model overfitting to the seen functionalities: spurious correlations within functionalities and functionality classes can be exploited to get deceivingly high scores.
\(s_{\mathcal{T}\text{func}}\): Measure of generalisation to unseen functionalities. It is a more realistic measure of model quality, but since functionalities correlate within a functionality class, the score may still offer a false sense of quality.
\(s_{\mathcal{T}\text{class}}\): Measure of generalisation to unseen functionality classes. This is the most challenging generalisation setting, as the model cannot exploit correlations within functionalities and functionality classes.
\(s_{\mathcal{T}\text{type}}\): Measure of generalisation to unseen test types. This score is of a more technical interest: it can offer insights into how different training signals affect each other (e.g. if training with MFTs supports performance on INVs and vice-versa).
**Comprehensive generalisation score**: Since performance on i.i.d. data and passing the behavioural tests are both important, BeLUGA provides the harmonic mean of the aggregated pass rates and the i.i.d. score as an additional metric for model comparison:
\[\text{G}:=2\frac{s_{\mathcal{T}}\cdot s_{iid}}{s_{\mathcal{T}}+s_{iid}}\,. \tag{8}\]
There are five G scores (G\({}_{\text{standard}}\), G\({}_{\text{seen}}\), G\({}_{\text{func}}\), G\({}_{\text{class}}\) and G\({}_{\text{type}}\)), each corresponding to plugging either \(s_{\mathcal{T}\text{standard}}\), \(s_{\mathcal{T}\text{seen}}\), \(s_{\mathcal{T}\text{func}}\), \(s_{\mathcal{T}\text{class}}\) or \(s_{\mathcal{T}\text{type}}\) into Eq. 8.
This aggregation makes implicit importance assignments explicit: on the one hand, the harmonic mean ensures that both i.i.d. and suite performance are important due to its sensitivity to low scores; on the other, different phenomena are weighted differently, as i.i.d. performance has a bigger influence on the final score than each single functionality pass rate.
## 4 Experiments on cross-functional analysis
### Tasks
We experiment with three classification tasks that correspond to the test suites made available4 by Ribeiro et al. (2020): sentiment analysis (SENT), paraphrase identification (PARA) and reading comprehension (READ).5 Tables 1 and 2 summarise and show representative examples from the i.i.d. and test suite datasets, respectively.
Footnote 4: [https://github.com/marcotcr/checklist](https://github.com/marcotcr/checklist).
Footnote 5: These test suites were originally proposed for model evaluation. Every design choice we describe regarding optimisation (e.g. loss functions and label encodings) is ours.
**Sentiment analysis (SENT)**: As the i.i.d. dataset for sentiment analysis, we use the Stanford Sentiment Treebank (SST-2) (Socher et al., 2013). We use the version made available in the GLUE benchmark (Wang et al., 2018), where the task is to assign binary labels (negative/positive sentiment) to sentences. The test set labels are not publicly available, so we split the original validation set in half as our validation and test sets. The canonical metric for the dataset is accuracy.
The SENT suite contains 68k MFTs, 9k DIRs and 8k INVs. It covers functionality classes such as semantic role labelling (SRL), named entity recognition (NER) and fairness. The MFTs were template-generated, while the DIRs and INVs were either template-generated or obtained from perturbing a dataset of unlabelled airline tweets. Therefore, there is a domain mismatch between the i.i.d. data (movie reviews) and the suite data (tweets about airlines).
There are also label mismatches between the two datasets: the suite contains an additional class for neutral sentiment and the MFTs have the "not negative" label, which admits both positive and neutral predictions. We follow Ribeiro et al. (2020) and consider predictions with probability of positive sentiment within \([1/3,2/3]\) as neutral.6
Footnote 6: When training, we encode “neutral” and “not negative” labels as \([1/2,1/2]\) and \([1/3,2/3]\), respectively. One alternative is to create two additional classes for such cases, but this would prevent the use of the classification head fine-tuned on i.i.d. data (which is annotated with binary labels).
There are two types of comparison for DIRs, regarding either sentiment or prediction confidence. In the former case, the prediction for a perturbed input is expected to be either not more negative or not more positive when compared with the prediction for the original input. In the latter, the confidence of the original prediction is expected to either not increase or not decrease, regardless of the sentiment. For example, when adding an intensifier ("really", "very") or a reducer ("a little", "somewhat"), the confidence of the original prediction should not decrease in the first case and not increase in the second. On the other hand, if a perturbation adds a positive or negative phrase to the original input, the positive probability should not go down (up) for the first (second) case.
More formally, each prediction \(\hat{\mathbf{y}}\) is a two-dimensional vector where the first and second components are the confidence for negative (\(\hat{\mathbf{y}}[0]\)) and positive (\(\hat{\mathbf{y}}[1]\)) sentiment, respectively. Let \(c*\) denote the component with highest confidence in the _original_ prediction: \(c*:=\operatorname*{argmax}\hat{\mathbf{y}}_{0}\). Then, the comparison function \(\delta\) can take one of four forms (not more negative, not more positive, not more confident and not less confident):
\[\delta_{\uparrow p}(\hat{\mathbf{y}}_{0},\hat{\mathbf{y}}_{i})=1 \text{ if }\hat{\mathbf{y}}_{i}[0]\leq\hat{\mathbf{y}}_{0}[0]\] \[\delta_{\uparrow n}(\hat{\mathbf{y}}_{0},\hat{\mathbf{y}}_{i})=1 \text{ if }\hat{\mathbf{y}}_{i}[1]\leq\hat{\mathbf{y}}_{0}[1]\] \[\delta_{\downarrow c}(\hat{\mathbf{y}}_{0},\hat{\mathbf{y}}_{i})=1 \text{ if }\hat{\mathbf{y}}_{i}[c*]\leq\hat{\mathbf{y}}_{0}[c*]\] \[\delta_{\uparrow c}(\hat{\mathbf{y}}_{0},\hat{\mathbf{y}}_{i})=1 \text{ if }\hat{\mathbf{y}}_{i}[c*]\geq\hat{\mathbf{y}}_{0}[c*]\]
Each comparison function corresponds to an error measure \(\epsilon\):
\[\epsilon_{\delta_{\uparrow p}}(\hat{\mathbf{y}}_{0},\hat{\mathbf{ y}}_{i}):=\max\left(0,\hat{\mathbf{y}}_{i}[0]-\hat{\mathbf{y}}_{0}[0]\right)\] \[\epsilon_{\delta_{\uparrow n}}(\hat{\mathbf{y}}_{0},\hat{\mathbf{ y}}_{i}):=\max\left(0,\hat{\mathbf{y}}_{i}[1]-\hat{\mathbf{y}}_{0}[1]\right)\] \[\epsilon_{\delta_{\downarrow c}}(\hat{\mathbf{y}}_{0},\hat{\mathbf{ y}}_{i}):=\max\left(0,\hat{\mathbf{y}}_{i}[c*]-\hat{\mathbf{y}}_{0}[c*]\right)\] \[\epsilon_{\delta_{\uparrow c}}(\hat{\mathbf{y}}_{0},\hat{\mathbf{ y}}_{i}):=\max\left(0,\hat{\mathbf{y}}_{0}[c*]-\hat{\mathbf{y}}_{i}[c*]\right)\]
We compute the \(\max\) because only test violations should be penalised.
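For illustration, the four comparison functions and their error measures translate directly into code; the sketch below is ours (not the reference implementation) and assumes each prediction is a two-component vector of negative/positive confidence.

```python
import numpy as np

def dir_test(y0: np.ndarray, yi: np.ndarray, kind: str):
    """Return (passed, error) for one DIR expectation.

    y0 is the original prediction and yi the prediction for a perturbed input,
    both given as [p_negative, p_positive]. kind is one of "not_more_negative",
    "not_more_positive", "not_more_confident", "not_less_confident"."""
    c_star = int(np.argmax(y0))  # most confident component of the original prediction
    if kind == "not_more_negative":        # delta_{up p}
        err = max(0.0, float(yi[0] - y0[0]))
    elif kind == "not_more_positive":      # delta_{up n}
        err = max(0.0, float(yi[1] - y0[1]))
    elif kind == "not_more_confident":     # delta_{down c}
        err = max(0.0, float(yi[c_star] - y0[c_star]))
    elif kind == "not_less_confident":     # delta_{up c}
        err = max(0.0, float(y0[c_star] - yi[c_star]))
    else:
        raise ValueError(kind)
    return err == 0.0, err
```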
**Paraphrase identification (PARA)**: We use Quora Question Pairs (QQP) (Iyer et al., 2017) as the i.i.d. dataset. It is composed of question pairs from the website Quora with annotation for
whether a pair of questions is semantically equivalent (duplicates or not duplicates). The test set labels are not available, hence we split the original validation set into two sets for validation and testing. The canonical metrics are accuracy and the \(F_{1}\) score of the duplicate class.
The PARA suite contains 46k MFTs, 13k DIRs and 3k INVs, with functionality classes such as co-reference resolution, logic and negation. All MFTs are template generated,7 while the INVs and DIRs are obtained from perturbing QQP data.
Footnote 7: The test cases from functionality “Order does matter for asymmetric relations” (e.g. Q1: Is Rachel faithful to Christian?, Q2: Is Christian faithful to Rachel?) were originally labelled as duplicates. This seems to be unintended, so we change their label to not duplicates.
The DIRs are similar to MFTs: perturbed question pairs are either duplicate or not duplicate. For example, if two questions mention the same location and the perturbation changes the location in one of them, then the new pair is guaranteed not to be semantically equivalent. Thus, the comparison function \(\delta\) checks if the perturbed predictions correspond to the expected label; the original prediction is not used for evaluation. So during training, we treat them as MFTs: we construct mini-batches of perturbed samples and corresponding labels and minimise the cross-entropy between predictions and labels.
**Reading comprehension (READ)**: The i.i.d. dataset for READ is the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), composed of excerpts from Wikipedia articles with crowdsourced questions and answers. The task is to, given a text passage (context) and a question about it, extract the context span that contains the answer. Once again, the test set labels are not publicly available and we repeat our splitting approach for SENT and PARA. The canonical metrics are exact string match (EM) (percentage of predictions that match ground truth answers exactly) and the
| Dataset | Example (label) |
| --- | --- |
| SST-2 | A sensitive, moving, brilliantly constructed work. (Positive) |
| | By far the worst movie of the year. (Negative) |
| QQP | Q1: Who is king of sports? Q2: Who is the king? (Not duplicate) |
| | Q1: How much does it cost to build an basic Android app in India? Q2: How much does it cost to build an Android app in India? (Duplicate) |
| SQuAD | C: Solar energy may be used in a water stabilisation pond to treat waste [...] although algae may produce toxic chemicals that make the water unusable. Q: What is a reason why the water from a water stabilisation pond may be unusable? (algae may produce toxic chemicals) |

Table 1: Examples for each i.i.d. dataset. The number of train/validation/test samples is 67k/436/436, 363k/20k/20k and 87k/5k/5k for SST-2, QQP and SQuAD, respectively.
| Task | Example input (expected behaviour) | Class—Functionality (type) |
| --- | --- | --- |
| SENT | I used to think this is an incredible food. (Not more confident) | Temporal—Prepending “I used to think” to a statement should not raise prediction confidence (DIR) |
| | Hannah is a Christian \(\rightarrow\) Buddhist model. (Same prediction) | Fairness—Prediction should be invariant to religion identifiers (INV) |
| PARA | Q1: Are tigers heavier than computers? Q2: What is heavier, computers or tigers? (Duplicate) | SRL—Changing comparison order preserves question semantics (MFT) |
| | Q1: What are the best venture capital firms in India \(\rightarrow\) Albania? Q2: Which is the first venture capital firm in India? (Not duplicate) | NER—Questions referring to different locations are not duplicate (DIR) |
| READ | C: Somewhere around a billion years ago, a free-living cyanobacterium entered an early eukaryotic cell [...] Q: What kind \(\rightarrow\) Who kind of cell did cyanobacteria enter long ago? (Same prediction) | Robustness—Typos should not change prediction (INV) |
| | C: Maria is an intern. Austin is an editor. Q: Who is not an intern? (Austin) | Negation—Negations in question matter for prediction (MFT) |

Table 2: Examples for each test suite. We color-code perturbations as red/green for deletions/additions. The number of train/validation/test samples is 89k/44k/44k, 103k/51k/51k and 35k/17k/17k for the SENT, PARA and READ test suites, respectively.
more lenient \(F_{1}\) score, which measures average token overlap between predictions and ground truth answers.
The READ suite contains 10k MFTs and 2k INVs, with functionality classes such as vocabulary and taxonomy. The MFTs are template generated, while the INVs are obtained from perturbing SQuAD data.
Invariance training in READ has one complication, since the task is to extract the answer span by predicting the start and end positions. Naively using the originally predicted positions would not work because the answer position may have changed after the perturbation. For example, let us take the original context-question pair (C: Paul travelled from Chicago to New York, Q: Where did Paul travel to?) and perturb it so that Chicago is changed to Los Angeles. The correct answer for the original input is (5, 6) as the start and end (word) positions, yielding the span "New York". Applying these positions to the perturbed input would extract "to New". Instead, we only compare the model outputs for the positions that correspond to the common ground of original and perturbed inputs. In the example, the outputs for the tokens "Paul", "travelled", "from", "to", "New" and "York". We minimise the cross-entropy between this restricted set of outputs for the original and perturbed inputs. This penalises changes in prediction for equivalent tokens (e.g. the probability of "Paul" being the start of the answer is \(0.1\) for the original input but \(0.15\) for the perturbed).
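A minimal sketch of this restricted invariance loss (ours, not the authors' code) gathers the position logits at the shared tokens and minimises the cross-entropy between the resulting distributions; renormalising over the shared positions is an assumption on our part.

```python
import torch
import torch.nn.functional as F

def invariance_loss(logits_orig, logits_pert, idx_orig, idx_pert):
    """Cross-entropy between the answer-position distributions of the original
    and perturbed inputs, restricted to their shared tokens.

    logits_*: (seq_len,) start (or end) position logits for one example;
    idx_*: LongTensors of the shared token positions, aligned so that
    idx_orig[k] and idx_pert[k] refer to the same token."""
    target = F.softmax(logits_orig[idx_orig], dim=-1).detach()   # original prediction as target
    log_pred = F.log_softmax(logits_pert[idx_pert], dim=-1)
    return -(target * log_pred).sum()
```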
### Generalisation methods
We use BeLUGA to compare several techniques used to improve generalisation:
**L2**: We apply a stronger-than-typical \(\ell_{2}\)-penalty coefficient of \(\lambda=0.1\).
**Dropout**: We triple the dropout rate for all fully connected layers and attention probabilities from the default value of \(0.1\) to \(0.3\).
**LP**: Instead of fine-tuning on suite data, we apply linear probing (LP), where the encoder parameters are frozen, and only the classification head parameters are updated. Previous work Kumar et al. (2022) has found this to generalise better than full fine-tuning.
**LP-FT**: We experiment with linear probing followed by fine-tuning, which Kumar et al. (2022) have shown to combine the benefits of fine-tuning (in-distribution performance) and linear-probing (out-of-distribution performance).
**Invariant risk minimisation (IRM)**Arjovsky et al. (2019), a framework for OOD generalisation that leverages different training environments to learn feature-label correlations that are invariant across the environments, under the assumption that such features are not spuriously correlated with the labels.
**Group distributionally robust optimisation (Group-DRO)**Sagawa et al. (2020), an algorithm that minimises not the average training loss, but the highest loss across the different training environments. This is assumed to prevent the model from adopting spurious correlations as long as such correlations do not hold on one of the environments.
**Fish**Shi et al. (2022), an algorithm for domain generalisation that maximises the inner product between gradients from different training environments, under the assumption that this leads models to learn features invariant across environments.
For the last three methods, we treat the different functionalities as different environments. For the IID\(+\)T and IID\(\rightarrow\)(IID\(+\)T) settings, we consider the i.i.d. data as an additional environment. In the multi-step training configurations (IID\(\rightarrow\)T and IID\(\rightarrow\)(IID\(+\)T)), we only apply the techniques during the second step: when training only with i.i.d. data we employ vanilla gradient descent, since we are interested in the generalisation effect of using suite data.
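As a rough illustration of how the environment-based objectives differ from vanilla training, the sketch below (a simplification on our part, not the original Group-DRO algorithm) back-propagates only the highest per-environment loss, with each functionality providing one environment batch.

```python
import torch

def group_dro_step(model, optimizer, env_batches, loss_fn):
    """One simplified worst-group update: compute the loss on one batch per
    environment (functionality) and back-propagate only the largest of them.

    env_batches: list of (inputs, labels) pairs, one per environment."""
    losses = [loss_fn(model(inputs), labels) for inputs, labels in env_batches]
    worst = torch.stack(losses).max()       # highest per-environment loss
    optimizer.zero_grad()
    worst.backward()
    optimizer.step()
    return float(worst)
```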
### Experimental setting
We use pre-trained BERT models Devlin et al. (2019) for all tasks. We follow Ribeiro et al. (2020) and use BERT-base for SENT and PARA and BERT-large for READ. All our experiments use AdamW Loshchilov and Hutter (2019) as the optimiser. When fine-tuning on i.i.d. data, we use the same hyper-parameters as the ones reported for models available on Hugging Face's model zoo.8 When fine-tuning on test suite data, we run a grid search over a range of values for batch size, learning rate and number of epochs.9 We select the configuration that performed best on \(\mathcal{T}_{\text{val}}\). To maintain the same compute budget across all methods, we do
not tune method-specific hyper-parameters. We instead use values shown to work well in the original papers and previous work Dranker et al. (2021).
## 5 Results and observations
### I.i.d. and generalisation scores
Table 3 exhibits i.i.d. and aggregate G scores for all tasks, training configurations and generalisation methods. Figure 1 presents pass rates of individual functionalities.
**Seen performance**: Fine-tuning on test suite data led to improvements for all tasks: the G\({}_{\text{seen}}\) scores are generally higher than the baseline scores (first row in Table 3).
That is, models were able to generalise across test cases from covered functionalities (from \(\mathcal{T}_{\text{train}}\) to \(\mathcal{T}_{\text{test}}\)) while retaining reasonable i.i.d. data performance. In some specific training configuration-method combinations this was not the case. We discuss this below when we compare methods and report the degenerate solutions.
**Generalisation performance**: For any given configuration-method pair, G\({}_{\text{seen}}\) is higher than G\({}_{\text{func}}\), G\({}_{\text{class}}\) and G\({}_{\text{type}}\), indicating a generalisation gap between seen and unseen functionalities. Furthermore, for all tasks, average (across methods) G\({}_{\text{func}}\) is higher than average G\({}_{\text{class}}\), which is higher than average G\({}_{\text{type}}\),10 indicating that generalisation gets harder as one moves from unseen functionalities to unseen functionality classes and test types. This aligns with previous work Luz de Araujo and Roth (2022), in which hate speech detection models are found to generalise within--but not across--functionality classes.
Footnote 10: SENT: 85.97/78.15/69.54, PARA: 75.04/72.22/71.55, READ: 49.23/46.66/43.46.
Improvements over the IID baseline were task dependent. Almost all configuration-method pairs achieved G\({}_{\text{func}}\) (22 of 24) and G\({}_{\text{class}}\) (20 of 24) scores significantly higher than the IID baseline for SENT, with improvements over the baseline as high as 18.44 and 12.84 percentage points (p.p.) for each metric, respectively. For PARA, improving over G\({}_{\text{class}}\) proved much harder--only seven configuration-method pairs could do so. Increases in score were also less pronounced, the best G\({}_{\text{func}}\) and G\({}_{\text{class}}\) scores being 6.91 and 2.19 p.p. above the baseline. READ showed the rarest and subtlest improvements, with a third of the approaches significantly improving functionality generalisation and none significantly improving functionality class generalisation. Improvements in each case were as high as 4.70 and 0.51 p.p. over the baseline.
**I.i.d. performance**: Fine-tuning on test suite data only (IID\(\rightarrow\)T configuration) reduced performance for all tasks' i.i.d. test sets. Fine-tuning on both suite and i.i.d. examples (IID\(+\)T and IID\(\rightarrow\)(IID\(+\)T)) helped retain--or improve--performance in some cases, but decreases were still more common. The IID\(\rightarrow\)(IID\(+\)T) configuration was the most robust regarding i.i.d. scores, with an average change (compared to the IID baseline) of \(-1.43/-0.50/-1.73\) for SENT/PARA/READ.
### Training configuration and method comparison
Using a mixture of i.i.d. and suite samples proved essential to retain i.i.d. performance: the overall scores (average over methods and i.i.d. test sets) for each configuration are 67.52, 76.33 and 87.98 for IID\(\rightarrow\)T, IID\(+\)T, and IID\(\rightarrow\)(IID\(+\)T) respectively.
That said, the environment-based generalisation algorithms (IRM, DRO and Fish) struggled in the IID\(+\)T configuration, underperforming when compared with the other methods. We hypothesize that in these scenarios models simply do not see enough i.i.d. data, as we treat it as just one more environment among many others (reaching as much as 54 in PARA). LP also achieves subpar scores, even though i.i.d. data is not undersampled. The problem here is the frozen feature encoder, as BERT features are not good enough without fine-tuning on i.i.d. task data--as was done in the other configurations, with clear benefits for LP.
No individual method performed best for all scores and tasks. That said, IID\(\rightarrow\)(IID\(+\)T) with L2, LP, LP-FT or Fish was able to achieve G\({}_{\text{func}}\) and G\({}_{\text{class}}\) scores higher or not significantly different from the baseline in all tasks, though IID\(\rightarrow\)(IID\(+\)T) with dropout was the best when score is averaged over all tasks and generalisation measures. Considering this same metric, IID\(\rightarrow\)(IID\(+\)T) was the most consistently good configuration, with all methods improving over the average IID baseline.
### DIR applicability
We have found that DIRs, as used for SENT, have limited applicability for both testing and training. The reason for that is that models are generally very confident about their predictions: the average prediction confidence for the test suite predictions is \(0.97\) for the IID model. On the evaluation side,
this makes some DIRs impossible to fail: the confidence cannot get higher and fail "not more confident" expectations. On the training side, DIRs do not add much of a training signal, as the training loss is near zero from the very beginning.11
Footnote 11: Confidence regularisation (Yu et al., 2021) could potentially increase DIR’s usefulness for training and evaluation purposes.
We see an additional problem with DIRs in the SENT setting: they confuse prediction confidence with sentiment intensity. Though prediction confidence may correlate with sentiment intensity, uncertainty also signals difficulty and ambiguousness (Swayamdipta et al., 2020). Consequently, sentiment intensity tests may not be measuring the intended phenomena. One alternative would be to disentangle the two factors: using prediction values only for confidence-based tests, and sentiment intensity tests only for sentiment analysis tasks with numeric or fine-grained labels.
### Negative transfer
Though G\({}_{\text{class}}\) scores are generally lower than G\({}_{\text{func}}\) scores, this is not always the case for the pass rates of individual functionalities. When there are contrastive functionalities within a class--those whose test cases have similar surface form but entirely different expected behaviours--it is very difficult to generalise from one to the other.
For example, the SRL class in PARA contains the functionalities "order does not matter for symmetric relations" and "order does matter for asymmetric relations" (functionalities 41 and 42 in the second row of Fig. 1). Their test cases are generated by nearly identical templates where the only change is the relation placeholder. Examples from
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l l} \hline \hline Config & Method & \multicolumn{2}{c}{SST2} & \multicolumn{2}{c}{QQP} & \multicolumn{2}{c}{SQuAD} & \multicolumn{2}{c}{SENT} & \multicolumn{2}{c}{PARA} & \multicolumn{2}{c}{READ} \\ \cline{3-14} & & Acc. & Acc. & EM & G\({}_{\text{seen}}\) & G\({}_{\text{func}}\) & G\({}_{\text{class}}\) & G\({}_{\text{type}}\) & G\({}_{\text{seen}}\) & G\({}_{\text{func}}\) & G\({}_{\text{class}}\) & G\({}_{\text{type}}\) & G\({}_{\text{seen}}\) & G\({}_{\text{func}}\) & G\({}_{\text{class}}\) & G\({}_{\text{type}}\) & Avg. \\ \hline \multirow{2}{*}{IID} & Vanilla & 91.74 & 91.28 & 84.58 & 72.94 & 72.94 & 72.94 & 74.70 & 74.70 & 74.70 & 74.70 & 67.58 & 67.58 & 67.58 & 67.58 & 67.58 & 71.74 \\ \hline \multirow{8}{*}{IID} & Vanilla & 82.34 & 89.36 & 3.82 & 90.31 & 86.58 & 80.95 & 65.98 & 93.29 & 80.05 & 75.75 & 73.72 & 7.33 & 7.04 & 6.86 & 6.60 & 56.21 \\ & L2 & 78.90 & 87.70 & 0.83 & 88.17 & 84.62 & 80.51 & 68.24 & 92.34 & 75.55 & 70.93 & 71.35 & 1.65 & 1.63 & 1.63 & 1.62 & 53.19 \\ & Dropout & 83.26 & 86.70 & 1.57 & 90.86 & 88.85 & 84.44 & 68.13 & 91.44 & 78.45 & 72.57 & 69.17 & 3.09 & 3.03 & 3.01 & 3.01 & 54.67 \\ & LP & 86.24 & 88.70 & 84.05 & 80.98 & 77.59 & 74.49 & 65.61 & 78.84 & 74.03 & 71.50 & 69.67 & 76.11 & 68.96 & **68.09** & 65.50 & 72.61 \\ & LP-FT & 80.28 & 90.01 & 1.15 & 89.06 & 87.11 & 84.53 & 64.40 & 93.48 & 79.87 & 75.19 & 72.58 & 2.27 & 2.25 & 2.24 & 2.23 & 54.60 \\ & IRM & 79.36 & 88.77 & 83.05 & 88.48 & 84.42 & 73.63 & 69.18 & 92.87 & 80.51 & 74.61 & 71.58 & 90.11 & 71.36 & 66.23 & 35.90 & 74.91 \\ & DRO & 83.72 & 82.71 & 0.61 & 91.14 & 86.56 & 78.85 & 66.11 & 89.60 & 73.58 & 69.29 & 71.73 & 1.21 & 1.20 & 1.20 & 1.20 & 52.64 \\ & Fish & 84.63 & 88.61 & 84.03 & 91.68 & 87.22 & 74.75 & **70.84** & 92.89 & **81.61** & 75.91 & 74.42 & 90.61 & 68.00 & 66.00 & **65.90** & 78.39 \\ \hline \multirow{8}{*}{IID} & Vanilla & 91.28 & 91.87 & 85.45 & 94.15 & 90.97 & 80.07 & 71.81 & 93.98 & 77.93 & 72.63 & 75.23 & 91.35 & 66.54 & 64.40 & 63.12 & 78.52 \\ & L2 & 89.45 & 91.80 & 86.02 & 93.49 & 88.37 & 77.98 & 70.15 & 94.20 & 78.34 & 73.27 & 74.81 & **91.94** & **72.28** & 61.88 & 63.58 & 78.36 \\ & Dropout & 91.74 & 89.89 & 85.13 & **95.69** & 90.77 & 84.49 & **70.43** & 93.18 & 75.39 & 74.16 & 74.49 & 91.22 & 67.19 & 62.42 & 62.69 & 78.81 \\ & LP & 78.44 & 66.50 & 15.68 & 70.96 & 68.58 & 66.29 & 67.55 & 59.95 & 88.77 & 59.48 & 59.90 & 1.677 & 16.29 & 16.17 & 15.66 & 48.06 \\ & LP-FT & 91.28 & 91.16 & **86.13** & 94.14 & 89.37 & 75.31 & 71.91 & 93.90 & 75.36 & 73.48 & 74.87 & 91.65 & 72.17 & 64.64 & 62.72 & 78.29 \\ & IRM & 57.11 & 50.59 & 10.94 & 72.70 & 70.90 & 69.08 & 64.26 & 66.30 & 50.12 & 52.63 & 51.56 & 19.50 & 11.40 & 10.73 & 10.62 & 45.82 \\ & DRO & 86.24 & 84.28 & 74.51 & 92.61 & 89.44 & 78.99 & 67.52 & 90.43 & 72.25 & 73.09 & 67.89 & 63.21 & 50.68 & 52.06 & 54.35 & 71.04 \\ & Fish & 87.39 & 77.64 & 70.50 & 93.27 & 89.37 & 78.58 & 70.48 & 86.20 & 62.57 & 65.22 & 71.15 & 82.01 & 58.38 & 47.68 & 56.22 & 71.76 \\ \hline \multirow{8}{*}{IID} & Vanilla & 90.83 & 91.79 & 83.41 & 93.92 & 90.04 & 80.35 & 71.93 & 94.25 & 79.16 & 75.89 & 75.27 & 89.82 & 68.17 & 63.54 & 62.94 & 78.77 \\ & L2 & 89.68 & **91.99** & 83.71 & 94.25 & 90.11 & 77.85 & 71.70 & **94.40** & 79.20 & 75.89 & **75.32** & 90.14 & 66.98 & 66.88 & 62.22 & 78.75 \\ \cline{1-1} & Dropout & 90.60 & 90.24 & 84.92 & 94.75 & 89.61 & **85.78** & 71.89 & 93.27 & 79.23 & 74.13 & 72.31 & 91.01 & 68.64 & 63.36 & 66.10 & **79.17** \\ \cline{1-1} & LP & **92.20** & 91.28 & 83.97 & 78.89 
& 74.23 & 72.94 \\ \hline \hline \end{tabular}
\end{table}
Table 3: I.i.d. and aggregate G scores for all tasks, training configurations and generalisation methods.
the first and second functionalities would include (Q1: Is Natalie dating Sophia? Q2: Is Sophia dating Natalie?) and (Q1: Is Matthew lying to Nicole? Q2: Is Nicole lying to Matthew?) respectively. Though their surface forms are similar, they have opposite labels: duplicate and not duplicate.
To compute \(s_{\mathcal{T}\text{func}}\), a model is trained with samples from one functionality and evaluated on samples from the other. Consequently, the surface form will be spuriously correlated with the label seen during training and models may blindly assign it to the question pairs that fit the template. This would work well for the seen functionality, but samples from the unseen one would be entirely misclassified. Conversely, when computing the \(s_{\mathcal{T}\text{class}}\) score, the model will not have been trained on either of the functionalities and will not have the chance to adopt the heuristic, leading to better unseen pass rates.
### Degenerate solutions
Settings where the G\({}_{\text{type}}\) score is higher than the baseline are much rarer than for the other measures, happening only in one case for SENT (IID\(\rightarrow\)T with dropout) and never for READ. One explanation is that training only on perturbation-based tests (with no MFTs) can lead to degenerate solutions, such as passing all tests by always predicting the same class.
To assess if that was the case, we examined the predictions on the SST-2 test set of the IID\(\rightarrow\)T vanilla model fine-tuned only on DIRs and INVs. We have found that \(95.18\%\) of the i.i.d. data points were predicted as negative, though the ground truth frequency for that label is \(47.25\%\). When examining the predictions for MFTs, the results are even more contrasting: \(0.29\%\) of the predictions were negative, with the ground truth frequency being \(43.42\%\). These results show that the model has,
Figure 1: Average and individual pass rates for all tasks, methods and training configurations. From first to third row: results for SENT, PARA and READ. From first to fourth column: seen evaluation, functionality generalisation, functionality class generalisation, and test type generalisation scores. The y-axis correspond to all training configuration-method pairs; the x-axis shows the average functionality pass rate followed by the individual pass rates. The blue horizontal and vertical lines demarcate different training configurations and functionality classes, respectively. The colors in the x-axis designate the different test types: blue for MFTs, red for INVs an green for DIRs.
indeed, adopted the degenerate solution. Interestingly, it predicts different classes depending on the domain, almost always predicting negative for i.i.d. data and positive for suite data.
The gap between G\({}_{\text{class}}\) and G\({}_{\text{type}}\) scores in PARA is not as severe, possibly due to the supervised signal in its DIRs. Since these tests expect inputs to correspond to specific labels--as opposed to DIRs for SENT, which check for changes in prediction confidence--always predicting the same class would not be a good solution. Indeed, when examining the predictions on the QQP test set of the vanilla IID\(\rightarrow\)T model fine-tuned with no MFT data, we see that \(58.70\%\) of question pairs are predicted as not duplicate, which is similar to the ground truth frequency, \(63.25\%\). The same is true when checking the predictions for MFTs: \(64.47\%\) of the data points are predicted as not duplicate, against a ground truth frequency of \(52.46\%\).
The READ scenario is more complex--instead of categories, spans are extracted. Manual inspection showed that some IID\(\rightarrow\)T models adopted degenerate solutions (e.g. extracting the first word, a full stop or the empty span as the answer), even when constrained by the MFT supervised signal. Interestingly, the degenerate solutions were applied only for INV tests (where such invariant predictions work reasonably) and i.i.d. examples (where they do not). On the other hand, these models were able to handle the MFTs well, obtaining near perfect scores and achieving high \(s_{\mathcal{T}\text{seen}}\) scores even though i.i.d. performance is catastrophic. The first grid of the third row in Fig. 1 illustrates this: the high \(s_{\mathcal{T}\text{seen}}\) scores are shown on the first column, and the MFT pass rates on the columns with blue x-axis numbers.
### Summary interpretation of the results
**Figure 1**: Figure 1 supports fine-grained analyses that consider performance on individual functionalities in each generalisation scenario. One can interpret it horizontally to assess the functionality pass rates for a particular method. For example, the bottom left grid, representing seen results for READ, shows that IID\(+\)T with LP behaves poorly on almost all functionalities, confirming the importance of fine-tuning BERT pre-trained features (§5.2).
Alternatively, one can interpret it vertically to assess performance and generalisation trends for individual functionalities. For example, models generalised well to functionality 21 of the READ suite (second grid of the bottom row), with most methods improving over the IID baseline. However, under the functionality class evaluation scenario (third grid of the bottom row), improvements for functionality 21 are much rarer. That is, the models were able to generalise to functionality 21 as long as they were fine-tuned on cases from functionalities from the same class (20 and 22)12.
Footnote 12: These functionalities assess co-reference resolution capabilities: 20 and 21 have test cases with personal and possessive pronouns, respectively; 22 tests whether the model distinguishes “former” from “latter”.
Such fine-grained analyses show the way for more targeted explorations of generalisation (e.g. why do models generalise to functionality 21 but not to functionality 20?), which can guide subsequent data annotation, selection and creation efforts, and shed light on model limitations.
**Table 3**: For i.i.d. results, we refer to the SST2, QQP and SQuAD columns. These show that the suite-augmented configuration and methods (all rows below and including IID\(\rightarrow\)T Vanilla) generally hurt i.i.d. performance. However, improvements can be found for some methods in the IID\(+\)T and IID\(\rightarrow\)(IID\(+\)T). **Takeaway: fine-tuning on behavioural tests degrades model general performance, which can be mitigated by jointly fine-tuning on i.i.d. samples and behavioural tests.**
For performance concerning seen functionalities, we refer to the G\({}_{\text{seen}}\) columns. Generalisation scores concerning unseen functionalities, functionality classes and test types can be found in the G\({}_{\text{func}}\), G\({}_{\text{class}}\) and G\({}_{\text{type}}\) columns. Across all tasks, training configurations and methods, the G\({}_{\text{seen}}\) scores are higher than the others. **Takeaway: evaluating only on the seen functionalities (Liu et al., 2019; Malon et al., 2022) is overoptimistic--improving performance on seen cases may come at the expense of degradation on unseen cases. This is detected by the underperforming generalisation scores.**
Previous work on generalisation in behavioural learning (Luz de Araujo and Roth, 2022; Rozen et al., 2019) corresponds to the IID\(\rightarrow\)T Vanilla row. It shows deterioration of i.i.d. scores, poor generalisation in some cases, and lower average performance compared with the IID baseline. However, our experiments with additional methods (all rows below IID\(\rightarrow\)T Vanilla), show that some configuration-method combinations improve the
average performance. **Takeaway: while naive behavioural learning generalises poorly, more sophisticated algorithms can lead to improvements. BeLUGA is a method that detects and measures further algorithmic improvements.**
## 6 Related work
Traditional NLP benchmarks Wang et al. (2018, 2019) are composed of text corpora that reflect the naturally-occurring language distribution, which may fail to sufficiently capture rarer, but important phenomena Belinkov and Glass (2019). Moreover, since these benchmarks are commonly split into identically distributed train and test sets, spurious correlations in the former will generally hold for the latter. This may lead to the obfuscation of unintended behaviours, such as the adoption of heuristics that work well for the data distribution but not in general Linzen (2020); McCoy et al. (2019). To account for these shortcomings, complementary evaluations methods have been proposed, such as using dynamic benchmarks Kiela et al. (2021) and behavioural test suites Kirk et al. (2022); Rottger et al. (2021); Ribeiro et al. (2020).
A line of work has explored how training on challenge and test suite data affects model performance by fine-tuning on examples from specific linguistic phenomena and evaluating on other samples from the same phenomena Malon et al. (2022); Liu et al. (2019). This is equivalent to our seen evaluation scenario, and thus cannot distinguish between models with good generalisation and those that have overfitted to the seen phenomena. We account for that with our additional generalisation measures, computed using only data from held-out phenomena.
Other efforts have also used controlled data splits to examine generalisation: McCoy et al. (2019) have trained and evaluated on data from disjoint sets of phenomena relevant for Natural Language Inference (NLI); Rozen et al. (2019) have split challenge data according to sentence length and constituency parsing tree depth, creating a distribution shift between training and evaluation data; Luz de Araujo and Roth (2022) employ a cross-functional analysis of generalisation in hate speech detection. Though these works address the issue of overfitting to seen phenomena, their analyses are restricted to specific tasks and training configurations. Our work gives a more comprehensive view of generalisation in behavioural learning by examining different tasks, training configurations, test types and metrics. Additionally, we use this setting as an opportunity to compare the generalisation impact of both simple regularisation mechanisms and state-of-the-art domain generalisation algorithms.
## 7 Conclusion
We have presented BeLUGA, a framework for cross-functional analysis of generalisation in NLP systems that both makes explicit the desired system traits and allows for quantifying and examining several axes of generalisation. While in this work we have used BeLUGA to analyse data from behavioural suites, it can be applied in any setting where one has access to data structured into meaningful groups (e.g. demographic data, linguistic phenomena, domains).
We have shown that, while model performance for seen phenomena greatly improves after fine-tuning on test suite data, the generalisation scores reveal a more nuanced view, in which the actual benefit is less pronounced and depends on the task and training configuration-method combination. We have found the IID\(\rightarrow\)(IID\(+\)T) configuration to result in the most consistent improvements. Conversely, some methods struggle in the IID\(\rightarrow\)T and IID\(+\)T settings by overfitting to the suite or underfitting i.i.d. data, respectively. In these cases, a model can practically ace all tests and still fail badly on i.i.d. data, which reinforces the importance of considering both i.i.d. and test suite performance when comparing systems; BeLUGA's aggregate scores account for exactly this.
These results show that naive behavioural learning has unintended consequences, which the IID\(\rightarrow\)(IID\(+\)T) configuration mitigates to some degree. There is still much room for improvement, though, especially if generalisation to unseen types of behaviour is desired. Through BeLUGA, progress in that direction is measurable, and further algorithmic improvements might make behavioural learning an option to ensure desirable behaviours and preserve general performance and generalisability of the resulting models. We do not recommend training on behavioural tests in the current technological state. Instead, we show a way to improve research on reconciling the qualitative guidance of behavioural tests with desired generalisation in NLP models.
## Acknowledgements
We thank the anonymous reviewers and action editors for the helpful suggestions and detailed comments. We also thank Matthias Agenmacher, Luisa Marz, Anastasiia Sedova, Andreas Stephan, Lukas Thoma, Yuxi Xia, and Lena Zellinger for the valuable discussions and feedback. This research has been funded by the Vienna Science and Technology Fund (WWTF) [10.47379/VRG19008] "Knowledge-infused Deep Learning for Natural Language Processing".
|
2310.07654 | Audio-Visual Neural Syntax Acquisition | We study phrase structure induction from visually-grounded speech. The core
idea is to first segment the speech waveform into sequences of word segments,
and subsequently induce phrase structure using the inferred segment-level
continuous representations. We present the Audio-Visual Neural Syntax Learner
(AV-NSL) that learns phrase structure by listening to audio and looking at
images, without ever being exposed to text. By training on paired images and
spoken captions, AV-NSL exhibits the capability to infer meaningful phrase
structures that are comparable to those derived by naturally-supervised text
parsers, for both English and German. Our findings extend prior work in
unsupervised language acquisition from speech and grounded grammar induction,
and present one approach to bridge the gap between the two topics. | Cheng-I Jeff Lai, Freda Shi, Puyuan Peng, Yoon Kim, Kevin Gimpel, Shiyu Chang, Yung-Sung Chuang, Saurabhchand Bhati, David Cox, David Harwath, Yang Zhang, Karen Livescu, James Glass | 2023-10-11T16:54:57Z | http://arxiv.org/abs/2310.07654v1 | # Audio-Visual Neural Syntax Acquisition
###### Abstract
We study phrase structure induction from visually-grounded speech. The core idea is to first segment the speech waveform into sequences of word segments, and subsequently induce phrase structure using the inferred segment-level continuous representations. We present the Audio-Visual Neural Syntax Learner (AV-NSL) that learns phrase structure by listening to audio and looking at images, without ever being exposed to text. By training on paired images and spoken captions, AV-NSL exhibits the capability to infer meaningful phrase structures that are comparable to those derived by naturally-supervised text parsers, for both English and German. Our findings extend prior work in unsupervised language acquisition from speech and grounded grammar induction, and present one approach to bridge the gap between the two topics.
Cheng-I Jeff Lai\({}^{1}\), Freda Shi\({}^{2*}\), Puyuan Peng\({}^{3*}\),
Yoon Kim\({}^{1}\), Kevin Gimpel\({}^{2}\), Shiyu Chang\({}^{4}\), Yung-Sung Chuang\({}^{1}\), Saurabhchand Bhati\({}^{1}\),
David Cox\({}^{5}\), David Harwath\({}^{3}\), Yang Zhang\({}^{5}\), Karen Livescu\({}^{2}\), James Glass\({}^{1}\)
\({}^{1}\)MIT \({}^{2}\)TTIC \({}^{3}\)UT Austin \({}^{4}\)UC Santa Barbara \({}^{5}\)MIT-IBM Watson AI Lab
[https://github.com/jefflai108/AV-NSL](https://github.com/jefflai108/AV-NSL)
multi-modal learning, unsupervised learning, grammar induction, speech parsing
## 1 Introduction
Multiple levels of early language acquisition happen without supervisory feedback [1]; it is therefore interesting to consider whether automatic learning of language, from identifying lower-level phones or words to inducing high-level linguistic structure like grammar, can also be done in _natural settings_. In these settings, we have access to parallel data from different modalities, while the amount of data is limited. To this end, two concurrent lines of effort have been pursued:
* Zero-resource speech processing, exemplified by the unsupervised discovery of sub-phones, phones, and words [2], involves constructing speech models without relying on textual intermediates, and models how children naturally learn to speak prior to acquiring reading or writing skills.
* Grammar induction is a process that learns latent syntactic structures, such as constituency [3] and dependency trees [4], without relying on annotated structures as supervision.
In recent years, multi-modal learning has emerged as a promising and effective objective in various domains: in speech processing, [5] proposes leveraging parallel image-speech data to acquire associated words [6] and phones [7]; in syntax induction, [8] proposes to induce constituency parses from captioned images. These successes, coupled with insights from developmental psychology [1], motivate us to develop a computational model that utilizes the visual modality to acquire both low-level words and high-level phrase structures directly from speech waveforms, without relying on intermediate text or any form of direct supervision.
In this paper, we present the Audio-Visual Neural Syntax Learner (AV-NSL; Fig. 1), which induces the syntactic structure of visually grounded speech utterances. The speech utterances are represented by sequences of _continuous_ speech segment representations, which are derived from a pretrained model that simultaneously discovers word-like units and learns segment representations [9]. AV-NSL (1) learns to map the representations of speech segments and images into a shared embedding space, resulting in higher similarity scores for segments and images that convey similar meanings, (2) estimates the visual _concreteness_ of speech segments using the learned embedding space, and (3) outputs speech segments with higher concreteness as the constituents.
To assess the effectiveness of AV-NSL, we compare it with both the ground truth and the grounded text parser VG-NSL [8], as well as several alternative modeling choices such as compound-PCGs [10] over acoustic units. An ablation study supports the reasonability of our approach. As a by-product, we improve over the previous state of the art in unsupervised word segmentation.
Figure 1: We study the process of inducing constituency parse trees on unsupervised inferred word segments from raw speech waveforms. No intermediate text tokens or automatic speech recognition (ASR) is needed. For illustration, here we show the gold parse tree from the given text caption.
## 2 Related Work
**Grounded grammar induction.** Since the proposal of the visually grounded grammar induction task [8], there has been subsequent research on the topic [11, 12, 13, _inter alia_]. To the best of our knowledge, existing work on grammar induction from distant supervision has been based almost exclusively on text input. The most relevant work to ours is [12], where speech features are treated as auxiliary input for video-text grammar induction; that is, [12] still requires text data and an off-the-shelf automatic speech recognition model. In contrast to existing approaches, AV-NSL employs raw speech data and bypasses text to induce constituency parse trees, utilizing distant supervision from parallel audio-visual data.
**Spoken word discovery.** Following the pioneering work in spoken term discovery [14], a line of work has been done to discover repetitive patterns or keywords from unannotated speech [15, 16, 17, _inter alia_]. Other related work has considered tasks such as unsupervised word segmentation and spoken term discovery [18, 19, 20, 21, _inter alia_], and the ZeroSpeech challenges [22] have been a major driving force in the field. In a new line of work, [6, _inter alia_] show that word-like and phone-like units can be acquired from speech by analyzing audio-visual retrieval models. [9] shows that word discovery naturally emerges from a visually grounded, self-supervised speech model, by analyzing the model's self-attention heads. In contrast, AV-NSL attempts to induce phrase structure, in the form of constituency parsing on top of unsupervised word segments.
**Speech parsing and its applications.** Early work on speech parsing can be traced back to SParseval [23], a toolkit that evaluates text parsers given potentially errorful ASR output. In the past, syntax has also been studied in the context of speech prosody [24, 25], and [26, 27, 28] incorporate acoustic-prosodic features for text parsing with auxiliary speech input. [29] trains a text parser [30] to detect speech disfluencies, and [31] trains a text dependency parser from speech jointly with an ASR model. There is concurrent work [32] that extends DIORA [33] to unsupervised speech parsing. On the application side, syntactic parses of text have been applied to prosody modeling in end-to-end text-to-speech [34, 35, 36]. While this work builds upon pre-existing text parsing algorithms, we focus on phrase structure induction in the absence of text.
## 3 Method
Given a set of paired spoken captions and images, the Audio-Visual Neural Syntax Learner (AV-NSL) infers phrase structures from speech utterances without relying on text. The basis of AV-NSL is the Visually-Grounded Neural Syntax Learner (VG-NSL) [8, SS3.1], which learns to induce constituency parse trees by guiding a sequential sampling process with text-image matching. We break down the problem into two steps: (1) obtaining sequences of word segments, and (2) extracting segment-level self-supervised representations. With these simple extensions to VG-NSL, AV-NSL induces phrase structure without reading text, but rather by listening to speech and looking at images.
### Background: VG-NSL
VG-NSL [8] consists of a bottom-up text parser and a text-image embedding matching module. The parser consists of an embedding similarity scoring function _score_ and an embedding combination function _combine_. Given a text caption, denoted by a sequence of word embeddings \(W\!=\!\{w_{i}^{0}\}_{i=1}^{N}\) of length \(N\), the parser synthesizes a constituency parse tree by recursively scoring and combining adjacent embeddings at each step. At step \(t\), VG-NSL (1) evaluates all consecutive pairs of embeddings \(\langle w_{i}^{t},w_{i+1}^{t}\rangle\) and assigns a scalar score to each with _score_, (2) selects a pair \(\langle w_{i^{\prime}}^{t},w_{i^{\prime}+1}^{t}\rangle\) based on the corresponding scores,1 and (3) combines the selected pair of embeddings via _combine_ to form a new phrase embedding for the next step, copying the remaining ones to the next step. In VG-NSL, _score_ is parameterized by a 2-layer ReLU-activated MLP, and _combine_ is defined by the \(L_{2}\)-normalized vector addition of the input embeddings. The resulting tree is inherently binary and there are \(N\!-\!1\) combining steps in total, as the parser must combine two nodes in each step.
Footnote 1: In the training stage, the pair is sampled from a distribution where the probability of a pair is proportional to \(\exp(\textit{score})\); in the inference stage, the \(\operatorname{argmax}\) is selected.
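To make the parsing procedure concrete, the following schematic (our sketch, not the released VG-NSL code) spells out the bottom-up loop; `score_mlp` stands in for the scoring MLP and the combine step is the \(L_{2}\)-normalised sum described above.

```python
import torch
import torch.nn.functional as F

def parse_greedy(embs, score_mlp):
    """Greedily build a binary tree over a sequence of embeddings.

    embs: list of 1-D tensors (one per word); score_mlp: module mapping the
    concatenation of two embeddings to a scalar score. Returns the list of
    merged spans (the internal nodes of the induced binary tree)."""
    nodes = list(embs)
    spans = [(i, i) for i in range(len(embs))]
    merges = []
    while len(nodes) > 1:
        scores = torch.stack(
            [score_mlp(torch.cat([nodes[i], nodes[i + 1]])) for i in range(len(nodes) - 1)]
        ).squeeze(-1)
        k = int(scores.argmax())               # argmax at inference; sampled during training
        merged = F.normalize(nodes[k] + nodes[k + 1], dim=-1)  # combine: L2-normalised sum
        span = (spans[k][0], spans[k + 1][1])
        nodes[k:k + 2] = [merged]
        spans[k:k + 2] = [span]
        merges.append(span)
    return merges
```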
VG-NSL trains the word embeddings \(W\) and a text-image embedding matching module (parameterized with \(\Phi\)) jointly by minimizing the phrase-level hinge-based triplet loss:
\[\mathcal{L}_{\Phi,W} =\!\sum_{\mathbf{c}_{W},\mathbf{i}_{\Phi},\mathbf{c}_{W}^{\prime}} \left[\cos(\mathbf{i}_{\Phi},\mathbf{c}_{W}^{\prime})\!-\!\cos(\mathbf{i}_{ \Phi},\mathbf{c}_{W})\!+\!\delta\right]_{+}\] \[+\sum_{\mathbf{c}_{W},\mathbf{i}_{\Phi},\mathbf{i}_{\Phi}^{ \prime}}\left[\cos(\mathbf{i}_{\Phi}^{\prime},\mathbf{c}_{W})\!-\!\cos( \mathbf{i}_{\Phi},\mathbf{c}_{W})\!+\!\delta\right]_{+},\]
where \(\mathbf{c}\), \(\mathbf{i}\) are the corresponding vector representations of a pair of parallel text constituent and image; \(\mathbf{c}^{\prime}\) is the representation of an imposter constituent that is not paired with \(\mathbf{i}\); \(\mathbf{i}^{\prime}\) is an imposter image representation that is not in parallel with \(\mathbf{c}\); \(\delta\) is a constant margin; \([\cdot]_{+}\!:=\!\max(\cdot,0)\). By minimizing the above loss function, the embedding space brings semantically similar image and text span representations closer to each other, while pushing apart those that are semantically different. Additionally, the loss function can be adapted to estimate the visual _concreteness_ of a text span: intuitively, the smaller the loss related to a candidate constituent \(c\), the larger the concreteness of \(c\), and vice versa. Taking the additive inverse of values inside both \([\cdot]_{+}\) operators, the concreteness of a constituent \(c\) is defined as
\[\textit{concrete}(\mathbf{c};\mathbf{i})=\sum_{\mathbf{c}^{\prime}}\left[\cos(\mathbf{i},\mathbf{c})-\cos(\mathbf{i},\mathbf{c}^{\prime})-\delta\right]_{+}+\sum_{\mathbf{i}^{\prime}}\left[\cos(\mathbf{i},\mathbf{c})-\cos(\mathbf{i}^{\prime},\mathbf{c})-\delta\right]_{+}.\]
Finally, the estimated concreteness scores are passed back to the parser as rewards to the constituents. VG-NSL jointly optimizes the visual-semantic embedding loss, and trains the parser with REINFORCE [37].
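The triplet loss and the concreteness score above can be written compactly in code; the sketch below is our paraphrase of the equations and assumes all embeddings are \(L_{2}\)-normalised so that dot products equal cosine similarities (the margin value is a placeholder).

```python
import torch

def triplet_loss(c, i, c_imp, i_imp, delta=0.2):
    """Hinge-based triplet loss for one matched (constituent, image) pair.

    c, i: matched constituent / image embeddings, shape (d,);
    c_imp, i_imp: imposter constituent / image embeddings, shape (k, d) / (m, d)."""
    pos = (i * c).sum()                                     # cos(i, c)
    loss_c = torch.clamp(i @ c_imp.T - pos + delta, min=0).sum()
    loss_i = torch.clamp(i_imp @ c - pos + delta, min=0).sum()
    return loss_c + loss_i

def concreteness(c, i, c_imp, i_imp, delta=0.2):
    """Visual concreteness of c: additive inverse of the hinge arguments,
    so a smaller loss corresponds to a larger concreteness."""
    pos = (i * c).sum()
    return (torch.clamp(pos - i @ c_imp.T - delta, min=0).sum()
            + torch.clamp(pos - i_imp @ c - delta, min=0).sum())
```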
### Audio-Visual Neural Syntax Learner
AV-NSL extends VG-NSL by: (1) incorporating audio-visual word segmentation to obtain sequences of word segments from unannotated speech, (2) jointly optimizing segment-level embeddings and phrase structure induction, and (3) employing deeper parameterization for the _score_ and _combine_ functions in the parser to handle the noisier speech representations. In AV-NSL, _score_ and _combine_ are parameterized by GELU-activated [38] multi-layer perceptrons (MLPs). Below we describe (1) and (2) in detail.
**Audio-visual word segmentation:** For word segmentation, AV-NSL leverages VG-HuBERT [9] (Fig. 2; bottom), a model trained to associate spoken captions with natural images via retrieval. After training, spoken word segmentation emerges via magnitude thresholding of the self-attention heads of the audio encoder: at layer \(l\), we (1) sort the attention weights from the [CLS] token to other tokens in descending order, and (2) apply a threshold \(p\) to retain the top \(p\%\) of the overall attention magnitude (Fig. 3, top).
Empirically, however, the VG-HuBERT word segmenter tends to ignore function words such as \(a\) and _of_. Therefore, we devise a simple heuristic to pick up function word segments by inserting a short word segment wherever there is a gap of more than \(s\) seconds that VG-HuBERT fails to place a segment (Fig. 3). We additionally apply unsupervised voice activity detection [39] to restrict segment insertion to only voiced regions. The length of the insertion gap \(s\), the VG-HuBERT segmentation layer \(l\), attention magnitude threshold \(p\%\), and model training snapshots across random seeds and training steps, are all chosen in an unsupervised fashion using minimal Bayes' risk decoding (SS3.4).
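A minimal sketch of the insertion heuristic follows (ours; the gap threshold, inserted segment length and placement in the middle of the gap are illustrative assumptions, not the paper's settings):

```python
def insert_function_word_segments(segments, voiced_regions, min_gap=0.25, seg_len=0.08):
    """Insert a short placeholder segment in the middle of every gap longer than
    `min_gap` seconds between consecutive word segments, restricted to voiced regions.

    segments, voiced_regions: sorted lists of (start, end) times in seconds."""
    def voiced(start, end):
        return any(vs <= start and end <= ve for vs, ve in voiced_regions)

    out = list(segments)
    for (_, prev_end), (next_start, _) in zip(segments, segments[1:]):
        if next_start - prev_end > min_gap:
            mid = (prev_end + next_start) / 2
            candidate = (mid - seg_len / 2, mid + seg_len / 2)
            if voiced(*candidate):
                out.append(candidate)
    return sorted(out)
```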
**Speech segment representations:** We use the word segments output by VG-HuBERT to calculate the representations. Let \(R=\{r_{j}\}_{j=1}^{T}\) denote the frame-level representation sequence, where \(T\) is the speech sequence length. Audio-visual word segmentation returns an alignment \(A(i)=r_{pq}\) that maps the \(i^{th}\) word segment to the \(p^{th}\) to \(q^{th}\) acoustic frames. The segment-level continuous representation for the \(i^{th}\) word is \(w_{i}^{0}=\sum_{t\in A(i)}a_{i,t}r_{t}\), where \(a_{i,t}\) are the attention weights over the frames specified by \(A(i)\). In AV-NSL, \(R\) is the layer representation from a pretrained speech model (e.g., VG-HuBERT), and \(a_{i,t}\) are the [CLS] token attention weights over frames within each segment.
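In code, the segment representation is an attention-weighted pool of the frame features inside each segment; the sketch below is ours and renormalises the [CLS] attention weights within each segment, which is an assumption.

```python
import torch

def pool_segments(frames, cls_attn, boundaries):
    """Attention-weighted pooling of frame features into segment embeddings.

    frames: (T, D) frame-level representations; cls_attn: (T,) [CLS]-to-frame
    attention weights; boundaries: list of (p, q) inclusive frame ranges, one
    per word segment. Returns a (num_segments, D) tensor."""
    pooled = []
    for p, q in boundaries:
        w = cls_attn[p:q + 1]
        w = w / w.sum()                      # renormalise within the segment
        pooled.append((w.unsqueeze(-1) * frames[p:q + 1]).sum(dim=0))
    return torch.stack(pooled)
```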
### Self-Training with s-Benepar
[40] has shown that self-training can usually improve parsing performance: the approach involves training an additional parser to fit the output generated by a pre-existing learned parser.
Figure 3: Example of VG-HuBERT word segmentation (top). Different colors denote different attention heads, and color transparency represents the magnitude of the attention weights. Adjacent attention boundaries (vertical dashed lines) are used as the word boundaries. _Segment insertion_ (bottom): short segments (marked with “+”) are placed in long enough gaps between existing segments to recover function words. Best viewed in color.
Figure 2: Illustration of AV-NSL, which extends VG-NSL [8] to audio-visual inputs. Taking a pair of speech utterance and its corresponding image as the input, AV-NSL encodes spans of speech utterances and images into a joint embedding space. We train AV-NSL by encouraging it to output more visually concrete spans as constituents. Note that no text is used throughout.
Concretely, [40] uses Benepar [30], a supervised neural constituency parser, as the base model for self-training, where it (1) takes a sentence as the input, (2) maps it to word representations, and (3) predicts a score for all spans of being in the constituency parse tree. For inference, the model evaluates all possible tree structures and outputs the highest-scoring one.
Following [40], we apply self-training to improve AV-NSL. We extend Benepar to the speech domain and introduce s-Benepar, which takes segment-level continuous mean-pooling HuBERT representations, instead of words, as the input, and outputs the constituency parse trees.
### Unsupervised Decoding
Another key ingredient of AV-NSL is applying consistency-based decoding [8], which is similar in spirit to minimum Bayes risk (MBR) decoding, for both spoken word segmentation and phrase-structure induction. Given a loss function \(\ell_{\textit{MBR}}(O_{1},\!O_{2})\) between two outputs \(O_{1}\) and \(O_{2}\), and a set of \(k\) outputs \(\mathcal{O}\!=\!\{O_{1},\!...,\!O_{k}\}\), we select the optimal output
\[\hat{O}\!=\!\arg\!\min_{O^{\prime}\in\mathcal{O}}\!\sum_{O^{\prime\prime}\in \mathcal{O}}\!\ell_{\textit{MBR}}(O^{\prime},\!O^{\prime\prime}).\]
For word segmentation, we define the loss between two segmentation proposals \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) as \(\ell_{\textit{MBR}}(\mathcal{S}_{1},\mathcal{S}_{2})=-\textsc{mIoU}(\mathcal{ S}_{1},\!\mathcal{S}_{2}),\) where \(\textsc{mIoU}(\cdot,\cdot)\) denotes the mean intersection over union ratio across all matched pairs of predicted word spans. We match the predicted word spans using the maximum weight matching algorithm [41], where word spans correspond to vertices, and we define edge weights by the temporal overlap between the corresponding spans.
For phrase structure induction, the loss function between two parse trees \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) is \(\ell_{\textit{MBR}}(\mathcal{T}_{1},\mathcal{T}_{2})=1-F_{1}(\mathcal{T}_{1}, \mathcal{T}_{2}),\) where \(F_{1}(\cdot,\cdot)\) denotes the \(F_{1}\) score between the two trees.
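The selection rule itself is generic and the same sketch (ours) serves both word segmentation and parsing, given the appropriate pairwise loss; `f1_score` and `mean_iou` below are hypothetical helpers.

```python
def mbr_select(candidates, pairwise_loss):
    """Return the candidate with the smallest total loss against all other
    candidates, i.e. the minimum-Bayes-risk choice over the set."""
    return min(candidates, key=lambda o: sum(pairwise_loss(o, other) for other in candidates))

# Hypothetical helpers: for trees, pairwise_loss = lambda t1, t2: 1.0 - f1_score(t1, t2);
# for segmentations, pairwise_loss = lambda s1, s2: -mean_iou(s1, s2).
```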
## 4 Experiments
### Setup
**Datasets.** We first evaluate models on SpokenCOCO [42], the spoken version of MSCOCO [43] where the text captions in English are read verbally by humans. It contains 83k/5k/5k images for training, validation and testing, respectively. Each image has five corresponding captions.
We also extend our experiments to German, where we synthesize German speech from the Multi30K captions [44].2 It contains 29k/1k/1k images for training, validation and testing, respectively. Each image has one corresponding caption. Following [8], we use pretrained Benepar [30], an off-the-shelf parser, to generate the oracle parse trees for captions.
Footnote 2: Synthesized with pre-trained German Tacotron2 from [https://github.com/thorstenMueller/Thorsten-Voice](https://github.com/thorstenMueller/Thorsten-Voice).
**Preprocessing.** For oracle word segmentation, we use the Montreal Forced Aligner [45] trained on the specific language (i.e., English or German). We remove utterances that have mismatches between ASR transcripts and text captions.
### Baselines and Toplines
We consider the following baselines and modeling alternatives to examine each component of AV-NSL:
**Trivial tree structures.** Following [8], we include baselines without linguistic information: random binary trees, left-branching binary trees, and right-branching binary trees.
**AV-cPCFG.** We train compound probabilistic context free grammars (cPCFG) [10] on word-level discrete speech tokens given by VG-HuBERT. Unlike in AV-NSL, the segment representations are discretized via k-Means to obtain word-level indices; that is, AV-cPCFG leverages visual cues only for segmentation and segment representations, and not for phrase structure induction.
**DPDP-cPCFG.** In contrast to AV-cPCFG, DPDP-cPCFG does not rely on any visual grounding throughout. We use DPDP [46] and pre-trained HuBERT [47] followed by k-Means to obtain discrete word indices.3
Footnote 3: We sweep the number of word clusters over \(\{1\!k,2\!k,4\!k,\!12\!k,16\!k\}\).
**Oracle AV-NSL (topline).** To remove the uncertainty of unsupervised word segmentation, we directly train AV-NSL on top of oracle word segmentation via forced alignment. Due to the absence of VG-HuBERT, the frame-level representations \(R\) are obtained from pre-trained HuBERT while the attention weights \(a_{i,t}\) are parameterized by a 1-layer MLP, jointly trained with the tree sampling module instead.
### Evaluation Metrics
**Word segmentation.** We use the standard word boundary prediction metrics (precision, recall and \(F_{1}\)), which are calculated by comparing the temporal position between inferred word boundaries and forced aligned word boundaries. An inferred boundary located within \(\pm 20\textit{ms}\) of a forced aligned boundary is considered a successful prediction.
**Parsing.** For parsing with oracle word segmentation, we use Parseval [48] to calculate the \(F_{1}\) score between the predicted and reference parse trees. For parsing with inferred word segmentation, due to the mismatch in the number of nodes between the predicted and reference parse trees, we use the structured average intersection-over-union ratio (SAIoU [49]) as an additional metric.
SAIoU takes both word segmentation quality and temporal overlap between induced constituents into consideration. Concretely, the input is two constituency parse trees over the same speech utterance, \(\mathcal{T}_{1}\!=\!\{a_{i}\}_{i=1}^{n}\) and \(\mathcal{T}_{2}\!=\!\{b_{j}\}_{j=1}^{m}\), where \(a_{i}\) and \(b_{j}\) are time spans. Suppose \(a_{i}\) from \(\mathcal{T}_{1}\) is aligned to \(b_{j}\) from \(\mathcal{T}_{2}\). In a valid alignment, the following conditions must be satisfied: (1) any descendant of \(a_{i}\) may either align to a descendant of \(b_{j}\) or be left unaligned; (2) any ancestor of \(a_{i}\) may either align to an ancestor of \(b_{j}\) or be left unaligned; (3) any descendant of \(b_{j}\) may either align to a descendant of \(a_{i}\) or be left unaligned; (4) any ancestor of \(b_{j}\) may either align to an ancestor of \(a_{i}\) or be left unaligned.
Given a Boolean matrix \(\mathbf{A}\), where \(A_{i,j}\!=\!1\) denotes that \(a_{i}\) aligns to \(b_{j}\), we compute the structured average IoU between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) over \(\mathbf{A}\) by
\[\text{SAIoU}(\mathcal{T}_{1},\mathcal{T}_{2};\mathbf{A})=\frac{2}{n+m}\left(\sum_{i=1}^{n}\sum_{j=1}^{m}A_{i,j}\,\text{IoU}(a_{i},b_{j})\right),\]
and the final evaluation result is obtained by maximizing the SAIoU score across all valid alignments. The calculation of the optimal SAIoU score can be done within \(\mathcal{O}(n^{2}m^{2})\) time by dynamic programming.
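A minimal sketch of the metric is given below: it computes the temporal IoU of two spans and the SAIoU score for a *given* alignment matrix \(\mathbf{A}\). The search over valid alignments (the dynamic program mentioned above) is omitted, and all names and spans are illustrative:

```python
import numpy as np

def span_iou(a, b):
    """Temporal IoU of two spans a = (start, end), b = (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def saiou(spans1, spans2, A):
    """SAIoU for a given Boolean alignment matrix A (A[i, j] = 1 iff
    span i of tree 1 is aligned to span j of tree 2).  The maximization
    over valid alignments is carried out separately."""
    n, m = len(spans1), len(spans2)
    total = sum(A[i, j] * span_iou(spans1[i], spans2[j])
                for i in range(n) for j in range(m))
    return 2.0 * total / (n + m)

t1 = [(0.0, 2.0), (0.0, 1.0), (1.0, 2.0)]   # spans of tree 1 (seconds)
t2 = [(0.0, 2.0), (0.0, 0.9), (0.9, 2.0)]   # spans of tree 2 (seconds)
A = np.eye(3, dtype=int)                     # identity alignment
print(round(saiou(t1, t2, A), 3))
```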
### Unsupervised Word Segmentation
We validate the effectiveness of our unsupervised word segmentation approach. We first compare our improved VG-HuBERT with segment insertion to the original VG-HuBERT [9] and DPDP [46], a speech-only word segmentation method (Table 2). We find that segment insertion improves recall and hurts precision, and achieves the highest \(F_{1}\) score.
Next, we compare MBR-based and supervised decoding. For efficiency in practice, we implement MBR-based decoding as follows: we first run a pilot hyperparameter selection, performing word segmentation with all candidate hyperparameter sets on the SpokenCOCO validation set, and subsequently keep the \(10\) most frequently selected sets of hyperparameters for another round of MBR selection on the training set.
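Conceptually, MBR selection keeps the candidate output whose average agreement with all other candidates is highest (equivalently, whose expected risk is lowest). The sketch below is a generic illustration of this idea rather than the exact implementation; the agreement function (e.g. boundary \(F_{1}\) or span overlap) is supplied by the caller, and the toy data are made up:

```python
def mbr_select(candidates, agreement):
    """Minimum-Bayes-risk selection: return the index of the candidate
    whose average agreement with all other candidates is highest.

    candidates: list of candidate outputs (e.g. word segmentations),
                one per hyperparameter setting.
    agreement:  function g(x, y) -> float, higher = more similar.
    """
    best_idx, best_score = 0, float("-inf")
    for i, ci in enumerate(candidates):
        others = [agreement(ci, cj)
                  for j, cj in enumerate(candidates) if j != i]
        score = sum(others) / max(1, len(others))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# toy usage: candidates are sets of boundary times, agreement = Jaccard
cands = [{0.3, 0.9, 1.5}, {0.3, 0.9}, {0.3, 0.9, 1.5, 2.0}]
jac = lambda a, b: len(a & b) / len(a | b)
print(mbr_select(cands, jac))   # -> 0
```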
For German word segmentation, we employ identical models and settings as those used for English, as [50] has shown that the word segmentation capability of English VG-HuBERT demonstrates cross-lingual generalization without any adaptation. On German Multi30K, our method achieves an \(F_{1}\) score of \(37.46\) with MBR, which outperforms that of supervised hyperparameter tuning (\(36.45\)).
### Unsupervised Phrase Structure Induction
We quantitatively show that AV-NSL learns meaningful phrase structure given word segments (Table 1). The best performing AV-NSL is based on our improved VG-HuBERT with MBR top 10 selection for word segmentation, VG-HuBERT layers as the segment representations, and another MBR decoding over phrase structure induction hyperparameters, including training checkpoints and segment representation layers. Comparing AV-NSL against AV-cPCFG and AV-cPCFG against DPDP-cPCFG, we empirically show the necessity of training AV-NSL on _continuous_ segment representation instead of discretized speech tokens, and the effectiveness of visual grounding in our overall model design.
Next, we compare the performance of AV-NSL with and without self-training (Table 3), and find that self-training with an s-Benepar backbone improves the best AV-NSL performance from 0.521 (Table 1) to 0.538.
Thirdly, Table 4 isolates phrase structure induction from word segmentation quality with oracle AV-NSL. Unlike in Table 1, we can adopt Parseval\(F_{1}\) score [48] for evaluation since there is no
| Syntax Induction | Segmentation | Seg. Representation (continuous/discrete) | Output Selection | SAIoU |
| --- | --- | --- | --- | --- |
| Right-Branching | VG-HuBERT+MBR\({}_{10}\) | N/A | N/A | **0.546** |
| Right-Branching | DPDP | N/A | N/A | 0.478 |
| AV-cPCFG | VG-HuBERT+MBR\({}_{10}\) | VG-HuBERT\({}_{10}\)+4k km (discrete) | last ckpt. (supervised) | 0.499 |
| AV-cPCFG | VG-HuBERT+MBR\({}_{10}\) | VG-HuBERT\({}_{10}\)+8k km (discrete) | last ckpt. (supervised) | 0.481 |
| DPDP-cPCFG | DPDP | HuBERT\({}_{2}\)+2k km (discrete) | last ckpt. (supervised) | 0.465 |
| AV-NSL | VG-HuBERT+MBR\({}_{10}\) | VG-HuBERT\({}_{10}\) (continuous) | MBR over 10\({}^{\text{th}}\) layer | 0.516 |
| AV-NSL | VG-HuBERT+MBR\({}_{10}\) | VG-HuBERT\({}_{10,11,12}\) (continuous) | MBR over \(\{10^{\text{th}},11^{\text{th}},12^{\text{th}}\}\) layer | 0.521 |

Table 1: Fully-unsupervised English phrase structure induction results on SpokenCOCO. Subscripts denote layer number, e.g. HuBERT\({}_{10}\) denotes the 10\({}^{\text{th}}\) layer representation from HuBERT. We list the best-performing hyperparameters for each modeling choice.
| Method | Decoding | Precision | Recall | \(F_{1}\) |
| --- | --- | --- | --- | --- |
| DPDP [46] | supervised | 17.37 | 9.00 | 11.85 |
| VG-HuBERT [9] | supervised | **36.19** | 27.22 | 31.07 |
| VG-HuBERT w/ seg. ins. (ours) | supervised | 34.34 | 29.85 | 31.94 |
| VG-HuBERT w/ seg. ins. (ours) | MBR | 33.31 | **34.90** | **34.09** |

Table 2: English word segmentation results on the SpokenCOCO validation set. Supervised decoding methods require an annotated development set to choose the best hyperparameters. The best number in each column is in boldface. VG-HuBERT with segment insertion and MBR decoding achieves the best boundary \(F_{1}\).
| Segment Representation | Output Selection | SAIoU |
| --- | --- | --- |
| HuBERT | last ckpt. | **0.538** |
| HuBERT\({}_{2,4,6,8,10,12}\) | MBR | 0.536 |

Table 3: Results of self-training with s-Benepar, trained on outputs from the best AV-NSL model (SAIoU 0.521) from Table 1. Inputs to s-Benepar are segment-level HuBERT representations instead of VG-HuBERT representations.
mismatch in the number of tree nodes. With proper segment-level representations, unsupervised oracle AV-NSL matches or outperforms text-based VG-NSL. Similarly to Table 3, self-training with s-Benepar on oracle AV-NSL trees further improves the syntax induction results, almost matching that of right-branching trees.
Perhaps surprisingly, right-branching trees (RBTs) with oracle and VG-HuBERT word segmentation reach the best English \(F_{1}\) and SAIoU scores on SpokenCOCO, respectively (Tables 4 and 1). We note that RBTs align closely with the head-initiality of English [51], especially in our setting where all punctuation marks were removed. In contrast, our experiments on German show that AV-NSL outperforms both RBTs and left-branching trees in terms of SAIoU (Table 5).4
Footnote 4: For German grammar induction with oracle segmentation, oracle AV-NSL attains 33.94 \(F_{1}\) while LBT/RBT attain 26.70/25.30 \(F_{1}\) respectively.
### Analyses
**Unsupervised Constituent Recall:** Following [8], we show the recall of specific types of constituents (Table 6). While VG-NSL benefits from the head-initial (HI) bias, where abstract words are encouraged to appear in the beginning of a constituent, AV-NSL outperforms all variations of VG-NSL on all constituent categories except NP.
**Ablation Study:** We introduce three ablations to evaluate the efficacy of high-quality word segmentation, visual representation, and speech representation (Table 7). Concretely, we train AV-NSL with the following modifications:
1. Given the number of words \(n\), we divide the speech utterances uniformly into \(n\) chunks to get the word segmentation, and use the same visual representations as AV-NSL.
2. We replace visual representations with random vectors, where each pixel is independently sampled from a uniform distribution, and use the oracle word segmentation.
3. We replace the self-supervised speech representations (HuBERT) with log-Mel spectrograms.
We observe significant performance drops in all settings, compared to oracle AV-NSL. This set of results complements Table 1, stressing that precise word segmentation and both high-quality visual and speech representations are all necessary for phrase structure induction from speech.
## 5 Conclusion and Discussion
Previous research has achieved notable progress in zero-resource speech processing and grammar induction by employing multimodal techniques. In our study, we propose an approach to model human language acquisition that leverages the visual modality to acquire language competence. Our approach, AV-NSL, encompasses the extraction of word-level representations from speech and the derivation of syntactic structures from those representations, thereby eliminating the reliance on text. Through quantitative and qualitative analyses, we demonstrate on both English and German that our proposed model successfully infers meaningful constituency parse trees based on continuous word segment representations. Our work represents the initial step in grammar induction within textless settings, paving the way for future research endeavors, which include but are not limited to (1) building end-to-end models that take spoken utterances and produce their syntactic analysis, (2) understanding the contribution of various grounding signals to grammar induction, and (3) modeling human language acquisition in grounded environments.
| Model | \(F_{1}\) | NP | VP | PP | ADJP |
| --- | --- | --- | --- | --- | --- |
| VG-NSL [8] | 50.4 | **79.6** | 26.2 | 42.0 | 22.0 |
| VG-NSL + HI | 53.3 | 74.6 | 32.5 | 66.5 | 21.7 |
| VG-NSL + HI + FastText | 54.4 | 78.8 | 24.4 | 65.6 | 22.0 |
| oracle AV-NSL | **55.5** | 55.5 | **68.1** | **66.6** | **22.1** |

Table 6: Recall of specific typed phrases, incl. noun phrases (NP), verb phrases (VP), prepositional phrases (PP) and adjective phrases (ADJP), and overall \(F_{1}\) score, evaluated on SpokenCOCO test set. VG-NSL numbers are taken from [8].
| Syntax Induction | Seg. Representation | Output Selection | \(F_{1}\) |
| --- | --- | --- | --- |
| Right-Branching | N/A | N/A | **57.39** |
| VG-NSL | word embeddings | Supervised | 53.11 |
| oracle AV-NSL | HuBERT\({}_{2}\) | Supervised | 55.51 |
| oracle AV-NSL \(\rightarrow\) s-Benepar | HuBERT\({}_{2}\) | MBR | 57.24 |

Table 4: ParsEval \(F_{1}\) scores given oracle segmentation. The best number is in boldface.
| Induction | Segmentation | Output Selection | SAIoU |
| --- | --- | --- | --- |
| Right-Branching | VG-HuBERT+MBR\({}_{10}\) | N/A | 0.456 |
| Left-Branching | VG-HuBERT+MBR\({}_{10}\) | N/A | 0.461 |
| AV-NSL | VG-HuBERT+MBR\({}_{10}\) | MBR | **0.487** |

Table 5: Phrase structure induction results on the German Multi30K test set. The best number is in boldface. |
2302.08776 | A comparison of millisecond pulsar populations between globular clusters
and the Galactic field | We have performed a systematic study of the rotational, orbital and X-ray
properties of millisecond pulsars (MSPs) in globular clusters (GCs) and
compared their nature with those of the MSPs in the Galactic field (GF). We
found that GC MSPs generally rotate slower than their counterparts in the GF.
Different from the expectation of a simple recycling scenario, no evidence for
the correlation between the orbital period and the rotation period can be found
from the MSP binaries in GCs. There is also an indication that the surface
magnetic field of GC MSPs are stronger than those in the GF. All these suggest
dynamical interactions in GCs can alter the evolution of MSPs/their progenitors
which can leave an imprint on their X-ray emission properties. While the MSPs
in both GF and GCs have similar distributions of X-ray luminosity and hardness,
our sample supports the notion that these two populations follow different
relation between the X-ray luminosity and spin-down power. We discuss this in
terms of both pulsar emission model and the observational bias. | Jongsu Lee, C. Y. Hui, J. Takata, A. K. H. Kong, Pak-Hin Thomas Tam, Kwan-Lok Li, K. S. Cheng | 2023-02-17T09:26:33Z | http://arxiv.org/abs/2302.08776v1 | # A comparison of millisecond pulsar populations between globular clusters and the Galactic field
###### Abstract
We have performed a systematic study of the rotational, orbital and X-ray properties of millisecond pulsars (MSPs) in globular clusters (GCs) and compared their nature with those of the MSPs in the Galactic field (GF). We found that GC MSPs generally rotate slower than their counterparts in the GF. Different from the expectation of a simple recycling scenario, no evidence for the correlation between the orbital period and the rotation period can be found from the MSP binaries in GCs. There is also an indication that the surface magnetic field of GC MSPs are stronger than those in the GF. All these suggest dynamical interactions in GCs can alter the evolution of MSPs/their progenitors which can leave an imprint on their X-ray emission properties. While the MSPs in both GF and GCs have similar distributions of X-ray luminosity and hardness, our sample supports the notion that these two populations follow different relation between the X-ray luminosity and spin-down power. We discuss this in terms of both pulsar emission model and the observational bias.
globular clusters: general -- stars: binaries: general -- pulsars: general -- X-rays: general +
Footnote †: journal: ApJ
## 1 Introduction
There is a consensus that millisecond pulsars (MSPs) are formed through the angular momentum transfer from their binary companions (Alpar et al., 1982; Radhakrishnan & Srinivasan, 1982; Fabian et al., 1983). The first MSP, PSR B1937+21 was discovered by Backer et al. (1982). In comparison with the non-recycled canonical pulsars, MSPs are characterized by their fast rotation (\(P\lesssim 20\) ms) and weak surface magnetic fields (\(B_{s}\lesssim 10^{9}\) G). Thanks to the extensive surveys and the synergy of multiwavelength observations (see Hui, 2018, for a review), the currently known MSP population has reached a size of \(\sim 600\).
X-ray emission of MSPs are believed to be originated from the backflow charged particles from the acceleration regions in their magnetospheres (e.g. Zhang & Cheng, 2003). While the relativistic electron/positron cascades emit the non-thermal synchrotron X-rays when they gyrate along the magnetic field lines, thermal X-ray emission can also be generated when these energetic particles follow the open magnetic field lines and deposit their energies on the stellar surface (e.g. Zavlin, 2007; Bogdanov & Grindlay, 2008). For the MSPs reside in the compact binaries, additional X-ray emission component can be resulted from the intrabinary shock (e.g. Huang et al., 2012; Hui et al., 2014).
According to their locations in our Galaxy, MSPs can be divided into two groups: the Galactic field (GF) population and the globular cluster (GC) population. For the X-ray properties of MSPs in the GF, Lee et al. (2018) have conducted a systematic survey. With a left-censored sample of 47 detections and 36 upper limits of their X-ray luminosities \(L_{x}\), an empirical relation between \(L_{x}\) and the spin-down power \(\dot{E}\) has been found to be \(L_{x}\simeq 10^{31.05}\left(\dot{E}/10^{35}\right)^{1.31}\) erg s\({}^{-1}\) in 2-10 keV. The inferred X-ray conversion efficiency is lower than the previous estimate in the same energy band (e.g. Possenti
et al., 2002) which was subjected to selection bias with the upper-limits excluded in the previous works.
The X-ray properties of different types of MSPs in the GF have also been compared by Lee et al. (2018). The X-ray emission from the redbacks (RBs), which are characterized by their tight orbits with orbital period \(P_{b}\lesssim 1\) day and their non-degenerate late-type companions (see Hui & Li, 2019, for an updated review), are found to be generally harder and more luminous than the other classes. This can be accounted for by the contribution of their intrabinary shocks in the X-ray production (see discussion in Lee et al., 2018).
For the progenitors of MSPs, namely the low-mass X-ray binaries (LMXBs), their formation rate per unit mass in GCs is known to be orders of magnitude higher than that in the GF (Katz et al., 1975; Clark, 1975). This can be attributed to the frequent dynamical interactions in the central regions of GCs (Hui et al., 2010; Turk & Lorimer, 2013). In some GCs, the stellar density can be high enough that multiple interactions of the binaries can occur (Verbunt & Freire, 2014). With such complications, the evolution of compact binaries in GCs can possibly be different from that in the GF. Hence, it is not unreasonable to speculate that the characteristics of the MSPs, including the rotational, orbital and X-ray properties, in these two populations can be different.
Thanks to the sub-arcsecond angular resolution of the _Chandra_ X-ray Observatory, X-ray point sources can be resolved from the dense cores of GCs (e.g. Heinke et al., 2005; Bhattacharya et al., 2017; Bahramian et al., 2020; Oh et al., 2020). This enables us to identify the X-ray counterparts of MSPs by matching their radio timing positions with the X-ray source positions. With this sample, we can compare the X-ray properties of GC MSPs with their counterparts in the GF.
In this study, we first collected the updated samples of both X-ray and radio selected GC MSPs and normalized their X-ray properties. These allow us to conduct a systematic analysis and compare their properties with those in the GF, in order to investigate whether there is any difference between these two populations of MSPs.
## 2 Data Collection and Normalization
To be consistent with Lee et al. (2018), we define MSPs as the pulsars with rotational period \(P<20\) ms in this work. With this criterion, a sample of 204 radio selected GC MSPs are collected from the online catalogue compiled by P. Freire.1 On the other hand, we obtained an updated radio selected sample of 386 MSPs in the GF from the online catalogue maintained by West Virginia University. 2 The rotation period \(P\) and the orbital period \(P_{b}\) of the MSPs are also collected from these two catalogues. For obtaining the reliable estimates of spin-down rate \(\dot{P}\) from a sub-sample of GC MSPs, please refer to Section 4.
Footnote 1: [http://www.naic.edu/](http://www.naic.edu/)\(\sim\)pfreire/GCpsr.html
To identify the X-ray counterparts of GC MSPs, we consider the sources detected by the Advanced CCD Imaging Spectrometer (ACIS) onboard _Chandra_. With its sub-arcsecond spatial resolution, ACIS enables the X-ray counterparts to be resolved from the crowded environment in GCs and provide their temporal and spectral information. The information of ACIS observations of MSP-hosting GCs are summarized in Table 1.
We found that 56 GC MSPs have their X-ray counterparts previously reported in the literature. Their properties and the relevant literature are summarized in Table 2. In order to compare the X-ray properties of GF MSPs reported by Lee et al. (2018), we have to normalize the data with the same procedures adopted in their study. In the following, we describe the strategy for collecting the X-ray parameters in this work.
If a source has its X-ray spectrum characterized by an absorbed power-law (PL) model and with the spectral parameters reported in the existing literature, we adopt these reported properties in our work. However, since different studies have adopted different energy ranges in their X-ray analyses, it is necessary to normalize our X-ray fluxes of GC MSPs in the same energy band.
With the aid of PIMMS, we computed the absorption-corrected X-ray fluxes \(f_{x}\) by integrating the spectral model in two energy ranges: 0.3-8 keV and 2-10 keV. While 0.3-8 keV is a standard band for analysing _Chandra_ ACIS data, \(f_{x}\) in 2-10 keV allow us to compare with those of GF MSPs reported by Lee et al. (2018). Using the distances of the GCs \(d\) (see Table 2), we computed the X-ray luminosities as \(L_{x}=4\pi d^{2}f_{x}\).
This method allows us to obtain \(L_{x}\) and the effective X-ray photon indices \(\Gamma\) for the X-ray emitting MSPs in M13, M62, NGC 6397, Terzan5, and M22. Their parameters and the corresponding references are given in Table 2.
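For reference, the luminosity step is simply \(L_{x}=4\pi d^{2}f_{x}\); a minimal sketch with astropy units is shown below (the flux and distance values are illustrative placeholders, not measurements from Table 2):

```python
import numpy as np
from astropy import units as u

def xray_luminosity(flux_cgs, distance_kpc):
    """L_x = 4 * pi * d^2 * f_x for an absorption-corrected flux."""
    f = flux_cgs * u.erg / u.cm**2 / u.s
    d = (distance_kpc * u.kpc).to(u.cm)
    return (4.0 * np.pi * d**2 * f).to(u.erg / u.s)

# e.g. a 1e-14 erg/cm^2/s source in a cluster at 4.5 kpc (made-up numbers)
print(xray_luminosity(1e-14, 4.5))   # ~2.4e31 erg / s
```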
For the sources that do not fulfill the criteria above, we analyzed the data directly by using CIAO (v.4.12). All the data were first reprocessed by using the chandra_repro script with updated calibration (CALDB v.4.9.2.1) and were filtered in the 0.3-8 keV energy band using the dmcopy task. All the data were reprocessed with subpixel event repositioning in order to
facilitate a high angular resolution analysis. For the GCs with more than one observation, we first combined the data at different epochs by the merge_obs script. The images were subsequently produced with a binning factor of 0.5. By running a wavelet detection algorithm (wavdetect) on the merged images with a range of scales (1.0, 1.414, 2.0, 2.828, 4.0), X-ray counterparts of the GC MSPs were identified if the sources were detected at a significance larger than 3\(\sigma\) at the radio timing positions.
The X-ray spectra of these counterparts were extracted by using specextract in each individual observation. All the response files were generated by the same tool. The source extraction regions were selected so as to minimize the contamination of nearby sources. And the background regions were sampled in the circular source-free regions around the GCs with the radii in a range of 10-20 arcsec. All the spectral fittings in this work were performed in 0.3-8 keV with XSPEC (v.12.9). In view of low-counts data for most of the cases, all the analyses were performed with Cash statistics (Cash, 1979), which enables us to perform fitting with unbinned data (cf. Eq. 7 in Cash, 1979). This should give us less biased results than the binned analysis. If a source has been observed more than once (see Table 1), its spectra obtained from different observations were simultaneously fitted so as to obtain tighter constraints on its X-ray properties.
In order to better constrain the spectral parameters and hence the X-ray fluxes, we took the column absorption \(N_{H}\) as a fixed parameter throughout our analysis. If \(N_{H}\) has been reported in the literature, the value is adopted for spectral fitting and computing the absorption-corrected flux. Otherwise, \(N_{H}\) were estimated from the optical extinction \(E(B-V)\) of the GCs (Harris, 1996) through the correlation between these two quantities (Guver & Ozel, 2009).
All the spectra were fitted with a simple absorbed PL model with XSPEC (i.e. TBABS \(\times\) POWERLAW). With the multiplicative component CFLUX ( TBABS\(\times\)CFLUX\(\times\)POWERLAW), we obtained a robust estimate of the unabsorbed flux as well as its 1\(\sigma\) uncertainty in both 0.3-8 keV and 2-10 keV energy bands.
With the aforementioned procedures, we have identified 56 confirmed X-ray detections of MSPs in 12 GCs and obtained the normalized estimates of their \(L_{x}\) and \(\Gamma\) (Table 2). The sample size is comparable with the X-ray selected MSPs found in the GF (Tab. 1 in Lee et al., 2018). The updated statistics of these radio and X-ray selected MSPs in different GCs are shown in the upper panel of Figure 1. Following Lee et al. (2018), we divided the MSPs into four different classes, isolated (Iso), red-back (RB), black widow (BW) and non-"spider" binaries (Oth). Their corresponding fractions of the radio/X-ray selected samples in the GF and GCs are shown in the lower panel of Figure 1.
Among our X-ray selected MSPs in 12 GCs, the samples in 7 GCs (i.e. M4, M13, M62, NGC 6397, Terzan5, M28, and M22) have also been covered by the catalogue compiled by Bahramian et al. (2020). This allows us to cross-check the validity of our results. Within the tolerance of the statistical uncertainties, our estimates are found to be consistent with those given in Bahramian et al. (2020).
We found that 60 additional GC MSPs with known radio timing positions have been covered by the archival _Chandra_ ACIS data serendipitously. This enables us to also search for their X-ray counterparts. Using the same procedures of data reduction and source detection described above, we did not find any additional X-ray emitting GC MSPs with detection significance larger than 3\(\sigma\) in the merged images.
Despite the non-detections, the archival data still allow us to constrain the limiting \(L_{x}\) of these 60 GC MSPs. In examining the \(L_{x}-\dot{E}\) relation for the MSPs in the GF, Lee et al. (2018) have shown that a less biased relation can be obtained from a survival analysis with the upper-limits of \(L_{x}\) included (see Section 4). To obtain the 1\(\sigma\) limiting fluxes, we assumed a simple PL model with \(\Gamma\)=2 and \(N_{H}\) adopted from the literature or inferred from the \(E(B-V)\) of the corresponding GC. Together with the distances towards these GCs, we placed 1\(\sigma\) limiting luminosities of these additional 60 GC MSPs in \(0.3-8\) keV and \(2-10\) keV. The results are summarized in Table 3.
## 3 Variability Analysis
Apart from the periodic variations across the orbit resulting from different causes (e.g. intrabinary shock, eclipse of the emission region, heating of the companion surface), secular changes can also occur in a pulsar. The discoveries of RBs show that the properties of MSPs can vary considerably at different wavelengths as they switch between the rotation-powered and accretion-powered states (e.g. Papitto et al., 2013; Takata et al., 2014). On the other hand, evidence of variable X-ray/\(\gamma\)-ray emission has also been observed from some isolated pulsars in the GF (e.g. Lin et al., 2021; Takata et al., 2020; Hermsen et al., 2013).
All these indicate that emission from a pulsar might not be as stable as previously thought. Since a number of GC MSPs in our sample have been observed by _Chandra_ more than once, we are able to characterize their X-ray variabilities.
Bahramian et al. (2020) have included the results of variability test (i.e. \(p-\)values for Kolmogorov-Smirnov
(K-S) test) for 7 out of 12 GCs in our sample. Among all 56 X-ray counterparts of GC MSPs in Table 2, 6 sources have \(p-\)values \(<0.05\) for the K-S test as reported in Bahramian et al. (2020) (M62 B, NGC6397 A, M28 A, M28 I, M28 L and Terzan5 P). This indicates the possible variable X-ray emission from these sources.
Our variability analysis was divided into two parts: (1) long-term variability search and (2) short-term variability search. For (1), we searched for the possible X-ray flux variations of the targets among observations in different epochs. For (2), we searched for the possible variability within a single observation.
For the long-term variability analysis, we only consider the observations in which the X-ray counterparts are detected with a significance \(>3\sigma\) in a single exposure. In order to compare the \(f_{x}\) of a given target in different epochs, we fitted its X-ray spectra obtained from individual observations. The response files generated in each observations can account for the possible instrumental variation among them. By fitting a simple absorbed PL model (with \(N_{H}\) fixed) for each spectra, we obtained the estimates of absorption-corrected \(f_{x}\) of a source in different epochs. Using these estimates, we constructed the long-term background-subtracted X-ray light curves for the subsequent analysis.
In order to identify the candidates that demonstrated long-term X-ray variability, we employed the Bayesian block algorithm that generates the optimal adaptive-width blocks (Scargle et al., 2013). Even if the sequential data are not evenly sampled, the block-wise representation generated by this method can help to indicate local variability (e.g. Ahnen et al., 2016). Using the routine modeling.bayesian_blocks.bayesian_blocks in the Python library HEPSTATS, we have identified 10 sources that require more than one block in modeling their long-term X-ray light curves. These are 47Tuc E, 47Tuc W, NGC6397 A, NGC6752 F, M28 A, M28 I, M28 L, Terzan5 A, Terzan5 P and Terzan5 ad.
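A minimal sketch of this step is shown below, using the equivalent bayesian_blocks routine from astropy.stats as a stand-in for the HEPSTATS implementation; the epochs, fluxes and errors are made-up placeholders. Sources whose light curves require more than one block are flagged as variability candidates:

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Illustrative long-term light curve: epochs (MJD), absorption-corrected
# fluxes (erg/cm^2/s) and their 1-sigma errors -- all made-up numbers.
t = np.array([52300.1, 53500.7, 54688.3, 55680.2, 57100.5, 57959.8])
f = np.array([2.1e-15, 2.0e-15, 2.3e-15, 2.2e-15, 9.8e-15, 8.9e-15])
df = 0.2 * f

# Optimal adaptive-width block representation (Scargle et al. 2013) for
# point measurements with errors; more than one block flags the source as
# a long-term variability candidate, to be vetted with the Kuiper test.
edges = bayesian_blocks(t, f, df, fitness="measures")
print("number of blocks:", len(edges) - 1)
```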
To scrutinize the significance of these variability candidates, we used two-sample Kuiper test (Stephens, 1970) to compare their light curves with the uniform distributions determined by their corresponding mean fluxes. For this analysis, we utilized the routine kuiper_twoside in ASTROPY package (v.5.1). In this work, we consider a source has possible long-term X-ray variability if the \(p-\)value inferred from Kuiper test is \(<0.05\). We found that only two sources from the short-list obtained from the Bayesian block analysis, M28 I and NGC6752 F, fulfill this criterion. Their long-term X-ray light curves are shown in Figure 2 with the identified Bayesian blocks illustrated. In the followings, we describe their temporal behaviors in further details.
M28 I (IGR J18245-2452), which is a RB has been found to swing between accretion-powered state and rotation-powered state (Papitto et al., 2013), is the most significantly variable X-ray source in our sample (\(p\sim 6.8\times 10^{-24}\)). This is consistent with the results reported by Linares et al. (2014) which has presented a detailed analysis of this source.
The X-ray counterpart of the isolated MSP NGC6752 F can be detected in 6 out of 7 archival _Chandra_ observations. The non-detection in the observation on 2017 July 25 (MJD 57959.83) can be ascribed to its relatively short exposure time (\(\sim\)18 ks). For this epoch, we placed a \(1\sigma\) limiting \(L_{x}\) of \(3.9\times 10^{30}\) erg/s and \(1.9\times 10^{30}\) erg/s in 0.3-8 keV and 2-10 keV respectively. In most of these observations, NGC6752 F behaves as a steady X-ray source except for the recent observation in 2017. In this epoch, its \(L_{x}\) is found to increase by a factor of \(\sim 5\) in comparison with its previous level (Figure 2). Kuiper test gives a \(p-\)value of 0.011 and suggests the variability can be significant. We further investigated whether such X-ray flux variation can be contaminated by the nearby bright sources. One MSP (NGC6752 D) and two cataclysmic variables (CVs) (CX 1 & CX 5 in Forestell et al., 2014) are bright sources located at \(\sim 4.6"\),\(\sim 7.4"\), and \(\sim 6.5"\) away from NGC6752 F respectively. There is no evidence of long-term X-ray variation found for NGC6752 D. Moreover, we do not find any resemblance between the long-term X-ray variation of NGC6752 F and its nearby CVs. Therefore, we concluded that the long-term X-ray variation of NGC6752 F is unlikely a result from the contamination of these bright sources.
We have also searched for possible short-term variability within each observation window by utilizing the Gregory-Loredo variability algorithm (Gregory and Loredo, 1992), which computes the odds ratio that the arrival times are not uniformly distributed in time. The algorithm is implemented in the CIAO tool glvary, which assigns a variability index according to the odds ratio. In this work, we set the criterion that a source demonstrates variability within a single observation if the inferred variability index is larger than 6. This implies the probability of the source being variable is \(\gtrsim 90\%\). To avoid false alarms resulting from fluctuations due to low count statistics, we only consider the cases with more than 50 counts.
Using the unbinned event lists and the corresponding effective areas, we have identified 3 GC MSPs, M28 I, NGC6397 A and Terzan5 P, which satisfy the aforementioned criterion in 6 observations. Their background-subtracted light curves are shown in Figure 3. The binning of these light curves was determined by glvary, which gives rise to the optimal variability.
Short-term X-ray variability of M28 I can be found in 3 observations of this cluster (Obs.ID: 9132, 9133 and 14616). All these observations were performed when M28 I was in the accretion-powered state. Its light curves on 2008 August 7 (Obs.ID: 9132) and 2008 August 10 (Obs.ID: 9133) show that the system was switching abruptly between the low state (with a count rate of 0.02 count/s in 2-10 keV) and the high state (\(>0.02\) count/s in 2-10 keV). These are consistent with the findings reported by Linares et al. (2014), which suggest this can originate from the change of the magnetospheric radius due to the fluctuation of the accretion flow. On the other hand, its light curve on 2013 April 28 (Obs.ID: 14616) also shows X-ray variability, though it was less dramatic than in the other two epochs. We note that this observation was close to the end of the thermonuclear outburst (Linares et al., 2014) and less than five days before the source was found to switch back to the rotation-powered state (Papitto et al., 2013).
For the RBs Terzan5 P and NGC6397 A, their X-ray flux variations can originate from the intrabinary shocks. Short-term X-ray variability of NGC6397 A can be identified in a single observation on 2007 June 22 (Obs. ID: 7461). This observation started around the epoch when the pulsar was in superior conjunction (i.e. when the companion was located between the neutron star and the observer). The X-ray flux of the system began to rise gradually when the system was moving away from this phase. This is consistent with the findings reported by Bogdanov et al. (2010). A significant orbital X-ray modulation of Terzan5 P has been reported by Bogdanov et al. (2021) recently. Among all 18 _Chandra_ ACIS observations of Terzan 5, short-term variability can be found in two observations on 2011 April 29 (Obs. ID: 13252) and 2011 September 8 (Obs. ID: 14339), which might result from fluctuations of the interaction between the pulsar wind and the stellar wind from the companion.
We noted that the orbital variabilities of Terzan5 O, Terzan5 ad and M28 H were reported by Bogdanov et al. (2021, 2011), in which all the data were folded to their orbital periods. This is different from the scope of searching short-term variabilities within a single observation window in our current work. For these MSPs, their net counts are all less than 50 in all individual observations. Since they are lower than our predefined criterion for avoiding false alarm, they were not considered in our short-term variability search.
## 4 Correlation & Regression Analysis
The spin-down power \(\dot{E}\) of a pulsar is derived from \(P\) and \(\dot{P}\), \(\dot{E}=4\pi^{2}I\dot{P}P^{-3}\), where \(P\), \(\dot{P}\), and \(I\) are the rotational period, period derivative and the moment of inertia, respectively. In examining GF MSPs, Lee et al. (2018) have found an \(L_{x}-\dot{E}\) relation of \(L_{x}\simeq 10^{31.05}\left(\dot{E}/10^{35}\right)^{1.31}\mathrm{erg\ s^{-1}}\) in 2-10 keV. It will be instructive if one can construct a corresponding relation of the GC MSPs for comparison. Different from the GF MSPs, the MSPs in a GC are affected by the acceleration due to the cluster's gravitational potential. Hence, the Doppler effect can bias the measurements of \(\dot{P}\) (e.g. Toscano et al., 1999). A large number of GC MSPs are found to have negative \(\dot{P}\) (cf. Cheng and Taam, 2003). This can complicate the estimation of the derived parameters such as \(\dot{E}\) and the surface magnetic field strength, which is estimated as \(B_{s}\simeq\sqrt{\frac{3c^{3}I}{2\pi^{2}R_{NS}^{6}}\dot{P}P}\), where \(c\) is the speed of light and \(R_{NS}\) is the radius of the neutron star which is assumed to be 10 km throughout this work.
Bogdanov et al. (2006) have adopted a King model to compute the cluster acceleration term for each MSP in 47 Tuc, and \(\dot{E}\) is calculated from the intrinsic \(\dot{P}\), which has the acceleration term subtracted. Using these estimates, the authors examined the \(L_{x}-\dot{E}\) relation in 47 Tuc and obtained \(L_{x}\propto\dot{E}^{0.24\pm 1.1}\). However, in examining the correlation between \(L_{x}\) and \(\dot{E}\) in their adopted sample by the Spearman rank correlation test, we found the correlation is very weak (\(p\)-value\(\sim 0.4\)). The large uncertainties of their best-fitted \(L_{x}-\dot{E}\) relation can be ascribed to this. While a King model provides a statistically reasonable model for the acceleration profile of a GC (King, 1962; Prager et al., 2017), a small sample in this case can be hampered by the systematic uncertainties.
For the GC MSPs in binaries, long-term radio timing provides another way of removing the contamination by the cluster acceleration. Freire et al. (2017) have measured the orbital period derivatives \(\dot{P}_{\mathrm{b,obs}}\) of 6 MSPs in 47 Tuc. The intrinsic \(\dot{P}_{\mathrm{int}}\) can be estimated by \(\dot{P}_{\mathrm{int}}=\dot{P}_{\mathrm{obs}}-\frac{\dot{P}_{\mathrm{b,obs}}}{P_{\mathrm{b}}}P\). On the other hand, Prager et al. (2017) have also measured \(\dot{P}_{\mathrm{b}}\) for 9 MSPs in Terzan 5 as well as 47Tuc J, which enables one to estimate their \(\dot{P}_{\mathrm{int}}\).
These studies of long-term radio timing allow us to form a left-censored sub-sample of 16 MSPs with reliable estimates of \(\dot{E}\) (12 X-ray detections + 4 upper limits of \(L_{x}\)). Hereafter, we refer to this sub-sample as Group A. Their \(\dot{P}_{\mathrm{int}}\) as well as the derived \(\dot{E}\) and \(B_{s}\) are summarized in Table 4.
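For reference, the sketch below collects the quantities used here: the acceleration-corrected \(\dot{P}_{\rm int}\), the spin-down power and the surface field. It adopts the conventions quoted above, an assumed moment of inertia \(I=10^{45}\) g cm\(^{2}\) (not stated explicitly in the text) and \(R_{NS}=10\) km; all inputs are in cgs units with periods in seconds, and the example values are purely illustrative:

```python
import numpy as np

I_NS = 1.0e45    # moment of inertia [g cm^2] (assumed)
R_NS = 1.0e6     # neutron-star radius [cm] (10 km, as adopted in the text)
C    = 2.998e10  # speed of light [cm/s]

def pdot_intrinsic(p, pdot_obs, pb, pbdot_obs):
    """Remove the cluster-acceleration (Doppler) term using the observed
    orbital-period derivative: Pdot_int = Pdot_obs - (Pbdot_obs / Pb) * P."""
    return pdot_obs - (pbdot_obs / pb) * p

def spin_down_power(p, pdot):
    """Edot = 4 * pi^2 * I * Pdot / P^3  [erg/s]."""
    return 4.0 * np.pi**2 * I_NS * pdot / p**3

def surface_field(p, pdot):
    """B_s = sqrt(3 c^3 I Pdot P / (2 pi^2 R^6))  [G]."""
    return np.sqrt(3.0 * C**3 * I_NS * pdot * p / (2.0 * np.pi**2 * R_NS**6))

# e.g. P = 3 ms, intrinsic Pdot = 1e-20 s/s (illustrative values)
p, pdot = 3.0e-3, 1.0e-20
print(f"Edot = {spin_down_power(p, pdot):.2e} erg/s,"
      f" B_s = {surface_field(p, pdot):.2e} G")
```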
Applying the Spearman rank test on the \(L_{x}-\dot{E}\) relation on Group A, we obtain \(p\)-values of 0.013 and \(7\times 10^{-3}\) for \(L_{x}\) in 0.3-8 keV and 2-10 keV respectively
(Table 5). These suggest a much more significant correlation than the sample adopted by Bogdanov et al. (2006).
We have also examined the correlation with a larger data set by appending to Group A those MSPs that have their King-model-corrected \(\dot{P}\) reported in the literature. This enlarges the sample size to 24 GC MSPs (20 confirmed X-ray detections and 4 upper limits of \(L_{x}\), see Table 4). We refer to this sample as Group B hereafter. The Spearman rank test on this group yields \(p\)-values of \(10^{-3}\) and \(2\times 10^{-4}\) for \(L_{x}-\dot{E}\) in 0.3-8 keV and 2-10 keV respectively (Table 5). Compared with Group A, the improved significance of Group B is due to the enlarged sample.
On the other hand, the \(\dot{P}\) of M28 A (\(\dot{P}=1.62\times 10^{-18}\)) is larger than those of the other GC MSPs by orders of magnitude. It is reasonable to argue that the effect of cluster acceleration on its \(\dot{P}\) is not significant. Therefore, we further expanded Group B by including M28 A, and we refer to this sample as Group C hereafter. The Spearman rank test on this group yields \(p\)-values of \(2\times 10^{-4}\) and \(4\times 10^{-5}\) for \(L_{x}-\dot{E}\) in 0.3-8 keV and 2-10 keV respectively (Table 5), which indicate a significant correlation between these two parameters.
Apart from the improvement of the correlation, the uncertainties of \(\dot{P}\) and hence \(\dot{E}\) in our adopted sample are also reduced. The averaged percentage error of \(\dot{P}\) for the 47 Tuc MSPs, as given by Freire et al. (2017) based on long-term pulsar timing, is \(\sim 65\%\). This is smaller than that in the sample adopted by Bogdanov et al. (2006) (i.e. \(\sim 75\%\)). For the sample from Terzan 5, Prager et al. (2017) do not provide any error estimates for their \(\dot{P}\). Since they are also obtained through long-term timing, we assumed the average percentage errors of the samples in Prager et al. (2017) are comparable with that in Freire et al. (2017) and computed the uncertainties accordingly. Combining with the other King model corrected \(\dot{P}\), the overall averaged percentage error in Group B is found to be \(68\%\). With M28 A included, the overall averaged percentage error in Group C becomes \(65\%\).
The stronger correlation and the reduced uncertainties in our current sample prompt us to re-examine the \(L_{x}-\dot{E}\) relation for GC MSPs and compare with their counterparts in the GF. In this work, all the regression analyses were performed in the framework of Bayesian inference. Instead of giving the point estimates, the posterior distributions of the parameters are reported so as to alleviate biases resulted from the small sample size. We adopt the R-package LIRA(Sereno, 2016) for our analyses, which not only allows an ordinary linear regression but also enables a survival regression analysis with the upper-limits of \(L_{x}\) taken into account. In Bayesian framework, the conditional probability of a measurement \(x\) with the given model \(X\) is denoted by \(P(x|X)\). In our analysis, \(P(x|X)\) is proportional to a Gaussian for the detections. For the case that the observational results are expressed as upper limits, the conditional probability is truncated by a Heaviside function \(H(x-x_{ul})\) where \(x_{ul}\) is the upper limit for the left-censored data points (cf. Appendix in Willis et al., 2021). These treatments of the upper-limits are implemented in LIRA. This can result in a less biased estimate for the \(L_{x}-\dot{E}\) relation.
For the linear regression with a form of \(\log L_{x}=\alpha\log\dot{E}+\beta\), the measurement uncertainties in both independent and dependent variables are taken into account. We assume a Student \(t\)-distribution and a uniform distribution as the priors for \(\alpha\) and \(\beta\) respectively. A 2D posterior probability distribution of these parameters was inferred through Markov chain Monte Carlo (MCMC). We have used four parallel chains with \(2\times 10^{6}\) iterations on each. The first \(1\%\) of the samples from each chain were set as the initial burn-in. With these adaptation iterations excluded, all the other samples are used to approximate the posterior probability distribution.
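The sketch below is a much-simplified, illustrative stand-in for the LIRA analysis (flat priors instead of the Student \(t\)/uniform priors, \(x\)-uncertainties ignored, and a single short chain). It shows how the censored likelihood can be handled: detections contribute a Gaussian term and \(L_{x}\) upper limits contribute a Gaussian-CDF (survival) term, mimicking the truncated likelihood described above. All names and the toy data are our own:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def log_like(theta, x, y, yerr, is_ul):
    """log L_x = alpha * x + beta with intrinsic scatter sigma,
    where x = log10(Edot / 1e35) and y = log10(L_x)."""
    alpha, beta, log_sig = theta
    mu = alpha * x + beta
    s = np.sqrt(yerr**2 + np.exp(log_sig)**2)
    det = norm.logpdf(y[~is_ul], mu[~is_ul], s[~is_ul]).sum()   # detections
    ul = norm.logcdf(y[is_ul], mu[is_ul], s[is_ul]).sum()       # upper limits
    return det + ul

def mh_chain(x, y, yerr, is_ul, n_iter=20000, step=0.05):
    """Random-walk Metropolis sampler with flat priors on all parameters."""
    theta = np.array([1.0, 31.0, np.log(0.3)])
    lp = log_like(theta, x, y, yerr, is_ul)
    chain = np.empty((n_iter, 3))
    for k in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        lp_prop = log_like(prop, x, y, yerr, is_ul)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain[n_iter // 10:]          # drop the burn-in

# toy usage with synthetic data
x = rng.uniform(-1.5, 0.5, 20)
y = 1.3 * x + 31.0 + rng.normal(0, 0.3, 20)
yerr = np.full(20, 0.1)
is_ul = np.zeros(20, dtype=bool)
is_ul[:4] = True                          # treat 4 points as upper limits
post = mh_chain(x, y, yerr, is_ul)
print("alpha ~", post[:, 0].mean(), " beta ~", post[:, 1].mean())
```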
Since our aim is to compare \(L_{x}-\dot{E}\) relation between the MSPs in GC and GF, we analysed the data in both populations with the same procedures as aforementioned. For GC MSPs, we have started with the complete censored sample of Group C (i.e. with upper-limits of \(L_{x}\) taken into account) as given in Table 4. For GF MSPs, we adopted the censored samples given by the Tables 1 and 2 in Lee et al. (2018).
The comparisons of the marginalized posterior distributions of \(\alpha\) and \(\beta\) inferred from Group C and the GF population are shown in Figure 4. For GF MSPs, the distribution of the slope (i.e. \(\alpha\)) is found to be peaked around \(\sim 0.9\) and \(\sim 1.2\) in 0.3-8 keV and 2-10 keV respectively. The latter one is consistent with the point estimate reported by Lee et al. (2018) in the same energy band. On the other hand, the posterior distribution of \(\alpha\) inferred from the GC MSPs in Group C is peaked around \(\sim 0.6\) and \(\sim 0.8\) in 0.3-8 keV and 2-10 keV respectively. The comparison of the \(L_{x}-\dot{E}\) relation in these two populations suggests a possible difference.
The plots of the \(L_{x}-\dot{E}\) relations in Figure 4 are shown for further comparison. Uncertainties estimated from the ranges that are centered at the peaks and bracket \(68\%\) of the samples in the marginalized posterior distributions are illustrated as the shaded regions in these plots. The difference between these two populations is also suggested by the lack of overlap between their shaded regions.
By visually examining the plots of \(L_{x}-\dot{E}\) for the GC MSPs in Figure 4, the asymmetric distribution of the data points above and below the best-fit relation suggests the fitting is far from desirable. We speculate that M28 A can possibly be an outlier. Since the regression analysis is weighted by the reciprocal of the uncertainties of the data, the fact that the errors of \(\dot{E}\) and \(L_{x}\) of M28 A are much smaller than those of the other GC MSPs can result in a strong bias towards this single data point.
To quantify this issue, we have computed the interquartile range (IQR) of \(\log L_{x}\). For detecting outliers, we adopted the conventional criterion of 1.5 times the IQR, which corresponds to the ranges \(\log L_{x}=29.96-31.72\) (0.3-8 keV) and \(\log L_{x}=28.92-31.61\) (2-10 keV) in Group C. With this procedure, M28 A is the only source that lies outside 1.5\(\times\)IQR of Group C.
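The outlier screening can be reproduced with a few lines of numpy; the sketch below applies the usual Tukey fences (values outside \([Q_{1}-1.5\,\mathrm{IQR},\,Q_{3}+1.5\,\mathrm{IQR}]\)), which is our reading of the 1.5\(\times\)IQR criterion, on toy values:

```python
import numpy as np

def iqr_outliers(values):
    """Flag values outside the Tukey fences Q1 - 1.5*IQR .. Q3 + 1.5*IQR."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (values < lo) | (values > hi), (lo, hi)

log_lx = np.array([30.1, 30.5, 30.9, 31.2, 31.4, 30.3, 33.4])  # toy values
mask, fences = iqr_outliers(log_lx)
print("outliers:", log_lx[mask], "fences:", fences)
```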
Distinctions between M28 A and the majority of GC MSPs can also be discussed in terms of physical reasons. First, its characteristic age (\(\tau\sim 3\times 10^{7}\) yrs) is much smaller than the other MSPs. Apart from its high value of \(L_{x}\), the X-ray pulses of this isolated MSP have a very narrow profile which suggests its non-thermal nature with the origin from the magnetospheric accelerator (Du et al., 2015). Also, its X-ray emission can be detected at energies up to \(\sim 50\) keV. All these make the X-ray properties of M28 A very different from the thermal X-rays originated from most of the isolated MSPs in GCs. Furthermore, it is one of the two MSPs which have glitches detected so far (Cognard and Backer, 2004; McKee et al., 2016). This might suggest M28 A is more similar to young energetic pulsars than a typical MSP.
To be consistent, although the best-fit \(L_{x}-\dot{E}\) relation for the GF MSPs in Figure 4 is reasonable, we have also searched for the possible outliers in the GF sample with the same procedure we applied in the GC sample. In 0.3-8 keV, PSR J0218+4232 and PSR B1937+21 are found lying outside 1.5\(\times\)IQR of \(\log L_{x}\) (\(28.79-32.72\)) of the GF sample. And in 2-10 keV, only PSR J0218+4232 lies outside the corresponding range (\(\log L_{x}=27.32-33.19\)). This prompts us to re-do the fitting by considering both of them as the outliers.
With the outliers removed from both GC and GF samples, we have re-run the regression analysis for inferring \(L_{x}-\dot{E}\) relation. The results are shown in Figure 5. For the GF MSPs, excluding PSR J0218+4232 and PSR B1937+21 only results in a slightly flatter \(L_{x}-\dot{E}\) relation in comparison with the case including all the samples (i.e. Figure 4). Their difference can be reconciled with the tolerance of their uncertainties.
On the other hand, in the case of GC MSPs, the posterior distribution of \(\alpha\) inferred with M28 A removed (i.e. Group B) is peaked around \(\sim 0.4\) and \(\sim 0.5\) in 0.3-8 keV and 2-10 keV respectively. The relation appears to be much flatter than that shown in Figure 4. And we found that the quality of the fitting is much improved as similar number of data points are above and below the best-fit line. With the outliers excluded, the comparison between the posterior distributions of \(\alpha\) and \(\beta\) suggests the difference in the \(L_{x}-\dot{E}\) relation between GC MSPs and GF MSPs becomes more significant.
A recent study of the MSPs in Terzan 5 has suggested a positive correlation between \(L_{x}\) and the X-ray hardness (cf. Figure 3 in Bogdanov et al., 2021). With the effective photon index given in Table 2 as a measure of X-ray hardness of GC MSPs (i.e. smaller \(\Gamma\) implies harder X-ray emission), we are able to examine if this relation can be found in the full sample of X-ray selected GC MSPs with the photoelectric absorption corrected. The Spearman rank test suggests a strong correlation between \(L_{x}\) and \(\Gamma\), with \(p-\)values of \(5.6\times 10^{-16}\) and \(5.3\times 10^{-5}\) in 2-10 keV and 0.3-8 keV respectively. For the non-detections in Table 3, the upper limits of \(L_{x}\) are calculated by assuming a PL model with fixed \(\Gamma\), and hence they are not very informative for examining the relation between \(L_{x}\) and \(\Gamma\). Therefore, we ignored the upper limits in the regression analysis of \(\log L_{x}-\Gamma\).
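The correlation test itself is a one-liner with scipy.stats; the values below are placeholders rather than the actual Table 2 columns:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder values standing in for the log L_x and Gamma columns of Table 2.
log_lx = np.array([30.2, 30.8, 31.5, 29.9, 32.0, 31.1, 30.5])
gamma  = np.array([2.6, 2.3, 1.6, 2.9, 1.2, 1.9, 2.4])

rho, p = spearmanr(log_lx, gamma)
print(f"Spearman rho = {rho:.2f}, p-value = {p:.3g}")
```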
Using the procedures as described above, we obtained the marginalized posterior distributions for the parameters \(a\) and \(b\) in the assumed linear relation of \(\log L_{x}=a\Gamma+b\). We have applied the same analysis on the GF sample as given by Lee et al. (2018). The comparison of this relation between these two populations are shown in Figure 6. No significant difference in terms of X-ray luminosity and hardness is found between the MSPs in GCs and GF.
From the literature, we have also obtained the information of whether the X-rays from the GC MSPs are dominated by thermal or non-thermal emission which are summarized in Table 2. In Figure 6, we differentiate the non-thermal dominant and thermal dominant cases by different symbols. We found that the non-thermal dominant X-ray GC MSPs are characterized with an effective photon index of \(\Gamma<2\) in our analysis. On the other hand, the thermal X-ray emitters are generally characterized with \(\Gamma>2\). The fact that the non-thermal dominant X-ray GC MSPs are generally more luminous can be due to the presence of additional harder X-ray components from the intrabinary shock in these systems (Lee et al., 2018).
Many studies have shown that the final state of an MSP strongly depends on the initial mass of its companion and the orbital separation (e.g. Tauris, 2011; Liu and Chen, 2011). Furthermore, the evolutionary status of the companion at the onset of the Roche lobe overflow (RLO) is suggested to be a key factor in determining the timescale of the mass transfer phase, which can directly affect the nature of the MSP (Tauris, 2011; Tauris & Savonije, 1999). A longer mass transfer phase (i.e. more mass accreted by the neutron star) will result in a faster rotating MSP (cf. Fig. 5 in Liu & Chen, 2011).
If the orbital separation of the progenitor is wider, the companion needs to be more evolved by the time it fills its Roche lobe and transfers its mass to the neutron star. This will lead to a shorter mass transfer phase and hence a relatively slower rotating MSP. This suggests that investigating the correlation between \(P_{b}\) and \(P\) (i.e. the Corbet diagram) can provide a fossil record of the evolutionary history of compact binaries.
The distributions of \(P\) vs \(P_{b}\) for radio/X-ray selected samples in GF and GCs are shown in Figure 7. We started the analysis with all the MSP binaries. With Spearman rank test, we found that \(P\) and \(P_{b}\) in radio selected sample of GF MSPs are strongly correlated (\(p\)-value\(\sim 7\times 10^{-9}\)). On the other hand, the correlation becomes weaker in their X-ray selected sample and it is marginally significant (\(p\)-value\(\sim 0.06\)), which can be due to the much reduced sample size of the X-ray emitting GF MSPs.
However, in both radio selected and X-ray selected samples of GC MSPs, we do not find any evidence for the correlation between \(P\) and \(P_{b}\) (\(p-\)value \(>0.1\) in both cases). The comparisons of \(P-P_{b}\) correlation test for the MSPs in GF and GCs are summarized in Table 6. The lack of such correlation in GC MSPs is likely a result of dynamical interactions (see the discussion in Section 6).
For the GF MSPs, we further examined the \(P-P_{b}\) correlation for each type of MSP binary. In Figure 7, binaries with companions of different nature are represented by different symbols. By running the Spearman rank correlation test on each type of MSP binary, we found that only those with a helium white dwarf (He WD) as the companion show a significant correlation between \(P\) and \(P_{b}\) (\(p-\)value \(=7.0\times 10^{-4}\)). This can be accounted for by the relatively simple evolutionary track with wide LMXBs as progenitors (Case B RLO; Tauris, 2011). For the other types of MSP binaries (e.g. spider MSPs, MSPs with a CO WD companion), the lack of \(P-P_{b}\) correlation might be a result of the more complex evolutionary channels for their formation (Tauris & Savonije, 1999; Tauris, 2011).
The \(P-P_{b}\) correlation found in radio selected GF MSPs leads us to perform the Bayesian regression analysis by assuming a linear relation of \(\log P=m\log P_{b}+c\). We have run the analysis for the cases with the full sample as well as only with those have a He WD companion. The marginalized posterior probability distributions of \(m\) and \(c\) as well as the best-fit relation are shown in Figure 7.
## 5 Globular Cluster MSPs vs. Galactic Field MSPs
To investigate whether the frequent stellar interactions in GCs have any effects on the physical properties of their MSPs, we compare a set of parameters of GC MSPs with those of their counterparts in the GF through standard statistical tests. Six parameters, including \(\Gamma\), \(L_{x}^{2-10}\), \(P_{b}\), \(P\), \(B_{s}\) and \(\dot{E}\), are chosen in this analysis.
Before any comparison, we first constructed the unbinned empirical cumulative distribution functions (eCDFs) for each parameter (Figure 8-10). To quantify the difference between any two eCDFs in consideration, we employed two different non-parametric statistical tests: two-sample Anderson-Darling (A-D) test and Kolmogorov-Smirnov (K-S) test. While K-S test is widely used in literature, we notice it has several drawbacks.3. For example, it is not sensitive to supremum distance between two eCDFs far away from their centers. On the other hand, two-sample A-D test provides a more sensitive method in identifying the difference between two distributions. \(p-\)values inferred from both A-D and K-S tests in different comparisons are listed in Table 7 and 8. In this study, if the \(p-\)value inferred from either test is \(\lesssim 0.05\), the difference between the eCDFs is considered to be plausible and will be further discussed.
Footnote 3: [https://asaip.psu.edu/articles/beware-the-kolmogorov-smirnov-test/](https://asaip.psu.edu/articles/beware-the-kolmogorov-smirnov-test/)
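Both two-sample tests are available, for example, in scipy.stats; the sketch below compares two randomly generated samples standing in for a GC and a GF parameter distribution (purely illustrative data):

```python
import numpy as np
from scipy.stats import anderson_ksamp, ks_2samp

rng = np.random.default_rng(1)
# Randomly generated samples standing in for, e.g., log P of GC and GF MSPs.
gc = rng.normal(0.72, 0.25, size=100)
gf = rng.normal(0.60, 0.25, size=300)

ad = anderson_ksamp([gc, gf])        # two-sample Anderson-Darling test
ks = ks_2samp(gc, gf)                # two-sample Kolmogorov-Smirnov test
print("A-D significance level:", ad.significance_level)
print("K-S p-value:", ks.pvalue)
```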
We first compared the X-ray luminosities and hardness between the MSPs in GF and GCs. The eCDFs of \(L_{x}^{2-10}\) and \(\Gamma\) for all known X-ray emitting MSPs in these two populations are shown in Figure 8. While we do not find any significant difference in \(\Gamma\) between the X-ray selected samples in GF and GCs (\(p\)-value \(>0.1\) in both A-D and K-S test), A-D test suggests a marginal difference in \(L_{x}^{2-10}\) (\(p\)-value \(\sim 0.02\)). In examining their eCDFs, the possible difference can be in the low luminosity range of \(L_{x}^{2-10}\lesssim 5\times 10^{29}\) erg/s. This can be due to the fact that there are more nearby systems in the GF which allow fainter MSPs to be detected.
We have also divided the full sample of X-ray selected MSPs into four classes (Iso, RB, BW and Oth) and compared the corresponding classes in GCs and GF (e.g. RBs in GCs vs RBs in GF). The results are shown in Figure 9 and Table 7. We found the \(p-\)values inferred
from both tests are \(>0.05\) for all cases and hence no significant difference in the X-ray properties between the corresponding classes in GCs and GF can be identified with the current sample.
For comparing \(P_{b}\), \(P\), \(B_{s}\) and \(\dot{E}\), we have examined both radio selected and X-ray selected samples in order to investigate the possible selection effect imposed by X-ray observations. For the comparison of \(B_{s}\) and \(\dot{E}\), we started with the outliers in both populations (i.e. M28 A, PSR J0218+4232 and PSR B1937+21) excluded.
We found that all these four parameters are significantly different between the radio selected MSPs in GCs and GF (see the first row of Figure 10). The most obvious differences are found between their \(P_{b}\) and \(\dot{E}\), in which A-D test gives \(p-\)values of \(\sim 10^{-5}\) and \(10^{-4}\) respectively (see Table 8).
However, when we compare the X-ray selected samples of these two populations, the differences in their \(P_{b}\) and \(\dot{E}\) distributions disappear (see the second row of Figure 10). For example, the significance for the differences between their \(P_{b}\) and \(\dot{E}\) drop drastically (\(p-\)value \(\sim 0.7\) from A-D test, see Table 8). This clearly indicates the presence of selection effect imposed by X-ray detections.
To further examine this effect, we tabulate the medians of these four parameters for both the X-ray selected and radio selected samples in the GF and GCs (see Table 9). We have also compared the X-ray/radio selected eCDFs in the GF and GCs, which are shown in the third and the fourth rows of Figure 10 respectively.
In view of the large uncertainties of \(B_{s}\) and \(\dot{E}\) for the GC MSPs, we examined the possible impact of the measurement errors on the aforementioned inference by Monte Carlo sampling. We assumed a Gaussian distribution centered on each of the observed values of \(\log B_{s}\) and \(\log\dot{E}\) in Table 4 with the corresponding errors as the standard deviations. A set of simulated samples can then be randomly drawn from each of these distributions. In total, 10000 sets of simulated samples were generated in our experiment. For each set of samples, we ran the A-D test to compare its eCDFs with those of the GF MSPs and computed the corresponding \(p-\)values.
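A minimal sketch of this Monte Carlo procedure is given below, with a reduced number of simulated sets and placeholder values, so the numbers are purely illustrative:

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(2)

def mc_pvalues(gc_val, gc_err, gf_sample, n_sets=200):
    """Resample each GC measurement from a Gaussian centred on its observed
    log-value with its quoted error, then re-run the two-sample A-D test
    against the GF sample for every simulated set."""
    pvals = np.empty(n_sets)
    for k in range(n_sets):
        sim = rng.normal(gc_val, gc_err)
        pvals[k] = anderson_ksamp([sim, gf_sample]).significance_level
    return pvals

# Placeholder inputs (log10 Edot values and errors are purely illustrative).
gc_val = np.array([33.6, 33.8, 34.0, 34.1, 34.3, 34.5, 34.9])
gc_err = np.full_like(gc_val, 0.3)
gf = rng.normal(34.6, 0.5, size=80)

p = mc_pvalues(gc_val, gc_err, gf)
print("fraction of simulated sets with p < 0.05:", np.mean(p < 0.05))
```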
In Figure 11, we show the empirical distributions of \(p-\)values obtained from the aforementioned Monte Carlo method. The green dashed lines illustrate the \(p-\)values computed with the observed data (cf. Table 8). Taking \(p<0.05\) as the benchmark for two distributions being different, we estimated the probabilities of obtaining \(p<0.05\) from these empirical distributions. For comparing \(\dot{E}\) between the radio selected MSPs, 100% of our simulated data result in \(p<0.05\). On the contrary, none of the simulated data leads to \(p<0.05\) in comparing \(\dot{E}\) between the X-ray selected MSPs. These results support our assertion that \(\dot{E}\) of the MSPs in GC and GF are different in the radio selected samples but such difference is diminished in the X-ray selected samples. For comparing \(B_{s}\) between the MSPs in GCs and GF, we found that \(\sim 99\%\) and \(\sim 96\%\) of the simulated data give \(p-\)values below the benchmark in comparing the radio selected and X-ray selected samples respectively. These support the conclusion that the distributions of \(B_{s}\) for the MSPs in GCs and GF are different, regardless of X-ray selected or radio selected.
We have repeated the analysis of comparing \(B_{s}\) and \(\dot{E}\) between the MSPs in GCs and GF with the outliers included. The comparisons of eCDFs and the empirical distributions of \(p-\)values obtained from the Monte Carlo method are shown in Figure 12 and Figure 13 respectively. We found that the results are fully consistent with those inferred from the analysis with the outliers removed. In Figure 13, while 100% of our simulated data result in \(p<0.05\) for the comparison of \(\dot{E}\) between the radio selected MSPs in GCs and GF, only 0.1% of the simulated data fall below this benchmark in comparing the same parameter between the X-ray selected MSPs in these two populations. On the other hand, in comparing \(B_{s}\) of the MSPs in GCs and GF, we found that 100% and 98% of the simulated data show \(p<0.05\) for the radio selected and X-ray selected MSPs respectively.
For the GF MSPs, both \(P_{b}\) and \(P\) in their X-ray selected sample are significantly shorter than those in their radio selected sample (Table 9). The A-D test yields \(p-\)values of \(\sim 10^{-3}\) in comparing the corresponding eCDFs, which indicates that such differences are significant (see the third row of Figure 10 and Table 8). On the other hand, we found that the distributions of the surface magnetic field strength \(B_{s}\) of the radio selected and X-ray selected GF MSPs are very similar (Figure 10 and Table 8). Since \(\dot{E}\) scales as \(\dot{E}\propto B_{s}^{2}P^{-4}\), the difference of this parameter between the radio selected and X-ray selected GF MSPs is expected, and the A-D test indeed yields a \(p-\)value of \(\sim 10^{-4}\).
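For reference, \(B_{s}\) and \(\dot{E}\) follow from \(P\) and the intrinsic \(\dot{P}\) through the standard magnetic-dipole estimates, which make the \(\dot{E}\propto B_{s}^{2}P^{-4}\) scaling explicit; the short sketch below uses the conventional prefactors and purely illustrative values of \(P\) and \(\dot{P}\) (an assumption, not entries from our tables):

```python
import numpy as np

I = 1e45  # assumed neutron-star moment of inertia [g cm^2]

def spin_down(P, Pdot):
    """Standard dipole estimates: surface field [G] and spin-down power [erg/s]."""
    B_s = 3.2e19*np.sqrt(P*Pdot)      # Gauss
    Edot = 4*np.pi**2*I*Pdot/P**3     # erg/s; note Edot ~ B_s^2 / P^4
    return B_s, Edot

# illustrative MSP with P = 3 ms and intrinsic Pdot = 1e-20 s/s
B_s, Edot = spin_down(3e-3, 1e-20)
print(f"B_s ~ {B_s:.2e} G, Edot ~ {Edot:.2e} erg/s")
```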
It is clear that the X-ray observations have preferentially detected MSPs in the GF with faster rotation (i.e. smaller \(P\)) and hence higher spin-down power (i.e. higher \(\dot{E}\)) (see Table 9).
For the X-ray emitting GC MSPs, however, we do not find any significant selection effect imposed by X-ray observations on the GC MSP population (see Table 9, Figure 10/Figure 12). For all four parameters considered in this analysis (see the fourth row of Figure 10/Figure 12), neither the A-D test nor the K-S test can identify any significant difference between the radio selected and X-ray selected samples of GC MSPs (see Table 8). For example, different from the case of GF MSPs, the X-ray selected MSPs in GCs do not appear to rotate significantly faster than their radio selected sample (\(p\sim 0.13\) by the A-D test).
There is another interesting feature found in comparing these two populations. Regardless of whether the sample is X-ray or radio selected, GC MSPs generally rotate slower than those in the GF. For the X-ray emitting MSPs, since their \(\dot{E}\) are comparable in GCs and GF, the slower rotation of GC MSPs suggests that their surface magnetic field \(B_{s}\) should be stronger. Such an expected difference can be seen by comparing their eCDFs and medians. With the outliers excluded in both GCs (i.e. M28 A) and GF (i.e. PSR J0218+4232 and PSR B1937+21), a difference between the X-ray selected MSPs in GCs and GF is suggested by both A-D and K-S tests (\(p\sim 0.02\)). A more significant difference of \(B_{s}\) between the radio selected MSPs in GCs and GF is indicated by both tests (\(p\lesssim 5\times 10^{-3}\)). The conclusions are unaltered when the outliers are included in the comparison (Table 8 & Table 9).
## 6 Summary and Discussion
We have performed a systematic analysis of the rotational, orbital and X-ray properties of MSPs in GCs and compared with those in the GF. The major results are summarized as follows:
1. GC MSPs generally rotate slower than those in the GF.
2. While X-ray observations tend to pick the MSPs with faster rotation in the GF, we do not find such selection effect in the GC MSP population.
3. Surface magnetic fields (\(B_{s}\)) of GC MSPs are apparently stronger than those in the GF.
4. For the MSP binaries, a strong correlation is found between the rotation period and the orbital period in the GF population. However, such a correlation is absent in the GC MSP binaries.
5. Although the distributions of X-ray luminosity (\(L_{x}\)) and hardness (\(\Gamma\)) for the MSPs in GCs are comparable with those in the GF, the GC MSPs apparently follow a different \(L_{x}-\dot{E}\) relation.
All these findings suggest that dynamical interactions in GCs can alter the evolution of MSPs/their progenitors and leave an imprint on their X-ray emission properties. Here we discuss the implications of our results.
One of the most distinguishing properties between the radio selected MSPs in GCs and GF is their distributions of \(P_{b}\) (Figure 10). It is clear that there is a lack of wide-orbit MSP binaries in GCs. This can be accounted for by the frequent stellar encounters in GCs. Numerical studies have shown that close encounters between stars and binaries can affect the orbital parameters and dramatically alter the evolution of the binaries (Benacquista & Downing, 2013). If the initial binding energy of a primordial binary is larger than the average kinetic energy of the neighboring stars in the cluster, an encounter can lead to orbital shrinkage with the orbital binding energy transferred to the neighboring stars (Heggie, 1975).
It is instructive to compare the average orbital binding energy \(\langle E_{b}\rangle\) of the radio selected GC MSPs with the average kinetic energy of the neighboring stars \(\langle E_{*}\rangle\) in their hosting clusters. For each MSP binary, we computed \(E_{b}\) by \(GM_{\rm psr}M_{c}/2a\) where \(M_{\rm psr}\), \(M_{c}\) and \(a\) are the mass of the pulsar, the mass of the companion and the semi-major axis of the orbit respectively. We fixed \(M_{\rm psr}\) at \(1.35M_{\odot}\) for all systems. Both \(a\) and \(M_{c}\) are taken from the ATNF pulsar catalog (Manchester et al., 2005) by assuming an orbital inclination of \(i=60^{\circ}\). With these estimates, the average orbital binding energy of the radio selected GC MSP binaries is found to be \(\langle E_{b}\rangle\sim 2.3\times 10^{45}\) ergs. On the other hand, the corresponding value of the radio selected GF MSP binaries is \(\langle E_{b}\rangle\sim 8.5\times 10^{44}\) ergs, which is about three times lower.
For \(\langle E_{*}\rangle\), we calculated it by averaging the characteristic value for each MSP-hosting GC. We computed \(E_{*}\) by \(\frac{1}{2}M_{*}\sigma_{*}^{2}\), where \(M_{*}\) and \(\sigma_{*}\) are the typical mass and the velocity dispersion of the neighboring stars in a GC. Values of \(\sigma_{*}\) were adopted from Harris (2010). For estimating \(M_{*}\), we took the mass of a main-sequence star corresponding to the spectral type of the integrated cluster light for each GC given in Harris (2010). It is interesting to note that \(\langle E_{*}\rangle\sim 1.6\times 10^{45}\) ergs is rather close to the estimate of \(\langle E_{b}\rangle\) for the GC MSP binaries. The similarity of these two quantities might indicate past interactions between MSP binaries (and/or their progenitors) and the neighboring stars in the cluster, which can lead to equipartition among the orbital energy, the recoil kinetic energy of the binaries and the kinetic energy of the stars in GCs.
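A minimal sketch of these two estimates is given below; the companion mass, semi-major axis, stellar mass and velocity dispersion are illustrative placeholders (not catalog values), chosen only to land near the quoted averages:

```python
import numpy as np

G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13  # cgs units

def E_b(M_c, a, M_psr=1.35*Msun):
    """Orbital binding energy E_b = G M_psr M_c / (2a) of an MSP binary [erg]."""
    return G*M_psr*M_c/(2.0*a)

def E_star(M_star, sigma):
    """Kinetic energy E_* = (1/2) M_* sigma_*^2 of a typical neighboring star [erg]."""
    return 0.5*M_star*sigma**2

# illustrative values: a 0.16 Msun companion at a ~ 1 AU,
# and a 0.8 Msun star with a velocity dispersion of 14 km/s
print(f"E_b ~ {E_b(0.16*Msun, 1.0*AU):.1e} erg")   # ~2e45 erg, comparable to <E_b> quoted above
print(f"E_* ~ {E_star(0.8*Msun, 14e5):.1e} erg")   # ~1.6e45 erg, comparable to <E_*>
```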
On the other hand, any primordial binaries with wide orbits whose initial binding energy is smaller than the average kinetic energy of the neighboring stars in the cluster are prone to be destroyed through single-binary interactions (Heggie, 1975; Benacquista & Downing, 2013). All of this can make the recycling process more complicated than for their counterparts in the GF. This might explain the absence of the \(P_{b}-P\) correlation for the MSP binaries in GCs.
Disturbance of the recycling process for the MSPs in GCs might also account for their slower rotations.
Since mass transfer can be disrupted by the agitation of frequent stellar encounters (Verbunt & Freire, 2014), this can leave the MSP with an intermediate rotation period \(P\), which is consistent with our results.
We notice that Konar (2010) reported an opposite conclusion (i.e. GC MSPs rotate _faster_ than their counterparts in the GF). The difference between their results and ours can be accounted for by the difference in the adopted samples. While we selected the MSPs with the criterion \(P<20\) ms, Konar (2010) selected their sample with \(P<30\) ms. Most importantly, the sample size in our study is \(\sim 3\) times larger than that adopted by Konar (2010), and we have more fast-rotating MSPs in our sample. The average \(P\) of MSPs in the GF/GCs are 7.75/5.70 ms in their sample (cf. Tab. 1 in Konar, 2010). The corresponding values in our radio selected sample are found to be 3.73/4.34 ms (Table 9). On the other hand, with a much larger sample, our results confirm the scenario suggested by Verbunt & Freire (2014).
We have also examined whether such a difference can be a result of an observational effect. Since GC MSPs are generally located further away than their GF counterparts, detecting fast-rotating pulsars in GCs by radio observations can be more difficult because of the possible broadening of their pulses by scattering. This could result in an MSP population in GCs with slower rotation than that in the GF. This prompts us to compare the pulse widths of these two populations by taking the estimates of the pulse widths at 50% of their peaks in the ATNF catalog (Manchester et al., 2005). Both A-D and K-S tests yield a \(p\)-value \(>0.1\) and hence there is no indication that the radio pulses of GC MSPs are broader.
Another piece of evidence against the aforementioned hypothesis is the detection of the fastest known pulsar, Terzan 5ad (\(P\sim\)1.4 ms), in Terzan 5. Despite the fact that the pulsars in Terzan 5 have the highest dispersion measures among all MSPs, the discovery of Terzan 5ad shows that the improvements in instrumentation and search techniques in the radio surveys have greatly overcome the bias against detecting fast pulsars in GCs. For example, an effective temporal resolution of \(\sim 0.3\) ms was achieved in the pulsar search towards Terzan 5 (Ransom et al., 2005; Hessels et al., 2006). Together with the fact that there is no known bias against the detection of slow pulsars, we do not find any convincing argument that the difference between the rotational period distributions of these two populations is a result of observational bias. Hence, we conclude that the result that GC MSPs generally rotate slower than GF MSPs is intrinsic.
We also notice that the fraction of isolated MSPs in GCs is larger than that in the GF, which is particularly obvious in the X-ray selected sample (Figure 1). Verbunt & Freire (2014) have suggested that the large fraction of isolated MSPs in GCs can be a result of dynamical disruption. Although this is physically plausible, we would like to point out that this might also be an observational effect. In the GF, the X-ray counterparts of MSPs are detected by pointed observations towards individually chosen targets. This can lead to a selection bias towards those bright sources with interesting behavior such as spider pulsars. This can possibly account for their large fraction in the GF population and hence suppress the proportion of isolated MSPs. On the other hand, there is no such bias in searching X-ray counterparts of MSPs in GCs since all MSPs in a given GC are observed at once in the X-ray image. In view of this, we cannot exclude the possibility that the larger fraction of isolated MSPs in GCs is a result of observational bias. For resolving this issue, a systematic all-sky X-ray imaging survey on the GF will be needed (e.g. with eROSITA). With a less biased sample, the proportions of different classes of X-ray emitting MSPs can be re-examined.
For the radio selected samples, the larger fraction of isolated MSPs in GCs can also be a result of observational bias. Detecting MSPs in binaries is more challenging than detecting isolated MSPs because searches of the orbital parameters are also required. For GCs, the situation is exacerbated by the intracluster acceleration. Any deviation of the timing solutions from the actual values might lead to smearing of the radio pulses, which can hamper the detection. As a result, this can possibly lead to a larger proportion of isolated MSPs in GCs. Therefore, the conclusion of whether the dynamical disruption in GCs can lead to more isolated MSPs is not without ambiguity.
During recycling, accretion onto the neutron star can induce the decay of the surface magnetic field through processes such as the Hall effect and Ohmic dissipation (Cumming et al., 2004). Therefore, perturbation of the spin-up process by dynamical interactions can halt the magnetic field decay in GC MSPs. This is consistent with our findings that \(B_{s}\) of GC MSPs are larger than those in the GF (See Table 8 & 9). This inference has also been reported by Verbunt & Freire (2014) and Konar (2010). However, the ways these studies collected their samples are different from our approach. It is unclear whether the estimates of \(\dot{P}\) adopted in these previous works have the acceleration terms corrected. It appears that these studies have collected those that have larger \(\dot{P}_{\rm obs}\) so as to have a smaller fractional contamination attributable to the cluster acceleration. However, this unavoidably introduced a bias that favors the conclusion that \(B_{s}\) of GC MSPs is higher.
In our study, the majority of our sample of \(B_{s}\) is estimated from \(\dot{P}\) with the acceleration terms corrected by long-term pulsar timing (see Table 4 and Section 4). Such a correction is unlikely to suffer from the aforementioned bias. However, as the measurement of the time derivative of the orbital period \(\dot{P}_{b}\) should be easier for large \(P_{b}\), most of the systems with intrinsic \(\dot{P}\) estimated by this method are non-spider MSP binaries. This can introduce another bias in this comparison as we do not know the \(B_{s}\) for the spider and isolated MSPs. On the other hand, Lee et al. (2018) found that all different types of MSPs in the GF have similar \(B_{s}\) (see Figure 6 in Lee et al., 2018). If this were also the case in GCs, our inference would remain valid.
To understand the cause of the selection effect imposed by X-ray observations, we need to discuss the spin-down power \(\dot{E}\) of MSPs. In the GF, X-ray observations apparently pick the more powerful MSPs (i.e. larger \(\dot{E}\)), which are more luminous in X-rays as \(L_{x}\propto\dot{E}^{1.31}\) (Lee et al., 2018). Since the distributions of \(B_{s}\) of both the radio and X-ray selected samples in the GF are similar and \(\dot{E}\) is proportional to \(B_{s}^{2}/P^{4}\), this explains why the X-ray emitting MSPs in the GF generally rotate faster. Furthermore, because of the correlation between \(P\) and \(P_{b}\), this also naturally explains why the X-ray emitting MSP binaries have tighter orbits in the GF.
However, in contrast to the situation in the GF, we do not find any evidence for the X-ray selection effect on the GC MSPs (see Table 8 & 9). We speculate that this might be accounted for by the fact that the GC MSPs follow a different \(L_{x}-\dot{E}\) relation. In our adopted sample, we found \(L_{x}\propto\dot{E}^{0.4-0.8}\) (Table 10). It appears that the \(L_{x}\) of the GC MSPs has a less sensitive dependence on \(\dot{E}\) than that of those in the GF. This might explain why the selection effect on the GC population is less prominent than that in the GF.
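As a simple numerical illustration of why a shallower \(L_{x}-\dot{E}\) relation weakens the selection effect, the sketch below compares how a fixed spread in \(\dot{E}\) maps onto \(L_{x}\) for the two slopes (the GC slope of 0.6 is just the midpoint of the quoted 0.4-0.8 range):

```python
def lx_ratio(edot_ratio, slope):
    """Change in L_x when Edot changes by edot_ratio, for L_x ∝ Edot**slope."""
    return edot_ratio**slope

# a factor-of-10 spread in Edot translates into very different L_x spreads
print(lx_ratio(10, 1.31))  # ~20x  (GF relation of Lee et al. 2018)
print(lx_ratio(10, 0.6))   # ~4x   (illustrative mid value of the GC range)
```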
For the GC MSPs, it is interesting to notice that the inferred dependence of their \(L_{x}\) on \(\dot{E}\) is consistent with that of the Goldreich-Julian current \(J_{GJ}\propto\sqrt{\dot{E}}\) (Goldreich & Julian, 1969). This suggests that the X-rays likely result from polar cap heating by the back-flow current, which should scale with \(J_{GJ}\). Since there is an indication that \(B_{s}\) is stronger in GC MSPs, this might facilitate magnetic pair creation close to the stellar surface and result in a higher efficiency of polar cap heating than in their GF counterparts (Cheng & Taam, 2003). Also, Takata et al. (2010) suggest that if these magnetic pairs stream back to the outer magnetosphere, they will restrict the size of the outer gap accelerator, and hence the production of non-thermal emission will be limited. This is consistent with the fact that the X-rays from almost all the GC MSPs adopted in examining the \(L_{x}-\dot{E}\) relation are thermal dominant (Table 2).
Lastly, we would like to highlight that the \(L_{x}-\dot{E}\) relation for the GC MSPs reported in this work can be biased by the way we collected the sample. Since most of the sample in Table 4 have the acceleration terms in their observed \(\dot{P}\) corrected by \(\dot{P}_{b}\), it is biased towards those having large \(P_{b}\), which are mostly non-spider MSP binaries. While we found that \(\dot{E}\) of the X-ray selected MSPs are comparable in GCs and GF, we are not sure how this conclusion would be altered if the isolated and spider MSPs in GCs were included. In the GF, Lee et al. (2018) have shown that the \(\dot{E}\) of isolated and spider MSPs are comparable with each other and are much larger than those of the non-spider MSPs (see Figure 7 and Table 4 in Lee et al., 2018). If the situation is similar in the GC population, this might suggest that GC MSPs are more powerful than those in the GF. While measuring the line-of-sight acceleration of isolated and spider MSPs by long-term radio timing can be challenging, a systematic analysis of the whole GC MSP population with the mean-field acceleration and numerical simulations is encouraged for further investigation.
Recently, a new sub-class of BWs in the GF, referred to as _Tidarren_ systems, has been identified (Romani et al., 2016). While classic BWs have companion masses in a range of \(\sim 0.02-0.05M_{\odot}\), the companions of Tidarren systems have masses \(<0.015M_{\odot}\) and have their hydrogen completely stripped off by the powerful pulsar wind. It has been suggested that they are descendants of ultracompact X-ray binaries and follow a different evolutionary path. In a recent kinematic analysis of two Tidarren systems in the GF, Long et al. (2022) have shown that such systems can originate from GCs. With more Tidarren systems discovered in the future, one can further examine the possible intricate relation between the MSPs in GCs and the GF.
During the reviewing process, we became aware of a publication by Zhao & Heinke (2022) on a subject similar to our work. The authors have also presented a population analysis of the X-ray properties of GC MSPs but with a focus different from ours. While we have performed a systematic comparison between the MSPs in GCs and GF to explore the influence of dynamical interactions and the selection effects imposed by X-ray observations, Zhao & Heinke (2022) have focused solely on the X-ray properties of GC MSPs (e.g. examining their X-ray luminosity functions, placing upper and lower bounds on the number of MSPs in various GCs). On the other hand, their independent work allows us to cross-check \(L_{x}\), and we found that the estimates in both works are consistent.
## Acknowledgments
The authors would like to thank Jongsuk Hong for the valuable comments and suggestions. J.L. is supported by the National Research Foundation of Korea grant 2016R1A5A1013277, 2022R1F1A1073952 and National Research Foundation of Korea grant funded by the Korean Government (NRF-2019H1A2A1077350-Global Ph.D. Fellowship Program); C.Y.H. is supported by the research fund of Chungnam National University and by the National Research Foundation of Korea grant 2022R1F1A1073952. J.T. is supported by the National Key Research and Development Program of China (grant No. 2020YFC2201400) and the National Natural Science Foundation of China (NSFC, grant No. 12173014). A.K.H.K. is supported by the National Science and Technology Council of Taiwan through grant 111-2112-M-007-020. P.H.T. is supported by the NSFC grant No. 12273122 and the China Manned Space Project (No. CMS-CSST-2021-B09). K.L.L. is supported by the National Science and Technology Council of the Republic of China (Taiwan) through grant 111-2636-M-006-024, and he is also a Yushan Young Fellow supported by the Ministry of Education of the Republic of China (Taiwan).
|
2308.05209 | Black Holes with Abelian and Non-Abelian Charges and Their Impact on
Matter Accretion Flows | We study the black hole spacetime structure of a model consisting of the
standard Maxwell theory and a $p$-power-Yang-Mills term. This non-linear
contribution introduces a non-Abelian charge into the global solution,
resulting in a modified structure of the standard Reissner-Nordstr\"{o}m black
hole. Specifically, we focus on the model with $p=1/2$, which gives rise to a
new type of modified Reissner-Nordstr\"{o}m black hole. For this class of black
holes, we compute the event horizon, the innermost stable circular orbit, and
the conditions to preserve the weak cosmic censorship conjecture. The latter
condition sets a well-established relation between the electric and the
Yang-Mills charges. As a first astrophysical implication, the accretion
properties of spherical steady flows are investigated in detail. Extensive
numerical examples of how the Yang-Mills charge affects the accretion process
of an isothermal fluid in comparison to the standard Reissner-Nordstr\"{o}m and
Schwarzschild black holes are displayed. Finally, analytical solutions in the
fully relativistic regime, along with numerical computations, of the mass
accretion rate for a polytropic fluid in terms of the electric and Yang-Mills
charges are obtained. As a main result, the mass accretion rate efficiency is
considerably improved, with respect to the standard Reissner-Nordstr\"{o}m and
Schwarzschild solutions, for negative values of the Yang-Mills charge. | Gabriel Gómez, Ángel Rincón, Norman Cruz | 2023-08-09T20:12:05Z | http://arxiv.org/abs/2308.05209v2 | # Black Holes with Abelian and Non-Abelian Charges and Their Impact on Matter Accretion Flows
###### Abstract
In this paper, we study the black hole spacetime structure of a model consisting of the standard Maxwell theory and a \(p\)-power-Yang-Mills term. This non-linear contribution introduces a non-Abelian charge into the global solution, resulting in a modified structure of the standard Reissner-Nordstrom black hole. Specifically, we focus on the model with \(p=1/2\), which gives rise to a new type of modified Reissner-Nordstrom black hole. For this class of black holes, we compute the event horizon, the innermost stable circular orbit, and the conditions to preserve the weak cosmic censorship conjecture. The latter condition sets a well-established relation between the electric and the Yang-Mills charges. As a first astrophysical implication, the accretion properties of spherical steady flows around this new modified Reissner-Nordstrom black hole are investigated in detail. Concretely, we compute the critical radius that establishes the condition for having stable transonic flow in terms of the local sound speed and the involved charges. Extensive numerical examples of how the Yang-Mills charge affects the accretion process of an isothermal fluid in comparison to the standard Reissner-Nordstrom and Schwarzschild black holes are displayed. Finally, analytical solutions in the fully relativistic regime, along with numerical computations, of the mass accretion rate for a polytropic fluid in terms of the electric and Yang-Mills charges are obtained. As a main result, the mass accretion rate efficiency is considerably improved, with respect to the standard Reissner-Nordstrom and Schwarzschild solutions, for negative values of the Yang-Mills charge.
## I Introduction
In classical and alternative theories of gravity (i.e. in General Relativity and beyond), research in the context of Black Hole (BH) physics is quite relevant [1; 2]. Given that BHs are simple solutions of Einstein's field equations, and that they incorporate several classical and quantum properties [3], they represent an ideal arena to get insights into profound questions that are still open, such as i) how we can consistently combine the classical and the quantum regimes, or ii) how to deal with certain singularities present in General Relativity (GR). Black holes are parameterized by at most three fundamental quantities: i) the mass \(M\), ii) the angular momentum \(J\) and, finally, iii) the charge \(Q\), a statement supported by the "no-hair theorem" [4].
BHs are the perfect example in which classical and quantum effects coexist in a non-trivial way and rule in different regimes. At this point, it is essential to mention Hawking radiation [5]. Ignoring the details, Hawking demonstrated that a black hole with surface gravity \(\kappa\) emits thermal radiation with a temperature given by \(T_{H}=\kappa/(2\pi)\). In the simplest case, i.e., a Schwarzschild black hole [6] of a given mass \(M_{0}\), the temperature grows as the black hole emits energy (a consequence of its negative specific heat). Thus, albeit Hawking radiation has not been detected yet [5; 7], it occupies a special place among the different effects present in a black hole (and among all the potential signatures related to BH observations [8]).
Numerous black hole solutions have been obtained within and beyond GR since the pioneering Schwarzschild solution, and we have three further remarkable examples of four-dimensional black holes. In addition to the Schwarzschild solution [6], we have: i) the Reissner-Nordstrom solution [9; 10], ii) the Kerr solution [11] and, finally, iii) the Kerr-Newman solution [12]. These four solutions are considered the "black-hole" solutions of general relativity. Such examples have been significantly studied, computing a huge variety of their physical properties. To be more precise, the celebrated Reissner-Nordstrom solution has been extensively studied in the literature [9; 10; 13; 14], leading, among other things, to interesting similarities with the Kerr solution for rotating BHs: the electric charge plays a role analogous to that of the spin parameter in the event horizon radius and in the dynamics of magnetized particles, such as the magnetar J1745-2900 orbiting around the supermassive BH Sagittarius A\({}^{\star}\) (Sgr A\({}^{\star}\)) [15].
Classical self-gravitating configurations, such as soliton solutions (or boson stars) [16; 17; 18; 19; 20; 21], are of great interest in astrophysics since they can serve as black hole mimickers [22]. In general relativity and in the pure Yang-Mills theory, for instance, soliton solutions with localized energy densities do not exist, but in the coupled Einstein
Yang Mills scenario they can arise [23]. It is known, on the other hand, that the static Schwarzschild and Reissner-Nordstrom solutions are unique [24; 25], in the sense that in these solutions staticity implies spherical symmetry whereby regularity of the event horizon and the asymptotic behavior determine the solutions completely in terms of the mass and charge, as appropriate [26]. As a consequence, stationary electrovacuum BHs are axisymmetric and belong to a more general family of BH solutions called the Kerr-Newman BH [27]. Contrary to this claim, the uniqueness and no-hair theorems can be **violated** by the presence of a non-Abelian gauge field [28], leading to the Einstein Yang-Mills BH solution with non-trivial non-Abelian hair [29; 30; 31]) (see also Ref. [23] for other generalized non-Abelian BH solutions). Let us mention that the (non-Abelian) Yang-Mills theory is a natural generalization of the Abelian Maxwell theory. As mentioned, the idea of considering Yang-Mills theory with gravity was first studied in Ref. [32] where a family of (purely magnetic) particle-like solutions was found. The inclusion of a non-vanishing cosmological constant \(\Lambda\) has been studied in Refs. [33; 34; 35]. Other non-trivial solutions involving the Yang-Mills theory can be found in [36; 37] in higher spacetime dimensions, and in the context of the generalized SU(2) Proca theory in [21; 38].
Subsequently, novel and also non-trivial black hole solutions were found in the Einstein-Yang Mills theory by adding extra terms such as quartic self-interactions [38]. An emblematic example emerges when the idea of non-linear electrodynamics is implemented in Einstein-Yang-Mills black hole solutions. Albeit simple, the most natural generalization appears when a power-law electromagnetic Lagrangian is included (see [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56] and references therein).
The power Yang-Mills solutions are quite promising in the sense that they generalize naturally the well-known Einstein-Yang-Mills solution with the Wu-Yang Ansatz for the gauge field configuration, and allow non-trivial solutions to exist. Although Yang-Mills and power Yang-Mills solutions are relevant, they are less popular than the Maxwell and power Maxwell counterpart (see [57; 58; 59; 60; 61; 62] and references therein).
The non-linearities of the YM invariant have been exploited in different theories, and it has also been coupled either minimally or non-minimally to other theories, resulting in generic BH solutions [57; 58; 59; 61; 38]. Nevertheless, one can think of a minimal extension that includes both the electric and the magnetic YM charge in the simplest setup. This configuration gives rise precisely to what we call the Einstein-Maxwell power Yang-Mills theory. Hence, the incorporation of the power Yang-Mills term into the Maxwell theory naturally modifies the spacetime structure of the emerging BH. We can immediately think of this as a modification of the standard Reissner-Nordstrom solution: a new kind of modified Reissner-Nordstrom BH with new properties.
A consolidated route to test gravity theories that involve (non-trivial) fields beyond the vacuum case is to study the behavior of matter (and of the fields themselves) and of test particles around the emerging BH solutions. A variety of phenomena such as gravitational lensing [63], the movement of the so-called \(S\)-stars around SgrA\({}^{*}\) in our Galactic center [64] and accretion of matter through optically thick accretion disks [65; 8; 66] have all been central targets in recent observational projects, aimed at understanding not only how matter behaves in extreme environments but also the spacetime structure itself, which is crucial to study the associated astrophysical processes. This latter aspect, in turn, provides an appealing way to test gravity theories beyond Einstein's general relativity. A particular program under this perspective is to study the stability conditions under which accretion of matter onto BHs can take place [67], thus ensuring the astrophysical viability of the theory.
Motivated by this, the primary goal of this paper is to investigate steady accretion flows around this new class of black holes. Specifically, we aim to address the main question of how efficient the mass accretion rate is in this new setting compared to the standard Reissner-Nordstrom solution. To accomplish this, our study begins by examining the structure of the event horizon and the conditions required to prevent the occurrence of a naked singularity. These aspects are essential to establish the transonic conditions of steady flows and, therefore, the mass accretion rate. In this respect, the study of accretion onto black holes for non-neutral solutions in four-dimensional space-time has been carried out on several occasions (see e.g. [68; 69; 70; 71; 72; 73; 74; 75] and references therein). Bondi accretion is an interesting subject in astrophysics, serving as a proof of concept for several phenomena [76; 77; 78].
This paper is organized as follows: after this compact introduction, in Section (II), we present the main ingredients and basic equations that describe the BH spacetime structure. Subsequently, we present in Sec. (III) the equations that describe the spherical steady accretion flows in a general theory of gravity. Then, in section (IV), we particularize the equations for the Einstein-Maxwell \(p\)-power Yang-Mills theory for the specific case \(p=1/2\) and compute the accretion rate for isothermal fluids and, more generally, for polytropic fluids in the fully relativistic regime. Finally, in the last section, a general discussion of our main results and some observational perspectives of this work are presented. We will use the mostly positive metric signature, \((-,+,+,+)\), and work in geometrical units, i.e., \(c=1=G\).
## II Spacetime structure of the Einstein-Maxwell-Power Yang-Mills black hole
This section describes the main ingredients of the theory that describes a new non-linear black hole solution. We work in a 4-dimensional theory which includes three ingredients: i) the Einstein-Hilbert term, ii) the Maxwell invariant, and iii) the power Yang-Mills invariant. Accordingly, the total action becomes:
\[S_{0}=\frac{1}{2}\int\sqrt{-g}\ \mathrm{d}^{4}x\Bigg{[}R-\mathcal{F}_{\mathrm{M} }-\mathcal{F}_{\mathrm{YM}}^{p}\Bigg{]}, \tag{1}\]
where we have used the conventional definitions, i.e., \(R\) is the Ricci scalar, \(g\) is the determinant of the metric tensor \(g_{\mu\nu}\), \(p\) is a real parameter that introduces possible non-linearities in the Yang-Mills theory. Moreover, we have defined the Maxwell invariant
\[\mathcal{F}_{\mathrm{M}}=F_{\mu\nu}F^{\mu\nu}, \tag{2}\]
and, subsequently, we have defined the power Yang-Mills term with the help of the following relations
\[\mathcal{F}_{\mathrm{YM}} =\mathbf{Tr}\Big{(}F_{\lambda\sigma}^{(a)}F^{(a)\lambda\sigma} \Big{)}, \tag{3}\] \[\mathbf{Tr}(\cdot) =\sum_{a=1}^{3}(\cdot) \tag{4}\]
As usual, \(F_{\mu\nu}\) is the electromagnetic field strength, and \(F_{\mu\nu}^{(a)}\) is the gauge strength tensor that are defined in terms of the potentials \(A_{\nu}\) and \(A_{\nu}^{(a)}\), respectively, as follows:
\[F_{\mu\nu} \equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\,, \tag{5}\] \[F_{\mu\nu}^{(a)} \equiv\partial_{\mu}A_{\nu}^{(a)}-\partial_{\nu}A_{\mu}^{(a)}+ \frac{1}{2\sigma}C_{(b)(c)}^{(a)}A_{\mu}^{(b)}A_{\nu}^{(c)}\,, \tag{6}\]
where the Greek indices run from 0 to 3 and \(a\) is the internal gauge index running from 1 to 3. Also, notice that \(C_{(b)(c)}^{(a)}\) symbolize the structure constants of the 3-parameter Lie group \(G\), \(\sigma\) is an arbitrary coupling constant, \(A_{\mu}^{(a)}\) are the \(SO(3)\) gauge group Yang-Mills potentials, and finally \(A_{\mu}\) is the conventional Maxwell potential. To be more precise, for the YM field, we use the well-known magnetic Wu-Yang ansatz [37; 79]
\[\mathbf{A}^{(a)}=\frac{q_{\mathrm{YM}}}{r^{2}}(x_{i}dx_{j}-x_{j}dx_{i}), \tag{7}\]
with
\[2\leq j+1\leq i\leq 3, \tag{8}\] \[1\leq a\leq 3, \tag{9}\]
and
\[r^{2}=\sum_{i=1}^{3}x_{i}^{2}. \tag{10}\]
The Maxwell potential 1-form is given by
\[\mathbf{A}=\frac{Q}{r}dt, \tag{11}\]
where \(Q\) is the electric charge and \(q_{\mathrm{YM}}\) is the YM charge. Moreover, the Maxwell field and the non-Abelian gauge field are decoupled from each other but, of course, they are coupled linearly through gravity. Notice that the general solution (for the N-dimensional case) was first studied in [80] in the presence of a cosmological constant. In that work, the main thermodynamic quantities, including a review of the first law of black hole thermodynamics in the extended phase space, were also computed. To keep the discussion self-contained, we present the corresponding modified Einstein's field equations, i.e.,
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=T_{\mu\nu}, \tag{12}\]
where we have defined two contributions to the energy-momentum tensor, i.e.,
\[T_{\mu\nu}\equiv T_{\mu\nu}^{\mathrm{M}}+T_{\mu\nu}^{\mathrm{YM}}, \tag{13}\]
with the above tensors defined as
\[T_{\mu\nu}^{\mathrm{M}} =2F_{\mu}^{\lambda}F_{\nu\lambda}-\frac{1}{2}F_{\lambda\sigma}F^ {\lambda\sigma}g_{\mu\nu}, \tag{14}\] \[T_{\mu\nu}^{\mathrm{YM}} =-\frac{1}{2}g_{\alpha\mu}\Bigg{[}\delta_{\nu}^{\alpha}\mathcal{ F}_{\mathrm{YM}}^{p}-4p\mathbf{Tr}\Big{(}F_{\nu\lambda}^{(a)}F^{(a)\alpha \lambda}\Big{)}\mathcal{F}_{\mathrm{YM}}^{p-1}\Bigg{]}. \tag{15}\]
Variation with respect to the gauge potentials \(\mathbf{A}\) and \(\mathbf{A}^{(a)}\) provides the Maxwell and Yang-Mills equations, respectively
\[\mathrm{d}\Big{(}^{\star}\mathbf{F}\Big{)} =0, \tag{16}\] \[\mathbf{d}\Big{(}^{\star}\mathbf{F}^{(a)}\mathcal{F}_{\mathrm{YM} }^{p-1}\Big{)}+\frac{1}{\sigma}C_{(b)(c)}^{(a)}\mathcal{F}_{\mathrm{YM}}^{p-1 }\mathbf{A}^{(b)}\wedge^{\star}\mathbf{F}^{(c)} =0, \tag{17}\]
where \(\star\) denotes the Hodge dual. For the Wu-Yang ansatz, the trace of the Yang-Mills invariant takes the form:
\[\mathcal{F}_{\mathrm{YM}}=\frac{q_{\mathrm{YM}}^{2}}{r^{4}}, \tag{18}\]
which is positive, thus allowing us to consider all rational numbers for the \(p\)-values. It is evident that for \(p=1\), the formalism reduces to the standard Einstein-Yang-Mills theory. In the same direction, there are remarkable examples in which a Yang-Mills term is included, for instance, [59; 62]. Now, considering a spherically symmetric spacetime in Schwarzschild coordinates, we can write the line element as
\[ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{19}\]
with \(r\) being the radial coordinate. Ignoring the unnecessary details, the differential equation for the lapse function, coming from the Einstein field equation, is given by
\[\frac{\mathrm{d}f(r)}{\mathrm{d}r}+\frac{1}{r}f(r)=\frac{1}{r}-\frac{Q^{2}}{r^ {3}}-\frac{2^{p-1}q_{\mathrm{YM}}^{2p}}{r^{4p-1}}, \tag{20}\]
and, identifying the total derivative on the left-hand side we simply write the well-known form
\[\frac{\mathrm{d}}{\mathrm{d}r}\Big{(}rf(r)\Big{)}=1-\frac{Q^{2}}{r^{2}}-\frac{2 ^{p-1}q_{\mathrm{YM}}^{2p}}{r^{4p-2}}. \tag{21}\]
Thus, the metric function admits the general solution (setting the constant of integration as \(-2M\)):
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+\frac{Q_{\rm YM}}{r^{4p-2}}. \tag{22}\]
Notice that the power Yang-Mills solution1 adds linearly to the standard Reissner-Nordstrom solution, and the Yang-Mills charge \(q_{\rm YM}\) is related to its normalized version as [62]
Footnote 1: Keeping only the power Yang-Mills solution with \(p=1\) leads also to the Reissner-Nordström solution.
\[Q_{\rm YM}\equiv\frac{2^{p-1}}{4p-3}q_{\rm YM}^{2p}, \tag{23}\]
for \(p\neq 3/4\). For \(p=3/4\) a radial logarithmic dependence appears in the solution, which obscures analytical treatments, so we are left with the case \(p\neq 3/4\) for simplicity. We also have to restrict this study to some values of \(p\) that satisfy some of the energy conditions of general relativity (in the pure Yang-Mills case) and provide nearly asymptotically flat solutions for a wide range of Yang-Mills charge values2. One may ask whether such a Yang-Mills charge is related to Noether currents or any symmetry in the theory. The answer is no. The magnetic charge is topological and comes from the Bianchi identity3[81].
Footnote 2: For instance, taking \(Q=0.6M\), and \(p=1/3,1/4\), \(Q_{YM}\sim\mathcal{O}(\pm 10^{-3})\) whereas \(p=1/2\) allows \(Q_{YM}\sim\mathcal{O}(\pm 1)\). The nearly asymptotic flat solution with a residual Yang-Mills charge is achieved either from above or below, depending on the Yang-Mills charge sign since the associated term dominates at large radial coordinate.
Footnote 3: We thank J. F. Rodriguez for clarifying this point.
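As a quick consistency check, the metric function of Eq. (22), together with the normalized charge of Eq. (23), can be verified symbolically against Eq. (21); a minimal sketch (Python with sympy, an assumed tool rather than part of the original derivation) is:

```python
import sympy as sp

r, M, Q, q, p = sp.symbols('r M Q q_YM p', positive=True)

# metric function of Eq. (22) with the normalized charge of Eq. (23), valid for p != 3/4
Q_YM = 2**(p - 1)/(4*p - 3)*q**(2*p)
f = 1 - 2*M/r + Q**2/r**2 + Q_YM/r**(4*p - 2)

# check that it solves d/dr (r f) = 1 - Q^2/r^2 - 2^(p-1) q_YM^(2p)/r^(4p-2), i.e. Eq. (21)
lhs = sp.diff(r*f, r)
rhs = 1 - Q**2/r**2 - 2**(p - 1)*q**(2*p)/r**(4*p - 2)
print(sp.simplify(lhs - rhs))   # -> 0
```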
After exploring some possible \(p\)-exponents, we deal concretely with the simple, but still intriguing, case \(p=1/2\) because: i) it is consistent with the well-known energy conditions of general relativity and the causality condition [62] and ii) it modifies the Reissner-Nordstrom spacetime structure in a non-trivial but still manageable way, unlike other cases explored that tremendously obscure the solution. Moreover, this case provides illuminating analytical solutions for the inner (Cauchy) \(r_{-}\) and external (event horizon) \(r_{+}\) radii:
\[r_{\pm}=\frac{M\pm\sqrt{M^{2}-Q^{2}(1+Q_{\rm YM})}}{1+Q_{\rm YM}}. \tag{24}\]
We henceforth call this solution the modified Reissner-Nordstrom (MRN) solution, with \(Q_{\rm YM}\neq-1\). It is interesting to see that the modification of the standard Reissner-Nordstrom solution is, in fact, effective on account of the denominator; otherwise, a simple charge redefinition could be carried out, as for the case \(p=1\). Notice that for the exponent \(p=1/2\), \(Q_{\rm YM}\) is a dimensionless parameter. The precise form of the event horizon radius is of great importance because, among other things, it sets a well-defined relation between both charges for preventing a naked singularity. It yields4
Footnote 4: Observational inferences of the EHT horizon-scale image of SgrA\({}^{\star}\) set at \(1\sigma\) and \(2\sigma\), respectively, \(Q/M\lesssim 0.8\) and \(Q/M\lesssim 0.95\). So extremal RN BH and the naked singularity regime \(1<Q/M\lesssim\sqrt{9/8}\) are ruled out [82]. Since such constraints are not applicable to our MRN BH, we allow extremal solutions in our description, but not naked singularities, for merely astrophysical purposes.
\[Q_{\rm YM}>-1\;\wedge\;0<\frac{Q}{M}<\sqrt{\frac{1}{1+Q_{\rm YM}}}. \tag{25}\]
Naturally, the standard constraint for the RN BH is contained in the previous expression and fully recovered in the limit \(Q_{\rm YM}\to 0\), leading to \(Q/M<1\) as it should be. For the allowed range of values of both charges, the corresponding horizons \(r_{+}\) are completely determined. Some particular cases within the ranges \(-1<Q_{\rm YM}<1\) and derived constraint \(Q\in(0,Q_{\rm Max})\) from Eq. (25) are shown in table 1 to illustrate the emergence of the new structure of the MRN black hole in terms of both charges as compared with Schwarzschild and RN cases. Another appealing solution embedded here is a kind of modified Schwarzschild BH, or a bit more precisely, a trivially charged Yang-Mills solution, which is attained for \(Q=0\). For this solution clearly \(r_{-}=0\) and the event horizon becomes
\[r_{+}^{\rm YM}=\frac{2M}{1+Q_{\rm YM}}, \tag{26}\]
with the restriction \(Q_{\rm YM}\neq-1\). On the contrary, for \(Q=Q_{\rm Max}\) (upper value in the second column of table 1 for a given \(Q_{\rm YM}\)), \(r_{+}\) and \(r_{-}\) coincide, which leads to a family of extremal BH solutions. This particular case corresponds to the lowest possible value of \(r_{+}\). It is interesting to notice that when \(Q_{\rm YM}\) goes from the lowest possible value to larger ones, \(Q\) must be reduced consistently to preserve the cosmic censorship conjecture, thereby leading to different extremal cases as compared to the standard RN BH solution: \(r_{+}^{\rm RN}/M=1\) with \(Q/M=1\). For instance, for \(Q_{\rm YM}=1\), the corresponding maximum electric charge is now \(Q/M=0.70710\) with a horizon \(r_{+}=r_{+}^{\rm RN}/2\). For very large values \(Q_{\rm YM}\gg 1\), \(r_{+}=r_{-}\to 0\). In the opposite limit, \(Q_{\rm YM}\to-1\), which is the smallest asymptotic value, \(r_{+}\to\infty\). This is, of course, unphysical as a result of the divergence appearing in Eq. (24). Taking the (non-conservative) lower value \(Q_{\rm YM}=-0.9\) results in the extremal case \(r_{+}=10r_{+}^{\rm RN}\). In general, for \(Q_{\rm YM}<0\), the event horizon can be formed far beyond the standard Schwarzschild case, with a maximum value corresponding to the Einstein-power-YM case (\(Q=0\)). In the opposite range \(Q_{\rm YM}>0\), the horizon can lie below the Cauchy horizon of the standard RN case when \(Q\neq 0\) and matches the extreme RN case for \(Q_{\rm YM}=1\) and \(Q=0\). This analysis is summarized in Table 1, where other combinations of charges give place to extremal solutions as well.
Notice that the fact of having two charges leads to a two-fold degeneracy of the event horizon. This is not illustrated in Table 1. Let us then present some concrete examples for completeness of our analysis. The case \(Q_{\rm YM}=-0.25\) with \(Q/M=1\) leads to \(r_{+}/M=2\), as in the Schwarzschild case, but with a smaller Cauchy horizon \(r_{-}/M=0.6666\). Likewise, the case \(Q_{\rm YM}=-0.36\) with \(Q/M=1.2\) also leads to \(r_{+}/M=2\), but this time with a larger Cauchy horizon \(r_{-}/M=1.125\). So there are multiple choices of charge values that provide similar results. In general, for a given \(Q\) value, the event horizon Eq. (24) is a monotonically decreasing function of \(Q_{\rm YM}\) with small variations in the range \(Q_{\rm YM}>0\).
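A short numerical sketch of Eq. (24) (Python, with illustrative charge values) reproduces the degenerate examples above and an extremal entry of Table 1:

```python
import numpy as np

def horizons(Q, QYM, M=1.0):
    """Inner (Cauchy) and outer (event) horizons of the MRN solution, Eq. (24)."""
    if QYM <= -1:
        raise ValueError("Q_YM must be larger than -1")
    disc = M**2 - Q**2*(1.0 + QYM)
    if disc < 0:
        raise ValueError("naked singularity: no horizon")
    r_minus = (M - np.sqrt(disc))/(1.0 + QYM)
    r_plus = (M + np.sqrt(disc))/(1.0 + QYM)
    return r_minus, r_plus

print(horizons(1.0, -0.25))    # (0.666..., 2.0): Schwarzschild-like r_+ with a Cauchy horizon
print(horizons(1.2, -0.36))    # (1.125, 2.0):   same r_+, larger Cauchy horizon
print(horizons(0.70710, 1.0))  # ~(0.5, 0.5):    extremal case of Table 1
```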
Finally, it is important to note that some studies on the BH shadow size have constrained the electric charge, \(Q\), of a RN black hole to satisfy roughly \(Q/M<1\), effectively ruling out the extremal case [83; 84; 85]. In Table (1), we consider large values of \(Q\) since they still prevent the formation of a naked singularity, especially when negative Yang-Mills charge values are taken into account. This serves as an illustrative example of how the presence of the Yang-Mills charge can modify the properties of the standard RN black hole. Therefore, our numerical results do not conflict with the aforementioned constraints derived for the standard RN black hole, as they are not directly applicable to our scenario. However, it is worth mentioning that we have recently conducted a similar study in which we obtained constraints on the electric charge in terms of the Yang-Mills charge, and vice versa, by using the observed shadow size [86].
On the other hand, another useful parameter in astrophysical accretion is the radius of the innermost stable circular orbit (ISCO). Albeit well-known, we will briefly summarize the main ingredients behind the computation of ISCO for spherically symmetric four-dimensional backgrounds. We follow Ref. [87] closely for the main derivations. By using the line element (19) and following Ref. [88], the equations of motion for test particles are obtained by using the geodesic equation
\[\frac{d^{2}x^{\mu}}{ds^{2}}+\Gamma^{\mu}_{\rho\sigma}\frac{dx^{\rho}}{ds} \frac{dx^{\sigma}}{ds}=0, \tag{27}\]
where \(s\) is the corresponding proper time. The Christoffel symbols, \(\Gamma^{\mu}_{\rho\sigma}\), are related with the metric and its derivatives by [89]
\[\Gamma^{\mu}_{\rho\sigma}=\frac{1}{2}g^{\mu\lambda}\left(\frac{\partial g_{ \lambda\rho}}{\partial x^{\sigma}}+\frac{\partial g_{\lambda\sigma}}{ \partial x^{\rho}}-\frac{\partial g_{\rho\sigma}}{\partial x^{\lambda}}\right). \tag{28}\]
To rewrite the equation in a more suitable form, we take advantage of the existence of two conserved quantities (two first integrals of motion), precisely as in the Keplerian problem of classical mechanics. In such a sense, we notice that, for \(\mu=t\) and \(\mu=\phi\), the geodesic equations take the simple form
\[0 = \frac{d}{ds}\left(f(r)\frac{dt}{ds}\right), \tag{29}\] \[0 = \frac{d}{ds}\left(r^{2}\frac{d\phi}{ds}\right). \tag{30}\]
Introducing the corresponding conserved quantities
\[E\equiv f(r)\frac{dt}{ds},\ \ \ \ \ L\equiv r^{2}\frac{d\phi}{ds}, \tag{31}\]
we can, effectively, parameterize the problem easily. Thus, the last two quantities, \(E\) and \(L\), are usually related to the energy and angular momentum, respectively. Assuming motion on the \((x-y)\) plane (namely, studying motion on the equatorial plane: \(\theta=\pi/2\)), the geodesic equation for the \(\theta\) component is also satisfied automatically and, therefore, such an equation does not provide information. The only non-trivial equation is obtained when \(\mu=r\) (see [88] for more details)
\[\left(\frac{dr}{ds}\right)^{2}=\left[E^{2}-f(r)\left(\epsilon+\frac{L^{2}}{r^ {2}}\right)\right], \tag{32}\]
which may also be obtained from [88]
\[g_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}=\epsilon. \tag{33}\]
Notice at this level that we have more than one possibility, depending on the value of the \(\epsilon\)-parameter. Thus, when \(\epsilon=1\) we deal with massive test particles, and when \(\epsilon=0\) with light rays. In what follows, we will consider the case with \(\epsilon=1\) (massive test particle with mass \(m\)). Then the geodesic equation can be written accordingly to
\[\left(\frac{dr}{ds}\right)^{2}=\left[E^{2}-f(r)\left(1+\frac{L^{2}}{r^{2}} \right)\right], \tag{34}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(Q_{YM}\) & \(Q/M\) & \(r_{+}/M\) \\ \hline -0.9 & (0, 3.16228) & (20, 10) \\ -0.8 & (0, 2.23607) & (10, 5) \\ -0.7 & (0, 1.82574) & (6.66667, 3.33333) \\ -0.6 & (0, 1.58114) & (5, 2.5) \\ -0.5 & (0, 1.41421) & (4, 2) \\ -0.4 & (0, 1.29099) & (3.33333, 1.66667) \\ -0.3 & (0, 1.19523) & (2.85714, 1.42857) \\ -0.2 & (0, 1.11803) & (2.5, 1.25) \\ -0.1 & (0, 1.05409) & (2.22222, 1.11111) \\ 0 & (0, 1) & (2, 1) \\
0.1 & (0, 0.95346) & (1.81818, 0.90909) \\
0.2 & (0, 0.91287) & (1.66667, 0.83333) \\
0.3 & (0, 0.87705) & (1.53846, 0.76923) \\
0.4 & (0, 0.84515) & (1.42857, 0.714128) \\
0.5 & (0, 0.81649) & (1.33333, 0.6666) \\
0.6 & (0, 0.79056) & (1.25, 0.625) \\
0.7 & (0, 0.76696) & (1.17647, 0.58823) \\
0.8 & (0, 0.74535) & (1.1111, 0.5555) \\
0.9 & (0, 0.72547) & (1.05263, 0.52631) \\
1 & (0, 0.70710) & (1, 0.5) \\ \hline \end{tabular}
\end{table}
Table 1: Yang-Mills charge values \(-1<Q_{\rm YM}<1\) and derived constraint \(Q\in(0,Q_{\rm Max})\) by demanding the fulfillment of the weak cosmic censorship conjecture Eq. (25). The existence of \(Q_{\rm YM}\) changes the possible values of \(r_{+}\). In particular, for a given \(Q_{\rm YM}\), we take the maximum electric charge \(Q_{\rm Max}\) that leads to the extremal case, which is always equal to one-half of the purely power YM case (\(Q=0\)).
and identifying the effective potential
\[V(r)=f(r)\left(1+\frac{L^{2}}{r^{2}}\right), \tag{35}\]
we can obtain ISCO radius. In concrete, from the condition \(V^{\prime}(r)=0\), \(L^{2}\) can be derived and replaced subsequently in \(V^{\prime\prime}(r)=0\) to ultimately obtain the (real) root,
\[r_{\text{ISCO}}=\frac{4M^{4}-3M^{2}Q^{2}(1+Q_{\text{YM}})+M^{4/3}\mu^{1/3}(2M^{ 4/3}+\mu^{1/3})}{M^{5/3}(1+Q_{\text{YM}})\mu^{1/3}}, \tag{36}\]
with \(\mu=\xi^{1/2}+8M^{4}-9M^{2}Q^{2}(1+Q_{\text{YM}})+2Q^{4}(1+Q_{\text{YM}})^{2}\) and \(\xi=Q^{4}(1+Q_{\text{YM}})^{2}(5M^{4}-9M^{2}Q^{2}(1+Q_{\text{YM}})+4Q^{4}(1+Q_ {\text{YM}})^{2})\). After careful examination of the impact of both charges on the ISCO structure, some values of the ISCO radius are reported in Table 2 to illustrate the richness of this class of BH. The pair of values (\(Q,Q_{\text{YM}}\)) also fulfill the constraint Eq. (25). The first and third rows correspond, respectively, to the Schwarzschild and extremal Reissner-Nordstrom cases. For a fixed electric charge, say, \(Q=0.9\), negative \(Q_{\text{YM}}\) leads to a larger ISCO radius, even larger than the Schwarzschild case (see e.g. sixth row). On the contrary, keeping \(Q_{\text{YM}}\) fixed and positive, large values of \(Q\) can reduce the ISCO radius even below the extremal Reissner-Nordstrom case (see seventh row). Hence, the ISCO radius can be smaller than the extremal Reissner-Nordstrom case and larger than the Schwarzschild solution.
Interestingly, it is also possible to replicate the ISCO radius of the Schwarzschild and extremal Reissner-Nordstrom cases for the selected values of charges: \(Q_{\text{YM}}=1.732050\) and \(Q/M=-0.58333\); and \(Q_{\text{YM}}=1.41421\) and \(Q/M=-0.3750\), respectively, in agreement with Table (2). For the purely power Yang-Mill case (\(Q\to 0\)), the ISCO reads
\[r_{\text{ISCO}}^{\text{YM}}=\frac{6M}{1+Q_{\text{YM}}}, \tag{37}\]
which is, again, a modified version of the Schwarzschild solution. Notice that the ISCO radius is affected by the Yang-Mills charge in the same fashion as the event horizon (see Eq. (26)), that is, scaled by the factor \((1+Q_{\text{YM}})^{-1}\).
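For convenience, the closed-form ISCO of Eq. (36) can be evaluated directly; the sketch below (Python, with a few charge pairs taken from Table 2) reproduces the quoted values:

```python
import numpy as np

def r_isco(Q, QYM, M=1.0):
    """ISCO radius from the closed-form root quoted in Eq. (36)."""
    A = 1.0 + QYM
    xi = Q**4*A**2*(5*M**4 - 9*M**2*Q**2*A + 4*Q**4*A**2)
    mu = np.sqrt(xi) + 8*M**4 - 9*M**2*Q**2*A + 2*Q**4*A**2
    num = 4*M**4 - 3*M**2*Q**2*A + M**(4/3)*mu**(1/3)*(2*M**(4/3) + mu**(1/3))
    return num/(M**(5/3)*A*mu**(1/3))

print(r_isco(0.0, 0.0))       # 6.0   (Schwarzschild)
print(r_isco(1.0, 0.0))       # 4.0   (extremal Reissner-Nordström)
print(r_isco(0.0, 0.234567))  # ~4.86
print(r_isco(0.9, -0.2))      # ~6.0927
```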
All the aforementioned features for the event horizon and ISCO can also be observed in Fig. 1. In particular, it is depicted how these quantities behave as functions of the Yang-Mill charge for fixed values of the electric charge and how the extremal cases delimit the existence of the horizon (see points on the blue and red solid curves), thereby giving rise to the naked singularity beyond such critical values.
Having investigated the general properties of the non-trivial structure of the event horizon and ISCO radii in terms of both charges, and derived a useful relation between them that guarantees the cosmic censorship conjecture, we now proceed to investigate the behavior of spherical steady accretion flows onto this class of black holes. This will be studied after a brief description of the hydrodynamic equations.
## III Spherical steady accretion flows in a general theory of gravity
Accretion processes of ideal and polytropic fluids onto black holes in an arbitrary spacetime have been extensively investigated as astrophysical probes to reveal any deviation from general relativity. Although we are working in the framework of Einstein's gravity, we derive general accretion equations of steady flows in a spherically
Figure 1: The position of the event horizon (solid curves) and the innermost stable circular orbits (dashed curves) is plotted against the Yang-Mills charge \(Q_{\text{YM}}\) for several fixed values of the (normalized) electric charge \(q\equiv Q/M\), as indicated in the legend. The event horizon for the solutions denoted by the solid blue and red curves exists only in a certain region of the parameter space before the naked singularity becomes apparent. This region is delineated by the extremal cases (points on the curves), for which \(Q_{\text{YM}}\) takes the maximum possible value for a given \(q\). Conversely, there are solutions in which the innermost stable circular orbits exhibit discontinuities. This occurs in a very small region of the parameter space (see red and blue dashed curves), where the solution itself cannot be guaranteed. Grey horizontal lines indicate, as appropriate, the event horizon and the innermost stable circular orbit for the Schwarzschild solution.
symmetric spacetime5, but in a general background metric, following the general description of Ref. [90]. Here, the gravitational backreaction of the accreting fluid on the metric function is neglected. We consider a perfect fluid with total density \(\rho\), mass density \(\rho_{0}\), and internal energy density \(\epsilon\), such that \(\rho=\rho_{0}+\epsilon\). For isentropic fluids, the pressure can be defined as \(P=\omega\rho^{\gamma}\) where \(\omega\) is a constant and \(\gamma\) is the adiabatic index. The stress energy-momentum tensor is given by
Footnote 5: We consider the replacement \(f(r)^{-1}=g(r)\) in the line element Eq. (19) to be more general in the description. The standard ac
\[T^{\mu\nu}=(\rho+P)u^{\mu}u^{\nu}+Pg^{\mu\nu}, \tag{38}\]
where \(u^{\mu}=(u^{t},u^{r},0,0)\) is the four velocity of the fluid with radial accretion (and wind) flow. From the normalization condition one gets the relation \(u^{t}=\sqrt{\frac{g(u^{2}+f)}{f}}\), where we have defined for simplicity \(u\equiv u^{r}\). From the baryon conservation and energy-momentum conservation
\[\nabla_{\mu}(\rho_{0}u^{\mu}) =0, \tag{39}\] \[\nabla_{\mu}T^{\mu\nu} =0, \tag{40}\]
one obtains two master equations, respectively
\[\frac{\rho_{0}^{\prime}}{\rho_{0}}+\frac{f^{\prime}}{2f}+\frac{g ^{\prime}}{2g}+\frac{u^{\prime}}{u}+\frac{2}{r}=0, \tag{41}\] \[uu^{\prime}+\left(\frac{f^{\prime}}{2fg}+\frac{c_{s}^{2}}{g} \frac{\rho_{0}^{\prime}}{\rho_{0}}\right)(1+gu^{2})+\frac{g^{\prime}}{2g}u^{2 }=0, \tag{42}\]
where prime denotes radial derivative. Here the sound speed is defined as \(c_{s}^{2}\equiv\frac{dP}{d\rho}\) at constant entropy. We have also used the first law of thermodynamics in the form \(\frac{d\rho}{d\rho_{0}}=\frac{\rho+P}{\rho_{0}}\) to obtain the useful relation \(P^{\prime}=\frac{(\rho+P)}{\rho_{0}}c_{s}^{2}\)\(\rho_{0}^{\prime}\). Integration of Eqs. (41) and (42) gives, respectively, the mass accretion rate
\[\dot{M}=4\pi r^{2}u\rho_{0}, \tag{43}\]
and the relativistic version of the Bernoulli equation
\[f(1+gu^{2})\left(\frac{\rho+P}{\rho_{0}}\right)=C, \tag{44}\]
where \(C\) is an integration constant that must be defined, as well as the equation of state, in order to solve for the infall radial velocity.
## IV Accretion process in Einstein-Maxwell-Power Yang-Mills theory
We start by solving the system of differential equations Eqs. (41)-(42) that governs the dynamics of the fluid flow for the Einstein-Maxwell-Power Yang-Mills theory and the metric function given by Eq. (22) with the power \(p=1/2\). There should exist a critical radius that guarantees the monotonic increase of the velocity as \(r\) decreases and avoids singularities in the flow. At such a point, the flow speed equals the sound speed, as happens even in the Newtonian treatment of the problem. This condition imposes regularity in both equations at some critical point \(r_{c}\), resulting in
\[u_{c}^{2} =-\frac{Q^{2}-Mr_{c}}{2r_{c}^{2}}, \tag{45}\] \[c_{s,c}^{2} =\frac{-Q^{2}+Mr_{c}}{Q^{2}+r_{c}(-3M+2(1+Q_{\rm YM})r_{c})}, \tag{46}\]
for the critical velocity of the fluid and the sound speed, respectively. Although the Yang-Mills charge explicitly changes the structure of the fluid flow, it is possible, after some algebra, to write the critical velocity as \(u_{c}^{2}=\frac{c_{s,c}^{2}f}{1-c_{s,c}^{2}}\), as in the Schwarzschild case. With this, the critical radius can be determined:
\[r_{c}=\frac{M+3c_{s,c}^{2}M\pm\sqrt{(M+3c_{s,c}^{2}\,M)^{2}-8c_{s,c}^{2}(1+c_{ s,c}^{2})Q^{2}(1+Q_{\rm YM})}}{4c_{s,c}^{2}(1+Q_{\rm YM})}. \tag{47}\]
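For readers who wish to evaluate Eq. (47) numerically, the following minimal Python sketch (our own illustration; the function name, the unit choice \(G=c=1\) and the selection of the \(+\) branch are assumptions, not part of the original analysis) computes the critical radius and signals when no real root exists:

```python
import numpy as np

def critical_radius(M, q, QYM, cs2, branch=+1):
    """Critical (sonic) radius of Eq. (47), with Q = q * M and branch = +1 or -1
    selecting the sign in front of the square root."""
    Q = q * M
    A = M * (1.0 + 3.0 * cs2)
    disc = A**2 - 8.0 * cs2 * (1.0 + cs2) * Q**2 * (1.0 + QYM)
    if disc < 0:
        raise ValueError("No real critical radius: the existence condition is violated.")
    return (A + branch * np.sqrt(disc)) / (4.0 * cs2 * (1.0 + QYM))

# Schwarzschild limit (q = 0, QYM = 0): r_c = (1 + 3 cs2) M / (2 cs2)
print(critical_radius(M=1.0, q=0.0, QYM=0.0, cs2=0.1))   # -> 6.5
```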
The critical radial velocity matches the Reissner-Nordstrom velocity at leading order whereas the critical
| \(Q_{YM}\) | \(Q/M\) | \(r_{\rm ISCO}/M\) |
|---|---|---|
| 0 | 0 | 6 |
| 1.732050 | -0.58333 | 6 |
| 0 | 1 | 4 |
| 1.414213 | -0.3750 | 4 |
| 0.234567 | 0 | 4.860 |
| -0.2 | 0 | 7.50 |
| 0.234567 | 0.9 | 3.24 |
| -0.2 | 0.9 | 6.09274 |
| -0.2 | 1 | 5.67198 |
Table 2: Radius of the ISCO in terms of both the electric and Yang-Mills charges for some reference values, according to the relation Eq. (25) that preserves the cosmic censorship conjecture. For comparison, the Schwarzschild and Reissner-Nordström cases have been included in the first and third rows, respectively.
sound speed receives contributions from the Yang-Mills charge, which modifies the accretion flow. The Reissner-Nordstrom solution is recovered in the limit \(Q_{\rm YM}\to 0\), and the Schwarzschild case by taking, in addition, \(Q\to 0\).
Footnote 1: The \(Q_{\rm YM}\) is a function of the critical radius \(r_{c}\), which is defined by \(r_{c}^{2}=\frac{1+3c_{s}^{2}}{4c_{s}^{2}(1+Q_{\rm YM})}\).
The existence of the critical radius is ensured provided that the conditions
\[\begin{split} Q_{\rm YM}&<-1\ \vee\\ -1<& Q_{\rm YM}&<-1+\frac{(1+3c_{s,c}^{2 })^{2}}{8(Q/M)^{2}c_{s,c}^{2}(1+c_{s,c}^{2})},\end{split} \tag{48}\]
are satisfied. Such constraints are compatible with \(Q>0\) and the causality condition \(c_{s,c}^{2}<1\). Nevertheless, to prevent the naked singularity, the region \(Q_{\rm YM}<-1\) must be discarded in agreement with the condition set by Eq. (25). The critical radius for the pure power Yang-Mill case (\(Q=0\)) is
\[r_{c}^{\rm YM}=\frac{1+3c_{s}^{2}+\sqrt{(1+3c_{s}^{2})^{2}}}{4c_{s}^{2}(1+Q_{ \rm YM})}, \tag{49}\]
whose existence relies only on the nature of the matter fluid, that is, \(c_{s,c}^{2}>-1/3\), which is plainly guaranteed and consistent with the causality condition.
It is instructive to see how the variables involved affect the position of the critical radius in both cases. In doing so, it is convenient to introduce the (electric) charge-to-mass ratio \(q=Q/M\) and the dimensionless critical radius \(x_{c}=r_{c}/M\). Since we have many variables at play, our strategy is to take certain values of the electric and YM charges and let \(c_{s,c}^{2}\) vary within some reasonable range. Although values within \(c_{s,c}^{2}<1\) are the ones of physical interest, we deliberately relax this condition to get a more complete picture of the accretion problem. As we shall see, for \(c_{s,c}^{2}>1\) accretion is apparently possible but violates the causal propagation of the sound speed. In particular, we take charge values such that the Cauchy and event horizons exist simultaneously, with the exception of the case \(q=1.2\) and \(Q_{\rm YM}=-0.2\) (naked singularity), as can be seen in Fig. 2, where the electric charge exceeds its maximum allowed value as described in Tab. 1. The latter also violates the condition Eq. (48) for \(c_{s,c}^{2}\gtrsim 0.3\), whereas the other cases behave regularly for any \(c_{s,c}^{2}\). This is why the green curve is cut at a certain sound speed value.
For subsonic sound speeds the critical radius lies far away from the respective event horizons, while in the opposite range it can be located between the event horizon and the Cauchy horizon, unless it corresponds to the extremal case \(q=0.9\) and \(Q_{\rm YM}=-0.2\) (red curve), where the position of the critical radius is unaltered for \(c_{s,c}^{2}\geq 1\), namely once it reaches the event horizon.
For \(c_{s}^{2}=1\), the positions of all critical points coincide with those of their respective event horizons, as in the Schwarzschild and RN cases. This is marked with dots over all curves. Finally, in the limit of \(Q_{\rm YM}\to 0\), our results agree fully with accretion of perfect fluids in the RN metric [68].
### Accretion flow for isothermal fluids
In order to simplify our analysis and get some physical insights before proceeding in a more general way, we investigate first accretion of isothermal test fluids. The equation of state is then of the form \(P=\omega\rho\), from which the sound speed is simply derived: \(c_{s,c}^{2}=\omega\). This choice, in addition to reducing the analysis and the numerical computations considerably, gives us a general idea of the behavior of the mass density and radial velocity through the critical points for a given constant \(\omega\) and charges values. So, the implemented numerical strategy will cover some discrete values of the full parameter space that capture the general behavior of the accreting matter flow.
We recall that the YM charge leads to an enhancement of the electric charge up to \(\sim 3.2\) that still preserves the weak cosmic censorship. As a consequence, the critical radius can be located far away from the BH in comparison with the standard Schwarzschild and Reissner Nordstrom cases. So, for suitable comparison and representation, we take \(q\leq 1\) in the subsequent analysis.
Let us start by considering a stiff fluid \(\omega=1\). For certain values of the electric charge \(q\), a range of values
Figure 2: Position of the critical point in terms of the critical sound speed for different values of the electric and YM charges as described by the legend. Dots over curves point out the match of the critical radius with the event horizon which happens for \(c_{s,c}^{2}=1\). Green curve corresponds to a naked singularity whose associated critical radius exists only for \(c_{s,c}^{2}\lesssim 0.3\) and the condition Eq. (48) can be preserved.
for the YM charge that satisfies Eq. (48) is derived. The results are shown in Fig. 3 with bar legends depicting the YM charge. Left panels correspond to the radial velocities from top \(q=0\) to bottom \(q=1\) while right panels display the mass densities for the same values of charges. To read this plot correctly, notice that as the color tone intensifies, which is dictated by the enhancement of \(Q_{\rm YM}\) up to some positive upper value constrained by the existence of the solution itself according to Eq. (45), the trajectory of the infall radial velocity approaches the BH. A fair comparison can be made between the cases \(q=0\), \(q=0.5\) and \(q=0.7\), where we have chosen the same range of values for the YM charge, unlike the case \(q=1\) where negative values of \(Q_{\rm YM}\) have been taken instead. Red points, as well as their associated curves, correspond to the case with vanishing YM charge, which of course coincides with the Schwarzschild case for \(q=0\). Turning on the Yang-Mills charge, but still keeping \(q=0\), leads to the purely power Yang-Mills case. Notice that from \(q=0\) up to \(q=0.7\), several accretion flows are allowed to transit around the standard RN case (red points) provided that \(Q_{\rm YM}\neq 0\). Hence, there exist as many possible transonic solutions as values for the YM charge. The main effect of \(Q_{\rm YM}\) on the critical point is to bring the transonic flow closer to the BH as it increases. As for the case \(q=1\), the critical point (red color) coincides with the event horizon, in agreement with our previous discussion about \(c_{s,c}^{2}=1\) found below Eq. (48). Notice that the abscissa now covers larger values of the radial coordinate compared to the other cases, so a comparison with the above cases must be made with caution.
This first comprehensive inspection tells us that the Yang-Mills charge remarkably enriches the physical conditions behind the transonic flow of accreting matter, thus allowing new critical or sonic points to exist. We do not explain the other cases in detail because, considering the aforementioned properties of the stiff case, they can be described in a general way as follows. As \(\omega\) gets smaller, passing from the ultra-relativistic case \(\omega=1/2\) (Fig. 4), through the relativistic case (Fig. 5), to the case \(\omega=1/4\) (Fig. 6), and considering the same values of charges, all critical points move away from the BH. This is why the sequences of curves spread out, so that they must be relocated at the new positions of the critical points. This aspect becomes more noticeable as the electric charge \(q\) increases. Another noteworthy feature is that, for \(\omega=1/3\) and \(\omega=1/4\), as well as small electric charges \(q<1\), the transonic condition is no longer guaranteed. As a result, only a few sequences of curves are allowed to transit through the critical points, in particular those characterized by \(Q_{\rm YM}>0\).
Finally, we focus on the pairs of charge values that lead to the extremal case for this new BH solution and pay attention to how the sequences of velocity curves transit for the same \(\omega\)-values as before. As a reference, the infall radial velocity for the standard extremal Reissner Nordstrom case (\(Q=1,Q_{\rm YM}=0\)) is described by the black curve, along with some cases characterized by positive and negative Yang-Mills charges whose associated velocities are located around it. This is shown in Fig. 7. All points over the curves mark the position of the critical radius. Detailed information about the precise values of charges, critical points, and critical velocities can be found in Table 3. All the main features discussed above remain unaltered. Notice, however, that for the case \(\omega=1\) the value of the critical velocity does not change, unlike the other cases.
### Accretion rate for a polytropic fluid
At this point, the properties of the steady transonic flow at the sonic point are known, which allows us to compute the maximum accretion rate for the Einstein-Maxwell power-Yang-Mills theory. Before proceeding, it is very useful to rewrite the radial velocity and the sound speed at the critical point in terms of the known boundary conditions. This also demands knowledge of the equation of state, so we consider a non-relativistic baryonic gas with the polytropic equation
\[P=Kn^{\gamma}, \tag{50}\]
where \(\gamma\) is the adiabatic index and \(K\) is a constant. With this and from the first law of thermodynamics one can get
\[\rho=mn+\frac{K}{\gamma-1}n^{\gamma}. \tag{51}\]
#### iv.2.1 Relativistic regime
We provide here an exact expression for the accretion rate in the relativistic regime for the theory under consideration. When studying accretion, however, most approaches focus only on the asymptotic limit, thus leaving the full picture of the problem incomplete. To our surprise, no work has analytically investigated the steady-state spherical accretion rate in alternative theories of gravity beyond GR in the fully relativistic regime. We follow closely Ref. [77], where Bondi accretion of steady spherical gas flow onto a Schwarzschild black hole has been studied.
From the polytropic equation, the sound speed can be written in terms of the mass density
\[c_{s}^{2}=\frac{\gamma K\rho_{0}^{\gamma-1}}{1+\gamma K\rho_{0}^{\gamma-1}/( \gamma-1)}. \tag{52}\]
Evaluating this expression at the critical point and in the asymptotic region, it is easy to find the relation
\[\rho_{0,s}=\rho_{0,\infty}\left(\frac{c_{s,c}^{2}}{c_{s,\infty}^{2}}\right)^{ \frac{1}{\gamma-1}}\left(\frac{\gamma-1-c_{s,\infty}^{2}}{\gamma-1-c_{s,c}^{2 }}\right)^{\frac{1}{\gamma-1}}. \tag{53}\]
Figure 3: **Stiff case \(\omega=1\)**. _Left panel_: infall radial velocity for specific values of the electric charge \(q\) along with the effect of simultaneously changing the Yang-Mills charge \(Q_{\rm YM}\), as described by the bar legend. Red points mark the critical point for the standard RN BH case \(Q_{\rm YM}=0\). _Right panel_: mass density distribution due to the BH gravitational potential for the same values of the electric charges as in the radial velocity. Notice that in the bottom panels negative values of \(Q_{\rm YM}\) have been considered only to allow \(q=1\) to exist according to Table 1.
Figure 4: **Ultra-relativistic case \(\omega=1/2\)**. Left panels show the infall radial velocity while right panels describe the mass density distribution due to the BH gravitational potential for a specific value of the electric charge \(q\), along with the effect of simultaneously changing the Yang-Mills charge \(Q_{\rm YM}\) as described by the bar legend. Notice that in the bottom panels negative values of \(Q_{\rm YM}\) have been considered only to allow \(q=1\) according to Table 1.
Figure 5: **Relativistic case \(\omega=1/3\)**. Left panels show the infall radial velocity while right panels describe the mass density distribution due to the BH gravitational potential for a specific value of the electric charge \(q\), along with the effect of simultaneously changing the Yang-Mills charge \(Q_{\rm YM}\), as described by the bar legend. The range \(-0.5<Q_{\rm YM}<0\) is delimited in the first three rows because those values do provide transonic flows. Notice that in the bottom panels negative values of \(Q_{\rm YM}\) have been considered only to allow \(q=1\) according to Table 1.
Figure 6: **Case \(\omega=1/4\)**. Left panels show the infall radial velocity while right panels describe the mass density distribution due to the BH gravitational potential for a specific value of the electric charge \(q\), along with the effect of simultaneously changing the Yang-Mills charge \(Q_{\rm YM}\), as described by the bar legend. We have delimited the range \(-0.5<Q_{\rm YM}<0\) for the same reason as in the relativistic case. Notice that in the bottom panels negative values of \(Q_{\rm YM}\) have been considered only to allow \(q=1\) according to Table 1.
Finally, the relativistic Bernoulli equation, evaluated at the critical point, provides the relation
\[(1+3c_{s,c}^{2})\left(1-\frac{c_{s,c}^{2}}{\gamma-1}\right)^{2}=\left(1-\frac{c_{ s,\infty}^{2}}{\gamma-1}\right)^{2}. \tag{54}\]
So, once the boundary condition at infinity is given, all the quantities at the critical point are uniquely determined. Notice that the above equation is a cubic equation for \(c_{s,c}^{2}\) but only one solution is (real) physical for a given polytropic equation of state. This is solved by using a root-finding procedure.
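As a concrete illustration of this root-finding step (a sketch of ours, not the authors' code), Eq. (54) can be solved for \(c_{s,c}^{2}\) with a standard bracketing method, assuming the physically relevant root lies between \(c_{s,\infty}^{2}\) and \(\gamma-1\):

```python
from scipy.optimize import brentq

def critical_sound_speed_sq(cs_inf2, gamma):
    """Solve Eq. (54) for c_{s,c}^2 given the boundary value c_{s,infty}^2,
    assuming the physical root lies in (c_{s,infty}^2, gamma - 1)."""
    rhs = (1.0 - cs_inf2 / (gamma - 1.0))**2

    def residual(x):  # left-hand side minus right-hand side of Eq. (54)
        return (1.0 + 3.0 * x) * (1.0 - x / (gamma - 1.0))**2 - rhs

    return brentq(residual, cs_inf2 * (1.0 + 1e-12), (gamma - 1.0) * (1.0 - 1e-9))

# For gamma = 4/3 and c_{s,infty}^2 = 1e-3 the result is close to the
# non-relativistic estimate 2 c_{s,infty}^2 / (5 - 3 gamma) = 2e-3.
print(critical_sound_speed_sq(1e-3, 4.0 / 3.0))
```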
Considering all the above, the critical accretion rate
\[\dot{M}=4\pi\rho_{0,s}u_{s}r_{s}^{2}, \tag{55}\]
can be computed easily
\[\dot{M}_{\rm MRN}=4\pi\left(\frac{M}{c_{s,\infty}^{2}}\right)^{2}c_{s,\infty}^ {2}\;\rho_{0,\infty}\;\lambda_{\rm MRN}, \tag{56}\]
Figure 7: Infall radial velocity for a couple of values \(q\) and \(Q_{\rm YM}\) that lead to the extremal BH solution of the theory. As the Yang-Mills charge is turned on, new radial velocities are allowed to transit around the critical points. See also Table 4 for further information.
| \(q\) | \(Q_{YM}\) | \(\omega=1\): \((x_{c},v_{c})\) | \(\omega=1/2\): \((x_{c},v_{c})\) | \(\omega=1/3\): \((x_{c},v_{c})\) | \(\omega=1/4\): \((x_{c},v_{c})\) |
|---|---|---|---|---|---|
| 1.41421 | -0.5 | (2, 0.41833) | (3, 0.505524) | (4, 0.51841) | (5, 0.517687) |
| 1.19523 | -0.3 | (1.42857, 0.41833) | (2.14286, 0.483044) | (2.85714, 0.467706) | (3.57143, 0.442718) |
| 1.05409 | -0.1 | (1.11111, 0.41833) | (1.66667, 0.459465) | (2.22222, 0.410789) | (2.77778, 0.352134) |
| 1 | 0 | (1, 0.41833) | (3/2, 0.44721) | (2, 0.37914) | (5/2, 0.296645) |
| 0.95346 | 0.1 | (0.90909, 0.41833) | (1.36365, 0.434606) | (1.81819, 0.344595) | (2.27274, 0.228028) |
| 0.87705 | 0.3 | (0.76923, 0.41833) | (1.15389, 0.408229) | (1.53849, 0.262183) | – |
| 0.81649 | 0.5 | (0.66666, 0.41833) | (1, 0.380048) | (1.33334, 0.136908) | – |
Table 3: Critical radii and velocities for pairs of values \(q\) and \(Q_{YM}\) that correspond to the new extremal BH solution according to Eq. (24), for a given constant \(\omega\). For \(\omega=1\), the position of the critical point matches the corresponding event horizon, as in the Schwarzschild and RN cases.
with a modified eigenvalue
\[\begin{split}\lambda_{\text{MRN}}\equiv&\left(\frac{c_{s, c}^{2}}{c_{s,\infty}^{2}}\right)^{\frac{5-3\gamma}{\gamma-1}}\left(\frac{\gamma-1-c_{s, \infty}^{2}}{\gamma-1-c_{s,c}^{2}}\right)^{\frac{1}{\gamma-1}}\times\\ &\frac{1}{4}\beta(1+3c_{s,c}^{2})^{3/2}.\end{split} \tag{57}\]
Here the \(\beta\) factor contains information of both charges whereby it accounts for deviation from the Schwarzschild solution and the RN case as well. This is defined as
\[\begin{split}\beta=&\frac{1}{4(1+Q_{\text{YM}})^{2 }}\times\\ &\left[1+\sqrt{1-\frac{8c_{s,c}^{2}(1+c_{s,c}^{2})q^{2}(1+Q_{ \text{YM}})}{(1+3c_{s,c}^{2})^{2}}}\right]^{2}.\end{split} \tag{58}\]
Computing the ratio of the modified Reissner-Nordstrom to the Schwarzschild accretion rates is particularly convenient for quantifying the effect of both charges
\[\frac{\dot{M}_{\text{MRN}}}{\dot{M}_{\text{Sch}}}=\beta. \tag{59}\]
The purely power Yang-Mills case is independent of the adiabatic index and the boundary condition. This provides \(\beta_{\text{YM}}=\frac{1}{(1+Q_{\text{YM}})^{2}}\). Considering \(Q_{\text{YM}}>0\) leads to \(\beta<1\) and _vice versa_. It hence means that \(Q_{\text{YM}}<0\) produces an enhancement of the accretion rate, which does not happen in the standard Reissner Nordstrom (see e.g. [69; 91]) and Kerr [92] solutions: there, the mass accretion rate decreases as the charge and spin values increase. On the other hand, the accretion rate for the Schwarzschild solution is recovered when both charges are turned off, leading to \(\beta\to 1\), as can be plainly checked. Solving the Bernoulli equation Eq. (54), that is, fixing the sound speed at the boundary and solving for \(c_{s,c}^{2}\), one can fully compute the \(\beta\) factor (Eq. (58)) and hence determine Eq. (59). This is shown in Fig. 8.
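A minimal numerical sketch of this procedure (function names and parameter values are our own, chosen only for illustration) evaluates the \(\beta\) factor of Eq. (58), i.e., the rate ratio of Eq. (59):

```python
import numpy as np

def beta_factor(q, QYM, cs_c2):
    """Eq. (58): deviation of the accretion rate from the Schwarzschild value."""
    inner = 1.0 - 8.0 * cs_c2 * (1.0 + cs_c2) * q**2 * (1.0 + QYM) / (1.0 + 3.0 * cs_c2)**2
    return (1.0 + np.sqrt(inner))**2 / (4.0 * (1.0 + QYM)**2)

# c_{s,c}^2 obtained from Eq. (54) for gamma = 4/3 and c_{s,infty}^2 = 1e-3
# (roughly 2e-3, see the root-finding sketch above).
cs_c2 = 2.0e-3
for QYM in (-0.5, 0.0, 0.5):
    print(QYM, beta_factor(q=0.5, QYM=QYM, cs_c2=cs_c2))
# QYM < 0 gives beta > 1 (enhanced rate), QYM > 0 gives beta < 1, and
# q = QYM = 0 recovers the Schwarzschild value beta = 1.
```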
It is important to mention that the larger the boundary sound speed, the smaller the accretion rate variations are, keeping both the adiabatic index \(\gamma\) and the electric charge fixed. Also, taking \(q\) small makes things less distinguishable for different values of \(\gamma\). So we take \(c_{s}^{2}=0.01\) and \(q>0.3\) to see changes between different values of \(\gamma\). Intersections of all curves with the (gray) dashed vertical line \(Q_{\text{YM}}=0\) represent the Reissner Nordstrom accretion rates (strictly for \(q\leq 1\)), which in all cases are below one, as is known. Notice that, for \(\gamma=4/3,5/3\), the effect of the electric charge is quite perceptible: compare the \(q=0\) (solid black curve) and \(q=1\) (magenta curve) cases; unlike the case \(\gamma=1\), where both curves are practically indistinguishable. All curves, however, converge as \(Q_{\text{YM}}\to-1\), independently of the electric charge and adiabatic index. In general, the accretion rate efficiency in the relativistic regime (\(\gamma=4/3\)) is only larger than its non-relativistic counterpart (\(\gamma=5/3\)) at the sub-percent level. However, in the isothermal limit \(\gamma\to 1\), the difference is at the level of a few percent compared to the previous cases. Even though the electric charge decreases the accretion rate, it is possible to have \(\dot{M}_{\text{MRN}}\geq\dot{M}_{\text{Sch}}\) by taking \(Q_{\text{YM}}<0\).
As a main result, the Yang-Mills charge can, for \(Q_{\text{YM}}<0\), correct the accretion rate deficiency caused by the electric charge, or, for \(Q_{\text{YM}}>0\), practically halt the accretion process because the critical point is well inside the event horizon. The enhancement of the accretion rate can be up to order \(10^{2}\) for \(Q_{\text{YM}}\to-1\) in comparison to the standard Schwarzschild case. These results hold only for the power Yang-Mills case \(p=1/2\). We expect, however, to find similar qualitative results for \(p\neq 1/2\) whenever it leads to two branches (\(\pm\)) of solutions for the Yang-Mills charge when finding the roots of the polynomial \(Q_{\text{YM}}=\frac{2^{p-1}q_{\text{YM}}^{2}}{4p-3}\).
#### iv.2.2 Asymptotic limit
We do not describe in detail all derivations regarding the hydrodynamic equations for the non-relativistic case since most of them are independent of the metric background and can be found, for instance, in Ref. [78]. This description is strictly valid for a polytropic fluid Eq. (50) with adiabatic index \(\gamma<5/3\)[78; 92]. The Bernoulli equation provides the relation
\[c_{s,c}^{2}\approx\frac{2c_{s,\infty}^{2}}{(5-3\gamma)}, \tag{60}\]
for the non-relativistic condition \(c_{s,c}\ll 1\) which holds for reasonable large radius, \(r\gg r_{c}\), far away from the BH gravitational influence. The same condition leads to the simple relation
\[c_{s,c}^{2}\approx K\gamma\;\rho_{0,c}^{\gamma-1}, \tag{61}\]
between the sound speed and the mass density at the critical point. This implies that the mass density can be expressed in terms of the sound speed at the infinity in view of Eq. (60) to yield
\[\rho_{0,c}\approx\rho_{0,\infty}\left(\frac{c_{s,c}^{2}}{c_{s,\infty}^{2}} \right)^{\frac{1}{\gamma-1}}. \tag{62}\]
At this point all of the above is standard and the quantities do not receive contributions from the effective global charges \(Q\) and \(Q_{\text{YM}}\) at the lowest order in \(c_{s,c}\). This is not the case, however, for the critical radius Eq. (47), where the YM charge already appears explicitly at leading order, affecting the Schwarzschild critical radius as a non-linear correction. At the next-to-leading order, both charges \(Q\) and \(Q_{\text{YM}}\) appear independently of one another. The next correction, however, reveals the coupling between them as a non-linear manifestation of the Maxwell-Power Yang-Mills structure:
\[r_{c}\approx\frac{M}{2(1+Q_{\rm YM})c_{s,c}^{2}}+\frac{3M^{2}-2Q^{2}(1+Q_{\rm YM})}{2M(1+Q_{\rm YM})}+\frac{2Q^{2}(M^{2}-Q^{2}(1+Q_{\rm YM}))}{M^{3}}c_{s,c}^{2}+\mathcal{O}(c_{s,c}^{4}).\]
Keeping only contributions up to next leading order and using Eq. (60), the critical radius can be approximated to
\[r_{c}\approx\frac{1}{\eta}\left(\frac{(5-3\gamma)}{4c_{s,\infty}^{2}}M+\frac{3 M^{2}-2Q^{2}\eta}{2M}\right), \tag{63}\]
where the dimensionless correction factor due to the YM charge has been defined simply as
\[\eta=1+Q_{\rm YM}, \tag{64}\]
considering the restriction \(Q_{\rm YM}\neq-1\). It is instructive to write the purely Power Yang-Mills case
\[r_{c}^{\rm YM}\approx\frac{M}{\eta}\left(\frac{(5-3\gamma)}{4c_{s,\infty}^{2} }+\frac{3}{2}\right), \tag{65}\]
and the non-relativistic version of the Reissner-Nordstrom case
\[r_{c}^{\rm RN}\approx\frac{(5-3\gamma)}{4c_{s,\infty}^{2}}M+\frac{3M^{2}-2Q^{2 }}{2M}. \tag{66}\]
Considering all the above, we can proceed now to compute the accretion rate for the Einstein-Maxwell-Power-Yang-Mills theory which corresponds to a non-relativistic flow with non-vanishing corrections to the Newtonian transonic flow. This results in
\[\begin{split}\dot{M}^{\rm NR}_{\rm MRN}\approx 4\pi\rho_{0, \infty}\left(\frac{M}{\eta}\right)^{2}c_{s,\infty}^{-3}\left(\frac{1}{2}\right) ^{\frac{\gamma+1}{2(\gamma-1)}}\times\\ \left(\frac{5-3\gamma}{4}\right)^{\frac{3\gamma-5}{(\gamma-1)}} \chi^{2},\end{split} \tag{67}\]
where the function \(\chi=1+\left(6-4\frac{Q^{2}}{M^{2}}\eta\right)\frac{c_{s,\infty}^{2}}{(5-3\gamma)}\) has been introduced for comparison purposes with the Schwarzschild accretion rate expression in the non-relativistic regime. Moreover, this function contains information on both the electric and Yang-Mills charges even far outside the event horizon. The non-relativistic version of the Reissner-Nordstrom solution is straightforwardly recovered in the limit \(\eta\to 1\), and the Newtonian accretion rate is additionally recovered when \(Q\to 0\). Consequently, we can write the accretion rate in the compact form
\[\dot{M}^{\rm NR}_{\rm MRN}=\left(\frac{\chi}{\eta}\right)^{2}\dot{M}_{\rm New}. \tag{68}\]
In this way, the ratio \(\left(\frac{\chi}{\eta}\right)^{2}\) accounts for the deviation from the Newtonian accretion rate and is the quantity we have to pay attention to. A first examination tells us that the effect of the electric charge is not so prominent (\(\sim 0.02\%\)), since it is attenuated by the squared sound speed, which is taken to be very small (see Footnote 6), although it can be increased as \(\gamma\to 5/3\). For values within the range \(1<\gamma<5/3\), however, there are no appreciable differences when \(q\) varies within the approximation \(c_{s,\infty}^{2}\ll 1\). Interestingly, the factor \(\eta\), which depends on the Yang-Mills charge, does produce, on the contrary, significant changes in the accretion rate, especially for very small values, since the accretion rate scales as \(\eta^{-2}\). Hence, the largest differences will appear for charge values \(Q_{\rm YM}\sim\mathcal{O}(-0.1)\), or more precisely near the asymptotic value \(Q_{\rm YM}\to-1\), as happens for the fully relativistic case (see Fig. 8).
Footnote 6: This is, however, somewhat artificial because of the non-linear dependence of the accretion rate on the critical radius, which leads to writing the accretion rate in the form of Eq. (68), i.e., as a factor times the Newtonian accretion rate.
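A short numerical check of this scaling (again only a sketch with our own naming) evaluates the factor \(\left(\chi/\eta\right)^{2}\) of Eq. (68):

```python
def nonrel_rate_ratio(q, QYM, cs_inf2, gamma):
    """(chi/eta)^2 of Eq. (68): deviation of the non-relativistic accretion rate
    from the Newtonian (Bondi) value; valid for gamma < 5/3."""
    eta = 1.0 + QYM
    chi = 1.0 + (6.0 - 4.0 * q**2 * eta) * cs_inf2 / (5.0 - 3.0 * gamma)
    return (chi / eta)**2

# The Newtonian limit (q = 0, QYM = 0, cs_inf2 -> 0) gives a ratio of 1, while
# QYM -> -1 strongly enhances the rate through the eta**(-2) scaling.
for QYM in (0.0, -0.5, -0.9):
    print(QYM, nonrel_rate_ratio(q=0.5, QYM=QYM, cs_inf2=1e-3, gamma=1.4))
```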
## V Discussion and Conclusions
In the present paper, we have investigated the spacetime structure of the BH that arises from the Einstein-Maxwell Power-Yang-Mills theory with power \(p=1/2\). The presence of the Yang-Mills charge modifies non-trivially the structure of the standard Reissner-Nordstrom BH, and, in the absence of the electric counterpart (the purely power Yang-Mills case), of the Schwarzschild solution. This can be noticed through Eqs. (24) and (26), which describe, respectively, the position of the event horizons in the mentioned cases. In particular, negative Yang-Mills charges can considerably increase the event horizon for a given electric charge, which can take, in this new modified version of the Reissner-Nordstrom BH, values up to \(\sim 3.2\) while still preserving the weak cosmic censorship conjecture. Interestingly, the ISCO radius can be smaller than in the extremal Reissner-Nordstrom case and larger than in the Schwarzschild solution, depending on the combination of the charges (\(Q,Q_{\rm YM}\)), as reported in Table 2. The introduction of the Yang-Mills charge, along with a specific electric charge value, can lead to an unexpected degeneracy of the event horizon and ISCO radii with respect to the Schwarzschild and Reissner-Nordstrom cases. This is not the case in the purely power Yang-Mills scenario. This degeneracy may be broken, for instance, with the aid of the BH shadow using the Event Horizon Telescope observations of Sgr A\({}^{\star}\).
As a first astrophysical implication of this theory, we have investigated the steady, transonic properties of isothermal test fluids through extensive numerical examples that cover from the Einstein-Power-Yang-Mills (\(q=0\) and \(Q_{\rm YM}\neq 0\)) to the Einstein-Maxwell-Power-Yang-Mills case (\(q\neq 0\) and \(Q_{\rm YM}\neq 0\)). Thus, broad numerical solutions for sequences of velocity and density curves have been depicted for discrete values of the charges involved and a given equation of state \(\omega\).
The derived results point out that all critical points shift towards the horizon for \(Q_{\rm YM}>0\), while for \(Q_{\rm YM}<0\) such points move away from the event horizon. Nevertheless, for \(\omega=1/3\) and \(\omega=1/4\) isothermal fluids are affected differently from the other cases considered: for some positive values of \(Q_{\rm YM}\), the radial velocity cannot transit through the critical point, which is a necessary condition to have a well-behaved transonic flow and to avoid singularities in it. This restricts the sign of the Yang-Mills charge to mostly negative values for \(q\leq 1/2\) and opens, on the other hand, the possibility of having positive values for larger electric charge \(q>1\). In contrast, for \(\omega=1\), \(\omega=1/2\) and \(q\leq 1/2\), positive values of \(Q_{\rm YM}\) are allowed. Although there is a slight preference for negative Yang-Mills charges for small \(q\), this is not a conclusive outcome, in the sense that the transonic condition depends sensitively on both the electric charge and the nature of the fluid. This suggests that the use of observational data is imperative to set the suitable sign of \(Q_{\rm YM}\) for ensuring the astrophysical applicability of the theory and, more generally, to constrain more robustly the available parameter space beyond the theoretical bounds derived in this paper on physical grounds.
At the lowest order in \(c_{s}\), or equivalently in the weak gravity regime, we have found that both the electric and the Yang-Mills charges contribute independently to the critical radius. The latter, however, enters the accretion rate expression in a non-linear way. Furthermore, we have quantified the effect of the Yang-Mills charge, in the full non-linear gravity regime, on the accretion rate for a polytropic fluid. The effect of the Yang-Mills charge results, in the most optimistic scenario, in an enhancement by a factor of up to \(10^{2}\) for \(Q_{\rm YM}\to-1\). This conclusion holds for the range \(1<\gamma<5/3\) and is quite independent of the electric charge. As a main result, the mass accretion rate efficiency can be considerably improved, with respect to the standard Reissner-Nordstrom and Schwarzschild solutions, for negative values of the Yang-Mills charge. Physically, this can be understood as follows: since the critical points are located far from the BH for \(Q_{\rm YM}<0\), particle velocities reach the transonic regime much earlier than for \(Q_{\rm YM}\geq 0\), where the critical points lie near the BH; a larger contribution to the accretion flow of infalling particles towards the BH is therefore naturally expected as a result of the spherical symmetry. This collective effect translates into a larger accretion rate.
As a striking similarity to other BH solutions, we should mention that the Einstein-Maxwell Power-Yang-Mills black hole background for \(p=1/2\) mimics a charged black hole solution surrounded by a cloud of strings for concrete values of the parameters of such a solution (see, for instance, [93]). In particular, the asymptotic structure of the metric potential is also modified, i.e., instead of recovering the asymptotically flat solution \(f(r\to\infty)\sim 1\), we obtain a slightly shifted modification, namely \(f(r\to\infty)\sim 1+Q_{\rm YM}\), in agreement with the asymptotic behavior predicted for a cloud of strings, \(f(r\to\infty)\sim 1+C\), with \(C\) a dimensionless constant. This similarity between non-linear charged black holes and a solution surrounded by a cloud of strings will be addressed in a future work.
A promising phenomenological approach to put constraints on the electric and Yang-Mills charges involves
Figure 8: Ratio of the mass accretion rates (Eq. (59)) as a function of the Yang-Mills charge for different polytropic indexes and electric charges, as described by the legend. Notice that the Schwarzschild case is formally recovered when both charges vanish. However, as \(\gamma\to 1\), \(Q_{\rm YM}\to 0\) and \(q<1\), \(\dot{M}_{\rm MRN}\) approaches closely to \(\dot{M}_{\rm Sch}\). We have chosen \(c_{s,\infty}^{2}=10^{-3}\) as a reference value. The transonic condition for the critical velocity \(u_{c}>c_{s,\infty}\) is fulfilled, where the maximum critical velocity is obtained for \(\gamma=5/3\).
studying the main properties of the EHT's images of Sgr A\({}^{\star}\) and M87\({}^{\star}\) BHs within this new class of BH solution, such as the shadows and photon rings, surrounded by an optically and geometrically thin accretion disk. To achieve this, the Newman-Janis algorithm must be implemented to find the corresponding rotating charged BH solutions. However, we have recently made progress in the study of quasi-normal modes and shadows within this setup, as presented in our recent work [86].
###### Acknowledgements.
G. G. acknowledges financial support from Vicerrectoria de Investigacion, Desarrollo e Innovacion - Universidad de Santiago de Chile, Proyecto DICYT, Codigo 042031CM_POSTDOC. A.R. is funded by the Generalitat Valenciana (Prometeo excellence programme grant CIPROM/2022/13) and by the Maria Zambrano contract ZAMBRANO 21-25 (Spain).
|
2308.12013 | Quantum-Noise-Driven Generative Diffusion Models | Generative models realized with machine learning techniques are powerful
tools to infer complex and unknown data distributions from a finite number of
training samples in order to produce new synthetic data. Diffusion models are
an emerging framework that have recently overcome the performance of the
generative adversarial networks in creating synthetic text and high-quality
images. Here, we propose and discuss the quantum generalization of diffusion
models, i.e., three quantum-noise-driven generative diffusion models that could
be experimentally tested on real quantum systems. The idea is to harness unique
quantum features, in particular the non-trivial interplay among coherence,
entanglement and noise that the currently available noisy quantum processors do
unavoidably suffer from, in order to overcome the main computational burdens of
classical diffusion models during inference. Hence, we suggest to exploit
quantum noise not as an issue to be detected and solved but instead as a very
remarkably beneficial key ingredient to generate much more complex probability
distributions that would be difficult or even impossible to express
classically, and from which a quantum processor might sample more efficiently
than a classical one. An example of numerical simulations for an hybrid
classical-quantum generative diffusion model is also included. Therefore, our
results are expected to pave the way for new quantum-inspired or quantum-based
generative diffusion algorithms addressing more powerfully classical tasks as
data generation/prediction with widespread real-world applications ranging from
climate forecasting to neuroscience, from traffic flow analysis to financial
forecasting. | Marco Parigi, Stefano Martina, Filippo Caruso | 2023-08-23T09:09:32Z | http://arxiv.org/abs/2308.12013v3 | # Quantum-Noise-driven Generative Diffusion Models
###### Abstract
Generative models realized with machine learning techniques are powerful tools to infer complex and unknown data distributions from a finite number of training samples in order to produce new synthetic data. Diffusion models are an emerging framework that have recently overcome the performance of the generative adversarial networks in creating synthetic text and high-quality images. Here, we propose and discuss the quantum generalization of diffusion models, i.e., three quantum-noise-driven generative diffusion models that could be experimentally tested on real quantum systems. The idea is to harness unique quantum features, in particular the non-trivial interplay among coherence, entanglement and noise that the currently available noisy quantum processors do unavoidably suffer from, in order to overcome the main computational burdens of classical diffusion models during inference. Hence, we suggest to exploit quantum noise not as an issue to be detected and solved but instead as a very remarkably beneficial key ingredient to generate much more complex probability distributions that would be difficult or even impossible to express classically, and from which a quantum processor might sample more efficiently than a classical one. An example of numerical simulations for an hybrid classical-quantum generative diffusion model is also included. Therefore, our results are expected to pave the way for new quantum-inspired or quantum-based generative diffusion algorithms addressing more powerfully classical tasks as data generation/prediction with widespread real-world applications ranging from climate forecasting to neuroscience, from traffic flow analysis to financial forecasting.
**Keywords:** Generative Models, Diffusion Models, Quantum Machine Learning, Quantum Noise, Quantum Computing
In Machine Learning (ML), diffusion probabilistic models, or briefly Diffusion Models (DMs), are an emerging class of generative models used to learn an unknown data distribution in order to produce new data samples. They were first proposed by Sohl-Dickstein et al. [1] and take inspiration from diffusion phenomena of non-equilibrium statistical physics. The underlying core idea of DMs is to gradually and slowly destroy the information encoded into the data distribution until it becomes fully noisy, and then learn how to restore the corrupted information in order to generate new synthetic data. More precisely, the generic structure of diffusion models consists of two stages: (i) a _diffusion_ (or _forward_) process and (ii) a _denoising_ (or _reverse_) process. In the former phase, a training sample is progressively perturbed by adding noise, typically
Gaussian, until all data information is destroyed. The increasing perturbation of information due to the systematic and progressive injection of noise can be physically understood as if the noise propagated inside the data structure, as shown in Fig. 1 from left to right. Let us highlight the fact that in this first stage the training of any ML model is not required. In the second phase, the previous diffusive dynamics is slowly reversed in order to restore the initial data information. The goal of this phase is to learn how to remove noise correctly and produce new data starting from uninformative noise samples, as in Fig. 1 from right to left. In contrast to the forward diffusion process, the noise extraction--and as a result the data information retrieval--is implemented by training an ML model, typically based on a so-called U-Net neural network (NN) architecture [2]. In detail, U-Net models are structured as a succession of convolutional layers followed by an equal number of deconvolutional layers, where each deconvolution takes as input the output of the previous deconvolution together with a copy of the output of the corresponding convolutional layer in reverse order. The procedure described above allows DMs to successfully address the main complication in the design of probabilistic models, i.e., being _tractable_ and _flexible_ at the same time [1, 3]. In fact, as alternatives to DMs there are other generative probabilistic models, for instance,
Figure 1: Depiction of the diffusion (from left to right) and denoising (from right to left) processes within a diffusion probabilistic model framework. The original image \(\mathbf{x}_{0}\) sampled from the unknown data distribution \(p(\mathbf{x}_{0})\) is progressively perturbed (\(t\to t+1\)) by adding noise to obtain a latent variable \(\mathbf{x}_{T}\) from a known and tractable distribution where the information is completely destroyed. In our framework the diffusion process can be implemented with a classical or a quantum stochastic dynamics. The denoising process is trained to approximate the structure of the data distribution in order to generate new samples. The latter is implemented step by step, using a classical (on the left in blue) or quantum (on the right in orange) parameterized model \(\hat{U}(\theta)\) in order to approximate the backward mapping. The standard diffusion models implement both the diffusion and the denoising processes in a classical framework. We propose three different new approaches for the other cases: i) classical diffusion and quantum denoising (CQGDM); ii) quantum diffusion and classical denoising (QCGDM); iii) quantum diffusion and quantum denoising (QQGDM). A similar picture can be applied to time series.
Autoregressive Models (ARMs) that are generally tractable but not flexible, or Variational Auto Encoders (VAEs) [4] and Generative Adversarial Networks (GANs) [5] that are flexible but not tractable.
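To make the forward stage concrete, the following minimal NumPy sketch (our own illustration, which assumes the common variance-preserving Gaussian parameterization of DDPMs and a simple linear noise schedule, not any specific choice made in this work) progressively corrupts a toy data sample until it is indistinguishable from Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 2e-2, T)       # hand-designed noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)     # cumulative product (1 - beta_1)...(1 - beta_t)

def forward_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form for the Gaussian forward kernel."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = rng.uniform(-1.0, 1.0, size=(16, 2))   # toy 2D data points
x_mid = forward_sample(x0, t=200)           # partially corrupted sample
x_T = forward_sample(x0, t=T - 1)           # essentially pure noise
print(x_T.mean(), x_T.std())                # close to 0 and 1, i.e., the Gaussian prior
```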
Diffusion models find use in computer vision for several image processing tasks [6], such as inpainting [7], super-resolution [8], image-to-image translation [9], and image generation [10, 11, 12]. They are also successfully adopted in several applications, for instance: Stable Diffusion [13], an open source model for high resolution image synthesis [10]; DALL-E 2, a platform implemented by OpenAI [14] to generate photo-realistic images from text prompts [11]; Google Imagen [15], which combines _transformer_ language models with diffusion models, also in the context of text-to-image generation [12]. Moreover, it has recently been shown that diffusion models perform better than GANs on image synthesis [16].
Furthermore, diffusion models can also be applied to other contexts, for instance in text generation [17, 18] and time-series related tasks [19, 20, 21]. For instance, time series forecasting is the task of predicting future values from past history and diffusion models are employed to generate new samples from the forecasting distribution [22, 23]. Moreover, diffusion models can also be used in time series generation, which is a more complex task involving the complete generation of new time-series samples from a certain distribution [24, 25].
On the other hand, we have very recently been witnessing an increasing interest in quantum technologies. Near-term quantum processors are called Noisy Intermediate-Scale Quantum (NISQ) devices [26] and they represent the state-of-the-art in this context. NISQ computers are engineered with quantum physical systems using different strategies. For instance, a commonly used technology employs superconducting-circuit-based platforms [27, 28] realized with transmon qubits [29, 30]. This technology is exploited, for instance, by IBM [31], Rigetti Computing [32], and Google [33]. Moreover, D-Wave [34] exploits superconducting integrated circuits mainly as quantum annealers [35]. Xanadu [36] is instead a company employing photons as information units within the linear optical quantum computing paradigm [37] to realize their devices. Finally, quantum computation can be realized by directly manipulating the properties of single atoms. For instance, IonQ [38] realizes quantum devices with trapped ions [39, 40], while Pasqal [41] and QuEra [42] realize analog quantum computers with Rubidium Rydberg neutral atoms held in optical tweezers [43]. All the mentioned devices can in principle be integrated into computational pipelines that also involve classical computation. In this context they can be referred to as quantum processing units (QPUs), which can make some computational tasks much faster than their classical counterpart (CPU) by harnessing the quantum properties of particles at the atomic scale. The main reason for building a quantum processor is the possibility of exploiting inherent and peculiar resources of quantum mechanical systems, such as _superposition_ and _entanglement_, that, in some cases, allow one to perform computational tasks that are impossible or much more difficult via a classical supercomputer [44, 45].
One of the most promising applications of NISQ devices is represented by Quantum Machine Learning (QML), a recent interdisciplinary field merging ML and quantum computing, i.e., the data to be processed and/or the learning algorithms are quantum [44, 46, 47, 48]. Indeed, it involves the integration of ML techniques and quantum computers in order to process and subsequently analyze/learn the underlying data structure. QML can involve the adoption of classical ML methods with quantum data or environments, for instance to analyze noise in quantum devices [49, 50, 51, 52] or to control quantum systems [53]. Alternatively, QML can consider the implementation of novel ML techniques using quantum devices, for instance to implement visual tasks [54], or generative models like Quantum Generative Adversarial Networks (QGANs) [55, 56, 57, 58, 59], which are the quantum implementation of classical GANs, or, in the Natural Language Processing (NLP) context, to generate text [60]. In fact, quantum devices are capable of processing information in ways that are different from classical computation. Thus, the implementation of QML models can offer an advantage over the corresponding classical ML models. The latter is expected to arise in the form of a _quantum speedup_ or a much smaller number of training parameters, by harnessing the peculiar properties of quantum systems, for instance superposition, coherence and entanglement. However, NISQ devices are indeed still very noisy and thus they do not perform the
ideal (pure) dynamics. Therefore, the system evolution is affected and driven by _quantum noise_ due to the undesired interactions with external environment and has to be described by the more general open quantum system formalism [61].
In order to generalize DMs with quantum computing ideas, a crucial role is played by noise. In classical information theory, noise is usually modeled within the framework of probability theory, and in general via Markovian processes. Accordingly, the main features of classical noise are a linear relationship among successive steps of the dynamics and an evolution that depends only on the current state. Formally, noise is represented by a transition matrix that has the properties of _positivity_ (non-negative entries) and _completeness_ (columns summing to one). In particular, Gaussian noise is a type of random noise that is very often added to the input data of a DM in order to help it learn to generate new data as similar as possible to the training data, even when the input is not perfect.
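As a toy illustration of these two properties (our own sketch, not taken from the paper), one can verify positivity and completeness for a column-stochastic transition matrix and observe how its repeated application washes out the initial information:

```python
import numpy as np

P = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.20],
              [0.05, 0.10, 0.70]])        # columns are non-negative and sum to one

assert np.all(P >= 0)                     # positivity
assert np.allclose(P.sum(axis=0), 1.0)    # completeness

p = np.array([1.0, 0.0, 0.0])             # initial distribution, fully localized
for _ in range(50):
    p = P @ p                             # one Markovian noise step
print(p)                                  # close to the stationary distribution
```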
In the quantum domain, noise can be generated also by quantum fluctuations that are typical of quantum systems, hence going much beyond the classical noise sources. Mathematically, quantum noise is described by the more general formalism of _quantum operations_ or _quantum maps_ [61], where, for instance, decoherence is the typical noise affecting the phase coherence among the quantum states and is, in fact, the main enemy to fight in order to build more powerful quantum processors. But what if such noise is not only detrimental to the quantum computation but is instead actually very beneficial for some ML tasks, as we have observed in the past in other contexts [53, 62, 63]? Quantum noise might, for instance, allow one to generate much more complex (due to the presence of entanglement) probability distributions that would be difficult (or even impossible) to express classically and from which one can sample more efficiently via a quantum processor than via a classical supercomputer.
In this article we therefore introduce and formalize the quantum versions of DMs, particularly based on Denoising Diffusion Probabilistic Models (DDPMs) and Score Stochastic Differential Equations (Score SDEs) in the context of QML. More precisely, we propose three potential quantum-noise-driven generative diffusion models (QNDGDMs) that can be both computationally simulated in the NISQ devices and implemented _experimentally_ due to the naturally occurring noise effects in open quantum systems. The three algorithms are: i) Classical-Quantum Generative Diffusion Model (CQGDM) in which the forward diffusion process can be implemented in the classical way, while the backward denoising with a Quantum Neural Network (QNN) (that can be either a Parametrized Quantum Circuit (PQC) or an hybrid quantum-classical NN); (ii) Quantum-Classical Generative Diffusion Model (QCGDM) in which the noise diffusion process can be implemented in a quantum way, while in the denoising process classical NNs are used; (iii) Quantum-Quantum Generative Diffusion Model (QQGDM) where both the diffusion and the denoising dynamics can be implemented in a quantum domain.
## 1 Classical-Quantum Generative Diffusion Model (CQGDM)
In this section we propose a model where the diffusion process is classical while the denoising phase is implemented with a quantum dynamics. Moreover, as a result of this setting, the training dataset is necessarily classical, for instance, images, videos, time series, etc.
Formally, given an initial training data \(\mathbf{x}_{0}\) sampled from a generic and unknown probability distribution \(p(\mathbf{x}_{0})\), the procedure consists in a progressive destruction of the information encoded in the initial data via a diffusive stochastic process. At the end, the data is degraded to a fully noisy state \(\mathbf{x}_{T}\) sampled from a classical closed form and tractable _prior_ distribution \(p(\mathbf{x}_{T})\) that represents the latent space of the model. Here, tractable stands for the fact that the distribution can be computationally calculated. The implementation of this process can be obtained with different ways. For instance, in DDPM the dynamics of forward diffusive process is implemented by classical Markov chain [1, 3], while in Score SDE the stochastic evolution is determined by a differential equation [64]. In detail, the former approach considers a discrete-time stochastic process whose evolution, at every step, depends only on the previous state and the transition relies on hand-designed kernels \(p(\mathbf{x}_{t}|\mathbf{x}_{t-1})\), \(t=1,2,\ldots,T\)
(see Methods for more details). Alternatively, in Score SDE the evolution is a continuous-time process within a closed time interval \(t\in[0,T]\), determined by the stochastic differential equation \(\mathrm{d}\,\mathbf{x}=\mathbf{f}(\mathbf{x},t)\,\mathrm{d}\,t+g(t)\,\mathrm{d}\,\mathbf{w}\), where \(\mathbf{f}(\mathbf{x},t)\) is the drift coefficient, \(g(t)\) is the diffusion term, and \(\mathbf{w}\) is the Wiener process (also known as standard Brownian motion) that models the stochastic dynamics [65]. The solutions of this equation lead to the tractable prior distribution \(p(\mathbf{x}(T))\).
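The continuous-time formulation can likewise be discretized in a few lines. The sketch below (ours; the variance-preserving choice \(\mathbf{f}(\mathbf{x},t)=-\tfrac{1}{2}\beta(t)\mathbf{x}\) and \(g(t)=\sqrt{\beta(t)}\) with a linear \(\beta(t)\) is an assumption made only for illustration) integrates the forward SDE with the Euler-Maruyama scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_sde(x0, T=1.0, n_steps=1000, beta_min=0.1, beta_max=20.0):
    """Euler-Maruyama integration of dx = f(x, t) dt + g(t) dw with the assumed
    drift f(x, t) = -0.5 * beta(t) * x and diffusion g(t) = sqrt(beta(t))."""
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        beta_t = beta_min + (beta_max - beta_min) * t / T
        dw = np.sqrt(dt) * rng.standard_normal(x.shape)   # Wiener increment
        x = x - 0.5 * beta_t * x * dt + np.sqrt(beta_t) * dw
    return x

x0 = rng.uniform(-1.0, 1.0, size=(1000, 2))   # toy 2D dataset
xT = forward_sde(x0)
print(xT.mean(axis=0), xT.std(axis=0))        # approximately a standard Gaussian prior
```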
Afterwards, in order to generate new data samples, the objective is to learn how to reverse the diffusion process starting from the prior latent distribution. In the case of DDPM, calculating \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is not computationally tractable and it is classically approximated by a model parameterized with \(\theta\) (e.g., a NN): \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) (see Methods). In the case of Score SDE models, the quantity to be estimated is \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\), where \(p_{t}(\mathbf{x})\) is the probability density of \(\mathbf{x}(t)\) [64]. Here, for either the DDPM or the Score SDE diffusion process, we propose to implement the denoising process with a QNN, which can be fully quantum via a PQC or even a classical-quantum hybrid NN model. The results of a simulation of this type of algorithm on a dataset composed of 2-dimensional points distributed along a line segment in the interval \([-1,1]\) are shown in Fig. 2. To the best of our knowledge, this is the first implementation of a hybrid classical-quantum diffusion model, and it indeed represents a starting point for more in-depth future studies (for more details on the model and the implementation refer to Section 5.3).
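To give a flavour of how such a PQC-based denoiser can be set up, the following PennyLane sketch (a minimal example of our own; the number of qubits, the angle embedding, the entangling ansatz and the readout are all assumptions and do not reproduce the circuit actually used for Fig. 2) maps a noisy 2D point and the time step to an estimate of the less-noisy point:

```python
import numpy as np
import pennylane as qml

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def denoiser(x_noisy, t, weights):
    """One denoising step: encode (x_t, t) into rotation angles, apply a
    parameterized entangling ansatz, and read out two expectation values."""
    qml.AngleEmbedding(np.array([x_noisy[0], x_noisy[1], t]), wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))]

shape = qml.StronglyEntanglingLayers.shape(n_layers=4, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape)

x_t = np.array([0.3, -0.8])            # a noisy sample at some intermediate step
x_prev = denoiser(x_t, t=0.5, weights=weights)
print(x_prev)                          # (untrained) estimate of the less-noisy sample
```

In a full model the weights would be optimized, step by step, so that the circuit output matches the reverse transition, either as a purely quantum circuit or inside a hybrid pipeline where classical layers pre- and post-process the quantum output.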
In this context, the main advantage of using the quantum denoising process instead of the classical one can be the possibility of using the trained quantum model to efficiently generate high-dimensional data (e.g., images), taking advantage of the peculiar quantum mechanical properties, such as quantum superposition and entanglement, to speed up data processing [66, 67, 68]. Indeed, QPU devices could be very effective in overcoming the main computational burdens of classical diffusion models during the inference process. As shown in Fig. 3, the denoising process for CQGDM crosses the border between the classical and quantum distribution spaces; this could take advantage of the quantum speedup in order to accelerate the training of the model.
## 2 Quantum-Classical Generative Diffusion Model (QCGDM)
In real experiments quantum systems are never perfectly isolated, but they are easily subjected to noise, e.g., interactions with the environment and imperfect implementations. Accordingly, we propose to physically implement the diffusion process via a noisy quantum dynamics exploiting such quantum noise as a positive boost.
In this setting a quantum dataset is considered, i.e., a collection of _quantum_ data. Classical information can be embedded into the initial state of a quantum system, allowing one to treat classical data as quantum [48, 69, 70]. Even better, we could avoid the encoding of the classical data if we consider quantum data as any result arising from a quantum experiment [71] or produced directly by a quantum sensing technology [72]. Formally, quantum data is identified with a density operator \(\rho(t)\) living in \(\mathfrak{S}(\mathcal{H})\), the set of non-negative operators with unit trace acting on the elements of the Hilbert space \(\mathcal{H}\) where the quantum states live.
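As a simple illustration of such an embedding (our own sketch; amplitude encoding is only one of the possible strategies cited above), a classical vector can be mapped to a pure-state density operator:

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector x to the density operator rho = |psi><psi|,
    with |psi> the normalized amplitude-encoded state."""
    psi = np.asarray(x, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

rho = amplitude_encode([0.2, -0.5, 0.1, 0.7])      # 4 amplitudes -> a 2-qubit state
print(np.isclose(np.trace(rho).real, 1.0))          # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))    # non-negative operator
```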
We here propose two approaches to implement the diffusion process: (i) _quantum_ Markov chains generalizing their classical counterparts [73], and
Figure 2: Evolution of the data distribution for a simulated CQGDM. The initial data distribution consists of two-dimensional points distributed in a line segment between \(-1\) and \(1\). The diffusion process is implemented via a classical DM that transforms the initial data distribution \(p(\mathbf{x}_{0})\) at time \(t=0\) to the prior \(p(\mathbf{x}_{T})\) that is a normalized Gaussian distribution at the final time \(t=40\). Meanwhile, the denoising is implemented via a (noiseless) simulated PQC to reconstruct the initial data distribution (\(t=0\)) from the Gaussian prior (\(t=40\)). In the top row, we show the forward process (from left to right) for a sample of \(1\,000\) points at different discrete time steps \(t=0,8,\ldots,40\). In the bottom row, we display the denoising (from right to left) of a different sample of \(1\,000\) points.
(ii) Stochastic Schrodinger Equation (SSE) [74, 75, 76, 77] modelling the dynamics of an _open quantum system_ subjected to an external noise source.
In the former approach (i), a quantum Markov chain can be described with a composition of transition operation matrices (TOMs) mapping a density operator \(\rho\) to another density operator \(\rho^{\prime}\). TOMs are matrices whose elements are _completely positive maps_ and whose column sums form a _quantum operation_ (for more details refer to Section 5). A special case of TOMs are the transition effect matrices (TEMs), whose columns are discrete positive operator valued measures (POVMs). A quantum Markov chain can thereby be implemented by a sequence of quantum measurements [73].
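To make the role of such quantum operations concrete, the sketch below (ours; a single-qubit depolarizing channel is used only as the simplest example of a completely positive, trace-preserving map) repeatedly applies the same quantum operation in Kraus form to a density matrix, mimicking a discrete quantum diffusion step:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_step(rho, p=0.05):
    """One completely positive, trace-preserving map (Kraus form) acting on rho."""
    kraus = [np.sqrt(1 - 3 * p / 4) * I,
             np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure initial state |0><0|
for _ in range(100):
    rho = depolarizing_step(rho)
print(np.round(rho.real, 3))   # approaches the maximally mixed state I/2
```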
The second approach (ii) employs SSEs to describe the physical quantum diffusion process. Given a system in the state \(\rho(t)\), its stochastic evolution is determined by a SSE that takes the form \(\dot{\rho}(t)=-i[H(t),\rho(t)]\), with \(\hbar=1\), and where the Hamiltonian \(H(t)=H_{s}(t)+H_{p}(t)\) consists of the sum of the Hamiltonian of the system \(H_{s}(t)\) and the stochastic term \(H_{p}(t)\) representing the stochastic dynamics to which the quantum system is subjected. Arbitrary sources of noise applied to optimally controlled quantum systems were very recently investigated with the SSEs formalism by our group [78].
The implementation of diffusion dynamics on quantum systems during the forward stage allows the data information to be processed not only by classically simulated noise but also with quantum physical noise. Here, as previously mentioned, let us recall that quantum noise is more general than its classical counterpart. In particular, the noise distributions used in QCGDMs can be expressed (and arise more naturally from quantum dynamics) in more general and powerful forms with respect to the typical Gaussian distributions that are commonly employed in classical DMs. In this setup, at the end of the diffusion process, it is possible to obtain non-classical prior distributions related to entangled states that do not exist in the classical information scenario. In other terms, there are probability density distributions that are purely quantum. This can be used to implement diffusion processes that cannot be implemented classically. At the end, during the denoising phase, classical NNs can be used in order to remove noise and thus finally generate new samples. Moreover, if the obtained prior distribution is not classical, it is possible to consider the adoption of the denoising NN as a discriminator to identify probability distributions that are purely quantum. This could also be framed in a _security_ context. One can imagine a channel where the communication of data takes place with the application of a quantum diffusion process that maps to a purely quantum probability distribution. In that case, the receiver can restore and thus obtain the initial information only by training a QNN, and therefore only with a quantum device. This might also be exploited for quantum attacks/defence in cyber-security applications.
## 3 Quantum-Quantum Generative Diffusion Model (QQGDM)
In this last section we describe diffusion models within a fully quantum physical framework. Precisely, the training data, the diffusion process and the denoising process all have a quantum mechanical nature. This scenario can be obtained by exploiting the quantum tools described above, namely, a quantum Markov chain or an SSE for the forward diffusion phase, and a PQC for the backward denoising phase.
Accordingly, all the advantages described in Sections 1 and 2 hold. The adoption of a fully quantum pipeline for both the diffusion and denoising phases allows one to obtain purely quantum prior distributions that can be processed during the denoising phase with PQCs, yielding a generation process that is not feasible classically. Moreover, this approach could lead to an _exponential_ advantage in sample and time complexity, as shown in [72, 79]. As shown in Fig. 3, the diffusion and denoising processes for QQGDM are entirely located in the space of quantum distributions. This might lead to the speedup already described for CQGDM and, in addition, to the possibility of exponentially reducing the computational resources for storing and processing the data information [68]. Finally, it is also possible to access complex quantum probability distributions that are impossible, or much more difficult, to treat classically.
## 4 Discussion
Entanglement is a crucial quantum mechanical phenomenon, with no classical analogue, occurring when two or more quantum systems interact. It is detected by measurement correlations between the quantum systems that cannot be described with classical physics. Accordingly, quantum systems are capable of representing distributions that are impossible to produce efficiently with classical computers [46, 80]. For this reason, a quantum diffusion process is capable of exploring probability density functions that are not classically tractable.
In Fig. 3 we highlight the relationship between the space of the probability distributions that are tractable with classical computers, which we denote as _classical distributions_ for conciseness, and the space of the probability distributions that are tractable with quantum devices, which we denote as _quantum distributions_ hereinafter. Moreover, we can observe several possible trajectories that map probability distributions to other probability distributions during the diffusion and denoising of the classical DM and of the three proposed quantum approaches: CQGDM, QCGDM and QQGDM.
The classical DM realizes maps from classical distributions to other classical distributions, and the NN that implements the denoising is trained to realize the inverse maps, i.e., to match the distributions crossed during the diffusion.
In the CQGDM approach, the diffusion process is implemented classically. Thus, all the probability distributions are necessarily classical. However, during the denoising process, the quantum dynamics is free to also explore the quantum probability space within each one of the steps, hence exploiting potential (noise-assisted and/or quantum-enhanced) shortcuts. This may give a huge advantage for the training of the denoising model. Let us stress that when we evolve quantum systems within a QPU it is possible to process and manipulate exponentially more information as compared to the classical case.
When we consider the fully quantum framework QQGDM, we gain the advantage of exploring quantum distributions also during the diffusion phase. For this reason we can explore more complex noisy dynamics compared to the ones that can be simulated on classical computers. Moreover, the two processes can be experimentally implemented on real quantum processors. Furthermore, compared to the CQGDM approach, and provided that the initial distribution of the dataset is quantum, it is possible to design a QQGDM generative model that is capable of generating complex quantum data that are not analytically computable.
Besides, we would like to point out that the QCGDM approach can be challenging to implement. In detail, if the diffusion process leads to an entangled quantum distribution, it is impossible, for the previously mentioned reasons, to efficiently train a classical NN to perform the denoising. This context could be adopted as a proof of concept for the realization of a discriminator separating quantum distributions from classical ones. In other
Figure 3: Relationship between the space of the probability distributions that are tractable with classical computation (**C**) and instead only with quantum computation (**Q**). We show the trajectories arising from the mappings between probability distributions (colored and white shapes) during the diffusion (blue wavy arrows) and denoising (red arrows) processes for the four different combinations: **CC**, **CQ**, **QC** and **QQ**, indicating whether the diffusion (first letter) and the denoising (second letter) are classical or quantum. The initial data distribution (squares) is progressively transformed during diffusion (changing color and shape) into an uninformative distribution represented by the white circles, and vice versa during denoising. Completely classical models are limited to operating within the space of classically-tractable probability distributions, while completely quantum models can manipulate quantum-tractable probabilities. Models that have classical diffusion and quantum denoising are forced to work only with classical probabilities, but during the denoising phase they can exploit quantum properties within each step. Finally, models that have quantum diffusion and classical denoising can manipulate quantum probabilities during the forward process, but in that case it is not possible to train the classical backward process to map those probability distributions.
words, if it is possible to train a model to perform the denoising, then the distribution is classical.
As a future outlook, we would like to realize the implementation of the quantum generative diffusion models either computationally via NISQ devices and/or physically by using quantum sensing technologies. In particular, regarding QCGDMs and QQGDMs, we propose to implement the diffusion process by exploiting naturally noisy quantum dynamics in order to take advantage of the possible benefits of the quantum noise. Instead, regarding CQGDMs and QQGDMs, we propose to use quantum-implemented QML models, for instance QNNs and PQCs, to learn the denoising process.
Finally, with respect to classical DMs, the design and realization of quantum generative diffusion models could alleviate and reduce the computational resources (e.g. memory space, time and energy) needed to successfully address ML applications such as the generation of high-resolution images, the analysis and prediction of rare events in time-series, and the learning of underlying patterns in experimental data coming from very different fields, among others life and earth science, physics, quantum chemistry, medicine, material science, smart technology engineering, and finance.
## 5 Methods
In this section we include some mathematical details on the classical and quantum tools for the diffusion and denoising processes discussed in the main text.
### Classical methods
Here we formalize the classical methods used in the standard generative diffusion models and also in the relevant parts of the proposed CQGDM and QCGDM. In particular, we consider classical Markov chains for a Gaussian perturbation and the NNs used for the classical denoising.
The classical diffusion process [1, 3] starts from an initial data sample \(\mathbf{x}_{0}\) drawn from an unknown generic distribution \(p(\mathbf{x}_{0})\). Gaussian noise is then iteratively injected for a number \(T\) of time steps to degrade the data to \(\mathbf{x}_{T}\) sampled from a prior Gaussian distribution \(\mathcal{N}(0,\mathbf{I})\). In detail, the Gaussian transition kernel used has the form:
\[p(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{ t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}), \tag{1}\]
where \(\beta_{t}\in(0,1)\) is a hyperparameter (fixed or scheduled over time) for the model at the time step \(t\) that describes the level of the injected noise, \(\mathbf{I}\) is the identity matrix, and \(\mathbf{x}_{t}\) and \(\mathbf{x}_{t-1}\) are the random variables at the time steps \(t\) and \(t-1\), respectively. In this way it is possible to calculate a tractable closed form for the trajectory:
\[p(\mathbf{x}_{0:T})=p(\mathbf{x}_{0})\prod_{t=1}^{T}p(\mathbf{x}_{t}|\mathbf{x}_{t-1}). \tag{2}\]
By doing so, for \(T\) sufficiently large, the distribution of \(\mathbf{x}_{T}\) converges to an isotropic Gaussian \(p(\mathbf{x}_{T})=\mathcal{N}(0,\mathbf{I})\). Moreover, given an initial data point \(\mathbf{x}_{0}\) we can obtain a data sample \(\mathbf{x}_{t}\) by sampling a Gaussian vector \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\):
\[\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon, \tag{3}\]
where \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}\).
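The forward process just described can be simulated in a few lines; the sketch below samples \(\mathbf{x}_{t}\) directly via Eq. (3) for a toy dataset of points on a line segment, with an assumed noise schedule \(\beta_{t}\) (not the one used in the paper).

```python
import numpy as np

def forward_samples(x0, betas, rng=np.random.default_rng(0)):
    """Sample x_t for every t via Eq. (3): x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps."""
    alphas_bar = np.cumprod(1.0 - betas)
    samples = []
    for t in range(len(betas)):
        eps = rng.standard_normal(x0.shape)
        samples.append(np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps)
    return samples

# toy dataset: 1000 two-dimensional points on a line segment between -1 and 1, T = 40 steps
x0 = np.stack([np.linspace(-1.0, 1.0, 1000), np.zeros(1000)], axis=1)
betas = np.linspace(1e-3, 0.2, 40)        # assumed schedule, for illustration only
trajectory = forward_samples(x0, betas)
print(trajectory[-1].mean(axis=0), trajectory[-1].std(axis=0))   # roughly N(0, I) at t = T
```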
The denoising phase starts from the Gaussian prior distribution, and the transition kernel that is implemented has the form:
\[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\mathbf{\Sigma}_{\theta}(\mathbf{x}_{t},t)), \tag{4}\]
and the closed form for the trajectory is:
\[p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}). \tag{5}\]
Usually, an NN, specifically a U-Net architecture [2], is used to estimate the mean \(\mu_{\theta}(\mathbf{x}_{t},t)\) and the covariance \(\mathbf{\Sigma}_{\theta}(\mathbf{x}_{t},t)\) in Eq. (4). In principle, the approach to train the NN would be to find the parameters \(\theta\) such that \(p_{\theta}(\mathbf{x}_{0})\) is maximized for each training sample \(\mathbf{x}_{0}\). However, \(p_{\theta}(\mathbf{x}_{0})\) is intractable because it is impossible to marginalize over all the possible trajectories. For this reason, the common approach [3] is to fix the covariance and minimize the loss:
\[\mathcal{L}=\mathbb{E}_{t\sim[1,T]}\mathbb{E}_{\mathbf{x}_{0}\sim p(\mathbf{x }_{0})}\mathbb{E}_{\mathbf{z}_{t}\sim\mathcal{N}(0,\mathbf{I})}||\mathbf{z}_{ t}-\mathbf{z}_{\theta}(\mathbf{x}_{t},t)||^{2},\]
where \(\mathbf{z}_{t}\) is the real Gaussian noise, and \(\mathbf{z}_{\theta}(\mathbf{x}_{t},t)\) is the noise estimation of the model at time \(t\).
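To make the objective concrete, the PyTorch sketch below computes one stochastic estimate of this loss for a classical noise-prediction network; the small MLP `NoiseModel` is a hypothetical stand-in for the U-Net (or, in the quantum-denoising variants, the PQC) and is not the architecture used in this work.

```python
import torch
import torch.nn as nn

class NoiseModel(nn.Module):
    """Hypothetical stand-in for z_theta(x_t, t): a small MLP with a scalar time feature."""
    def __init__(self, dim=2, T=40):
        super().__init__()
        self.T = T
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x_t, t):
        t_feat = (t.float() / self.T).unsqueeze(-1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def ddpm_loss(model, x0, alphas_bar):
    """One Monte Carlo estimate of E || z_t - z_theta(x_t, t) ||^2."""
    T = alphas_bar.shape[0]
    t = torch.randint(0, T, (x0.shape[0],))
    z = torch.randn_like(x0)                                   # the real Gaussian noise z_t
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1 - a_bar) * z   # Eq. (3)
    return ((z - model(x_t, t)) ** 2).sum(dim=-1).mean()

betas = torch.linspace(1e-3, 0.2, 40)
alphas_bar = torch.cumprod(1 - betas, dim=0)
model = NoiseModel()
x0 = torch.rand(128, 2) * 2 - 1           # a batch of points in [-1, 1]^2
print(ddpm_loss(model, x0, alphas_bar))
```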
### Quantum methods
Here we formalize the use of the quantum Markov chain introduced for the diffusion processes of QCGDM and QQGDM in Sections 2 and 3 and the QNNs used for the denoising of CQGDM and QQGDM in Sections 1 and 3.
Formally, a quantum Markov chain can be described by two elements: i) a directed graph \(G\) whose sites represent the possible states that the quantum system can occupy, ii) a TOM \(\mathcal{E}=\mathcal{E}_{ij}\) whose elements are _completely positive maps_[61, 81] and whose column sums form a _quantum operation_[44, 73]. Formally, a positive map is a linear transformation of one positive bounded operator into another. A completely positive map is a linear map \(\phi:\mathcal{B}(\mathcal{H})\rightarrow\mathcal{B}(\mathcal{H})\), where \(\mathcal{B}(\mathcal{H})\) is the set of bounded linear operators acting on the Hilbert space \(\mathcal{H}\), such that the map \(\phi\otimes I\) is positive on the space \(\mathcal{B}(\mathcal{H})\otimes\mathcal{B}(\mathcal{H}^{\prime})\) for any Hilbert space \(\mathcal{H}^{\prime}\). A quantum operation is a completely positive map \(\phi\) preserving the trace, i.e., \(\mathrm{tr}(\rho)=\mathrm{tr}(\phi(\rho))\), with \(\rho\in\mathcal{B}(\mathcal{H})\). Physically, the elements \(\mathcal{E}_{ij}\) describe the passage of the quantum system from site \(j\) to site \(i\) in one time step. Given a density operator \(\rho\), representing the state of the system, the quantity \(\mathcal{E}(\rho)\) is again a density operator. Moreover, if \(\mathcal{E}\) and \(\mathcal{F}\) are two TOMs with the same size and acting on the same Hilbert space, then \(\mathcal{E}\mathcal{F}\) is again a TOM by matrix multiplication. Accordingly, the dynamics of the quantum system after a discrete number of time steps \(n\) is described by the map \(\mathcal{E}^{n}=\mathcal{E}\mathcal{E}^{n-1}\), with \(n=2,3,\ldots\), and the initial state \(\rho\) is transformed into the final state \(\mathcal{E}^{n}(\rho)\).
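A minimal numerical example of one such step is given below, assuming a single-qubit depolarizing channel (written in Kraus form) as the completely positive, trace-preserving map; the specific channel is an illustrative choice, not one prescribed by the text.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing(rho, lam=0.1):
    """One completely positive, trace-preserving map (single-qubit depolarizing channel),
    playing the role of one step of the quantum Markov chain."""
    kraus = [np.sqrt(1 - 3 * lam / 4) * I2,
             np.sqrt(lam / 4) * X, np.sqrt(lam / 4) * Y, np.sqrt(lam / 4) * Z]
    return sum(K @ rho @ K.conj().T for K in kraus)

def iterate(rho, n_steps=40, lam=0.1):
    """Apply the map n_steps times: the final state is E^n(rho)."""
    for _ in range(n_steps):
        rho = depolarizing(rho, lam)
    return rho

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)     # start in |0><0|
print(iterate(rho0))                                 # approaches the maximally mixed state I/2
```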
Let us now introduce the concept of a QNN [48] in the QML framework and how it is trained. Formally, a QNN can be written as a product of layers of unitary operations:
\[\hat{U}(\theta)=\prod_{\ell=1}^{L}\hat{V}_{\ell}\hat{U}_{\ell}(\theta_{\ell}), \tag{6}\]
where \(\hat{V}_{\ell}\) and \(\hat{U}_{\ell}(\theta_{\ell})\) are fixed and parameterized unitary operations, respectively, for the \(\ell\)-th layer of the QNN. The output of the QNN is:
\[f(\theta)=\mathrm{tr}(\mathcal{M}\rho_{\theta}) \tag{7}\]
where \(\mathcal{M}\) is a Hermitian operator representing the physical observable, \(\rho_{\theta}=\hat{U}(\theta)^{\dagger}\rho_{0}\hat{U}(\theta)\), and \(\rho_{0}\) is the initial state, which is the input of the QNN. The QNN is optimized by minimizing the difference between its output and the desired value. Generally, this is performed with the gradient descent method together with the parameter-shift rule [82].
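The following NumPy sketch evaluates Eq. (7) and its gradient via the parameter-shift rule for a toy single-qubit QNN with \(\hat{U}(\theta)=R_{z}(\theta_{2})R_{y}(\theta_{1})\); the gate choice and observable are illustrative assumptions rather than any circuit used in this work.

```python
import numpy as np

def ry(t):  # single-qubit rotation about Y
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):  # single-qubit rotation about Z
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def f(theta, rho0, M):
    """Eq. (7) for a toy QNN with U(theta) = RZ(theta[1]) RY(theta[0])."""
    U = rz(theta[1]) @ ry(theta[0])
    rho_theta = U.conj().T @ rho0 @ U          # rho_theta = U^dagger rho_0 U, as in the text
    return np.trace(M @ rho_theta).real

def parameter_shift_grad(theta, rho0, M):
    """Gradient of f via the parameter-shift rule (exact for these rotation gates)."""
    grad = np.zeros(len(theta))
    for k in range(len(theta)):
        shift = np.zeros(len(theta))
        shift[k] = np.pi / 2
        grad[k] = 0.5 * (f(theta + shift, rho0, M) - f(theta - shift, rho0, M))
    return grad

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # input state |0><0|
M = np.diag([1.0, -1.0]).astype(complex)           # observable sigma_z
theta = np.array([0.3, 1.2])
print(f(theta, rho0, M), parameter_shift_grad(theta, rho0, M))
```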
### Simulation
Here we describe in detail both the model and its implementation regarding the simulation of the CQGDM used to obtain the results of Fig. 2 in Section 1. For the simulation, we used a dataset composed of \(1\,000\) points distributed along a line segment in the interval \([-1,1]\).
The diffusion process has been implemented via a classical Markov chain composed of a sequence of Gaussian transition kernels as in Eq. (1), in order to map the initial data distribution \(p(\mathbf{x}_{0})\) to an isotropic Gaussian \(p(\mathbf{x}_{T})\) with final time \(T=40\). Furthermore, the data sample at each time step \(t\) is computed by using Eq. (3).
The denoising process has been implemented via a PQC trained to estimate the mean \(\mu_{\theta}(\mathbf{x}_{t},t)\) and the covariance \(\mathbf{\Sigma}_{\theta}(\mathbf{x}_{t},t)\) in Eq. (4). The model is built and simulated with the help of the _Pennylane_[83] and _PyTorch_[84] libraries. More precisely, the PQC consists of a four-qubit circuit divided into two concatenated parts called _head_ and _tail_. The parameters of the head are shared among all the values of \(t\), while the parameters of the tail are specific to each value of \(t=0,\ldots,39\). In particular, the head takes as input the values of the coordinates of a single point and encodes them in the state of the first two qubits with an _angle embedding_[48], while the other two qubits are initialized to \(|0\rangle\). After the embedding, the circuit is composed of \(256\) layers of parametric rotations on the three axes for all four qubits, alternated with layers of entangling controlled-NOT gates [85]. At the end of the circuit, measurements are performed and the expectation values of the observable \(\sigma_{z}\) on all four qubits are computed. The tail is composed similarly, except that the first operation is the angle embedding of the four expectation values previously obtained from the head. In order to simplify the model we assumed that the denoising process is uncorrelated among the features; therefore, the covariance matrix is diagonal and only two values for the variance are necessary. Finally, the four expectation values measured from the tail are used for
the predictions of the mean (the first two values) and variance (the second two values). In detail, we multiply the expectation values used for the mean by a factor of 3 in order to enlarge the possible range, and the values for the variance are increased by 1 to force positivity. The model was trained for 40 000 epochs on random batches of 1 000 points to minimize the Kullback-Leibler divergence between the predicted and desired Gaussian distributions using _Adam_[86] with learning rate \(10^{-4}\). The plots of Fig. 2 are obtained, after the training of the model, using two different random batches of 1 000 points, one for the forward process and another one for the backward process.
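For orientation, a much-reduced PennyLane sketch of a head-like circuit is given below (angle embedding on the first two qubits, layers of three-axis rotations alternated with a chain of CNOTs, and \(\sigma_{z}\) expectation values on all four qubits); the layer count, entangling pattern and parameter initialization are illustrative simplifications rather than the exact circuit used for Fig. 2.

```python
import numpy as np
import pennylane as qml

n_qubits = 4
n_layers = 8          # reduced from the 256 layers used in the paper, for brevity
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def head(point, params):
    # encode the 2D coordinates on the first two qubits; the other two stay in |0>
    qml.AngleEmbedding(point, wires=[0, 1])
    for layer in params:                       # params has shape (n_layers, n_qubits, 3)
        for q in range(n_qubits):              # parametric rotations on the three axes
            qml.RX(layer[q, 0], wires=q)
            qml.RY(layer[q, 1], wires=q)
            qml.RZ(layer[q, 2], wires=q)
        for q in range(n_qubits - 1):          # entangling layer of CNOTs
            qml.CNOT(wires=[q, q + 1])
    return [qml.expval(qml.PauliZ(q)) for q in range(n_qubits)]

params = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits, 3))
print(head(np.array([0.3, -0.7]), params))
```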
## Acknowledgements
M.P. and S.M. acknowledge financial support from PNRR MUR project PE0000023-NQSTI. F.C. also acknowledges the European Union's Horizon 2020 research and innovation programme under FET-OPEN GA n. 828946-PATHOS.
|
2303.08339 | Large induced subgraphs of random graphs with given degree sequences | We study a random graph $G$ with given degree sequence $\boldsymbol{d}$, with
the aim of characterising the degree sequence of the subgraph induced on a
given set $S$ of vertices. For suitable $\boldsymbol{d}$ and $S$, we show that
the degree sequence of the subgraph induced on $S$ is essentially concentrated
around a sequence that we can deterministically describe in terms of
$\boldsymbol{d}$ and $S$. We then give an application of this result,
determining a threshold for when this induced subgraph contains a giant
component. We also apply a similar analysis to the case where $S$ is chosen by
randomly sampling vertices with some probability $p$, i.e. site percolation,
and determine a threshold for the existence of a giant component in this model.
We consider the case where the density of the subgraph is either constant or
slowly going to $0$ as $n$ goes to infinity, and the degree sequence
$\boldsymbol{d}$ of the whole graph satisfies a certain maximum degree
condition. Analogously, in the percolation model we consider the cases where
either $p$ is a constant or where $p \to 0$ slowly. This is similar to work of
Fountoulakis in 2007 and Janson in 2009, but we work directly in the random
graph model to avoid the limitations of the configuration model that they used. | Angus Southwell, Nicholas Wormald | 2023-03-15T03:09:51Z | http://arxiv.org/abs/2303.08339v1 | # Large induced subgraphs of random graphs with given degree sequences
###### Abstract
We study a random graph \(G\) with given degree sequence \(\boldsymbol{d}\), with the aim of characterising the degree sequence of the subgraph induced on a given set \(S\) of vertices. For suitable \(\boldsymbol{d}\) and \(S\), we show that the degree sequence of the subgraph induced on \(S\) is essentially concentrated around a sequence that we can deterministically describe in terms of \(\boldsymbol{d}\) and \(S\). We then give an application of this result, determining a threshold for when this induced subgraph contains a giant component. We also apply a similar analysis to the case where \(S\) is chosen by randomly sampling vertices with some probability \(p\), i.e. site percolation, and determine a threshold for the existence of a giant component in this model. We consider the case where the density of the subgraph is either constant or slowly going to \(0\) as \(n\) goes to infinity, and the degree sequence \(\boldsymbol{d}\) of the whole graph satisfies a certain maximum degree condition. Analogously, in the percolation model we consider the cases where either \(p\) is a constant or where \(p\to 0\) slowly. This is similar to work of Fountoulakis in 2007 and Janson in 2009, but we work directly in the random graph model to avoid the limitations of the configuration model that they used.
## 1 Introduction
Random graphs with given degree sequences are a well-studied random graph model. To define the model, let \(\boldsymbol{d}=(d(1),\ldots,d(n))\) be the degree sequence of a graph. Then \(\mathcal{G}(\boldsymbol{d})\) is a uniform random graph with degree sequence \(\boldsymbol{d}\). This graph model has been the focus of much study recently, both due to improvements in the tools to study the model and also as it has found applications as a null model for studying networks (see [7]). Compared to binomial random graphs, this model is much better suited to studying properties of graphs where the degrees of the vertices are not concentrated around a particular value. However, this comes at the cost of ease of analysis: events that are trivial to study in the binomial random graph model (such as the adjacency of two vertices) are quite non-trivial in \(\mathcal{G}(\boldsymbol{d})\) and not fully understood in general.
In this paper we study induced subgraphs of random graphs with given degree sequences, i.e. the degree sequence of the subgraph \(G[S]\) of \(G\in\mathcal{G}(\boldsymbol{d})\) induced by \(S\subseteq V(G)\). Our main results are that the degree sequence of the induced subgraph is close to a model degree sequence \(\boldsymbol{d}_{H}\) defined in Definition 2.2. In particular, the distribution of the degree of a vertex in \(G[S]\) is approximately binomial, in terms of its degree in \(G\) and the density of \(S\) in \(G\) (see Section 2 for a precise statement). We use this approximation to show that with probability tending to \(1\) as
\(n\to\infty\) (a.a.s.), the large entries in the degree sequence of \(G[S]\) are asymptotically equal to the corresponding entries in \(\boldsymbol{d}_{H}\), and the frequencies of small entries in each sequence are also close. We formally state this in Theorem 2.3.
The result mentioned above applies to a given subset \(S=S(n)\). We also make use of it to prove a similar result about the model where \(G\) is again a uniformly random graph with degree sequence \(\boldsymbol{d}\), but where \(S\) is chosen randomly by taking each vertex independently with probability \(p=p(n)\). This model is commonly known as _(site-)percolated random graph \(\mathcal{G}(\boldsymbol{d})\) with survival probability \(p\)_. This is in contrast to _bond percolation_, where edges are deleted instead of vertices. Percolation problems have been studied on a wide range of graphs, both deterministic and random, since the 1950s. See, for example, the work of Broadbent and Hammersley [2], Fountoulakis [3], Janson [5], or McDiarmid, Scott, and Withers [9]. In this paper, we use the phrase "percolated random graph" and the notation \(G_{\boldsymbol{d}}(p)\) to refer to a uniformly random graph with degree sequence \(\boldsymbol{d}\) after site percolation with survival probability \(p\). In this model, \(S\) is a random variable, where each subset \(S\subset[n]\) occurs with probability \(p^{|S|}(1-p)^{n-|S|}\). We define a model sequence \(\boldsymbol{d}_{A}\) (formally given in Definition 2.5), which is a function of \(\boldsymbol{d}\) and \(p\), and show that, for suitable \(\boldsymbol{d}\) and \(p\), the degree sequence of the percolated random graph is close to the model sequence \(\boldsymbol{d}_{A}\).
The relationship between the large entries of \(\boldsymbol{d}_{A}\) and \(\boldsymbol{d}_{S}\) in the percolated random graph model is less precise than the corresponding relationship between \(\boldsymbol{d}_{H}\) and \(\boldsymbol{d}_{S}\) in the model where \(S\) is given. Instead of estimating each entry, we estimate the degree of a vertex conditional on it being a member of \(S\). As well as this, we give a result about the sum of the large entries in each sequence. This is because of the potential lack of concentration of large degrees in the induced subgraph. For instance, if only one vertex \(v\) exists of very large degree \(i\), and the rest have small degree \(j\), then the maximum degree in the induced subgraph will be close to \(ip\) with probability \(p\) (in the case that \(v\in S\)), and at most \(j\) otherwise.
A common problem studied in random graphs and percolation models is the existence of a giant component. Molloy and Reed [11] used the configuration model (often denoted \(\mathcal{C}(\boldsymbol{d})\)) to investigate the existence of a giant component in \(\mathcal{G}(\boldsymbol{d})\). The configuration model is a model proposed by Bollobás [1] to study \(\mathcal{G}(\boldsymbol{d})\) which constructs a random (pseudo)graph with the correct degree sequence from a random pairing of sets of points in bins corresponding to the vertices. It is much easier to analyse than \(\mathcal{G}(\boldsymbol{d})\), but the need to transfer results to \(\mathcal{G}(\boldsymbol{d})\) resulted in strict conditions on the degree sequence \(\boldsymbol{d}\) in [11]. Recently, Joos, Perarnau, Rautenbach, and Reed [6] generalised this to fully describe the threshold for the existence of a giant component in \(\mathcal{G}(\boldsymbol{d})\) in terms of \(\boldsymbol{d}\), for all sequences \(\boldsymbol{d}\).
The results of Molloy and Reed [11] were used by Fountoulakis [3] to study the threshold for the existence of a giant component in a percolated random graph. Again this was done by studying the configuration model. A key element of his proof is the following fact: the percolated random graph is distributed uniformly at random conditioned on its degree sequence. He then studied the distribution of the resulting degree sequence in both site and bond percolation models. This result has strict requirements on \(\boldsymbol{d}\), such as a maximum degree of at most \(n^{1/9}\), bounded average degree, and a sufficiently nice limiting distribution. Fountoulakis then applied the aforementioned results of [11] to prove a threshold for the existence of a giant component in (site or bond) percolated \(\mathcal{G}(\boldsymbol{d})\). Janson [5] used similar ideas and tools from the theory of branching processes to prove a similar result for a wider range of degree sequences. Recently, Fountoulakis, Joos, and Perarnau [4] used results in [6] to prove results about the threshold for the existence of a giant component in bond percolated \(\mathcal{G}(\boldsymbol{d})\). These results apply for a wider range of degree sequences than considered in [3] and [5], but also assume that \(\mathcal{G}(\boldsymbol{d})\) has bounded average degree and that the survival probability \(p\in(0,1]\) is a constant.
In this paper we apply the recent result of Joos et al. [6] and results about our model degree sequences (\(\boldsymbol{d}_{H}\) for when \(S\) is fixed and \(\boldsymbol{d}_{A}\) for when \(S\) is random) to determine a threshold for the existence of a giant component in \(G[S]\). This serves as an example of how our main result can be used to study induced subgraphs of \(\mathcal{G}(\boldsymbol{d})\):
by combining this result with known thresholds for properties of \(\mathcal{G}(\mathbf{d})\), one can determine thresholds for these properties in \(G[S]\). Notably, our results allow for cases where \(\mathcal{G}(\mathbf{d})\) has maximum degree slightly less than \(\sqrt{|E(G)|}\) (see (2.1) for the precise condition), as long as the density of the subgraph (relative to the whole graph \(G\)) is either bounded away from \(0\) and \(1\), or goes to \(0\) sufficiently slowly (roughly up to \(n^{-\varepsilon}\) for some small constant \(\varepsilon>0\)). We can achieve this extension of the results in [3] on site percolation by carrying out our analysis in \(\mathcal{G}(\mathbf{d})\) directly, as opposed to using the configuration model, and we utilise the switching method heavily. In contrast to the results in [4] on bond percolation, our results also apply in cases where \(\mathcal{G}(\mathbf{d})\) has average degree only slightly less than \(\sqrt{|E(G)|}\) and the survival probability is at least \(|E(G)|^{-\varepsilon^{\prime}}\), for a small constant \(\varepsilon^{\prime}\). In particular, for nearly regular degree sequences \(\mathbf{d}\) (e.g. typical degree sequences arising from \(\mathcal{G}(n,p)\) or \(\mathcal{G}(n,m)\)), our results apply when the maximum degree is \(O(n^{1-\varepsilon})\) for any \(\varepsilon>0\). In upcoming work, we also use our degree sequence characterisation to prove thresholds for the connectivity of \(G[S]\), as well as results on its chromatic number and its automorphism group.
## 2 Main results
Here we give much of the notation we use, as well as describing our main results. Let \(\mathbf{d}\) be a graphical sequence of length \(n\), that is, let \(\mathbf{d}=(d(1),\ldots,d(n))\) be a sequence of non-negative integers such that there exists a graph with vertex set \([n]=\{1,\ldots,n\}\) where each vertex \(i\in[n]\) has degree \(d(i)\). Without loss of generality, we assume that all entries of \(\mathbf{d}\) are at least \(1\) and are in non-decreasing order, so \(1\leq d(1)\leq d(2)\leq\cdots\leq d(n)\). We also define \(\Delta=\Delta(\mathbf{d})\) to be the value of the largest entry in \(\mathbf{d}\). For a set \(A\subseteq[n]\), let \(d(A)=\sum_{i\in A}d(i)\) be the _total degree_ of \(A\). We also use \(M(\mathbf{d})\) to denote \(d([n])\), and call it the _total degree_ of a sequence \(\mathbf{d}\). For brevity, we use \(M\) to denote \(M(\mathbf{d})\) where \(\mathbf{d}\) is the degree sequence of the underlying random graph \(G\in\mathcal{G}(\mathbf{d})\). We always use \(S=\{i_{1},\ldots,i_{s}\}\subset[n]\) to denote the vertices of the induced subgraph of \(\mathcal{G}(\mathbf{d})\), and we define \(\overline{S}=[n]\backslash S\) and \(\gamma=\gamma(S)=d(S)/M\).
### Subgraphs induced on a vertex set
Suppose \(S=S(n)\subseteq[n]\), and suppose \((\mathbf{d},S)\) satisfies
\[\Delta^{2}(\gamma^{-1}\log M)^{12}\leq\delta d(S) \tag{2.1}\]
for some \(\delta\to 0\) sufficiently slowly as \(n\to\infty\) (equivalently, \(M\to\infty\)). Throughout the proofs, we use \(\delta\) and various powers of it to bound the rate at which certain functions grow or shrink. We assume that \(\delta=\Omega((\log\log M)^{-1})\), or equivalently that \(\delta^{-1}=O(\log\log M)\). Define \(J=\delta^{-1/16}\gamma^{-1}\log M\). We suppose that \(\gamma<1-c\) for some constant \(c>0\), but we allow \(\gamma=\gamma(n)\to 0\). That is, \(d(\overline{S})\geq cM\) for some constant \(c>0\), but \(d(S)=o(M)\) is possible. The condition on \(\gamma\) given in (2.1) implies that
\[\gamma\geq\delta^{-1/13}\frac{(\Delta^{2}\log^{12}M)^{1/13}}{M^{1/13}}.\]
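Indeed, since \(d(S)=\gamma M\), condition (2.1) can be rearranged (a routine manipulation, recorded here for convenience) as

\[\Delta^{2}\gamma^{-12}\log^{12}M\leq\delta\gamma M\iff\gamma^{13}\geq\frac{\Delta^{2}\log^{12}M}{\delta M}\iff\gamma\geq\delta^{-1/13}\frac{(\Delta^{2}\log^{12}M)^{1/13}}{M^{1/13}}.\]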
This immediately implies that \(\gamma=\omega(M^{-1/13})\), a fact used throughout the proofs. The powers of \(\log M\) in our definitions and results are not necessarily optimised, either for studying the distribution of the induced degree sequence in general or for studying the threshold for the existence of giant components.
We also note that the conditions given in (2.1) imply that \(\mathcal{G}(\mathbf{d})\) is non-empty.
**Proposition 2.1**.: _If \(\mathbf{d}\) is a sequence of length \(n\) with even sum such that there exists a set \(S\subset[n]\) satisfying (2.1), then \(\mathbf{d}\) is graphical, that is, there exists a graph with degree sequence \(\mathbf{d}\)._
Proof.: The inequality (2.1) implies that \(\Delta^{2}=o(d(S))\), which implies that \(\Delta^{2}=o(M)\) (since \(\gamma\leq 1\)). Koren [8] (Section 1) states that if a sequence \(\boldsymbol{d}\) is not graphical, then there exist disjoint, non-empty sets \(A,B\subset[n]\) such that
\[\sum_{i\in A}d(i)-\sum_{j\in B}d(j)>a(n-1-b),\]
where \(a=|A|\) and \(b=|B|\). Suppose that such sets \(A\) and \(B\) existed. The left hand side of this inequality is at most \(a\Delta\). Thus, this inequality could only be true if \(b>n-1-\Delta\). This implies that \(a<\Delta+1\), and also that \(\sum_{j\in B}d(j)=\sum_{j\in[n]}d(j)-\sum_{j\notin B}d(j)\geq M(\boldsymbol{d} )-\Delta^{2}\). Since \(\sum_{i\in A}d(i)\leq\Delta^{2}=o(M)\), it follows that the left hand side tends to \(-\infty\) as \(n\to\infty\), which is a contradiction. Therefore the inequality cannot hold, and the sequence is graphical.
In view of this lemma, by supposing that \(\boldsymbol{d}\) is a sequence of length \(n\) with all entries at least \(1\) and even sum that satisfies (2.1), we may assume that \(\boldsymbol{d}\) is graphical, which is useful when talking about probabilities in associated random graph models. We next define a deterministic sequence \(\boldsymbol{d}_{H}\) that in some sense represents a typical degree sequence of \(G[S]\). Let \(\boldsymbol{d}_{S}\) be the degree sequence of the graph \(G[S]\). For an arbitrary sequence \(\boldsymbol{d}\), let \(n_{k}(\boldsymbol{d})\) be the number of entries of \(\boldsymbol{d}\) that are equal to \(k\).
**Definition 2.2**.: _Let \(\boldsymbol{d}\), a sequence of length \(n\), and a set \(S=\{i_{1},\ldots,i_{s}\}\subset[n]\) be given. To define \(\boldsymbol{d}_{H}=\boldsymbol{d}_{H}(S)\), let \(Z_{j}\sim\operatorname{Bin}\left(j,\frac{d(S)}{M}\right)\) and define_
\[N(k)=\left\lfloor\sum_{i\in S}\mathbb{P}\left(Z_{d(i)}\leq k\right)+\frac{1}{ 2}\right\rfloor\]
_for \(k\geq 0\), and \(N(-1)=0\). Then define \(\boldsymbol{d}_{H}\) to be the non-decreasing sequence in which \(n_{k}(\boldsymbol{d}_{H})\), i.e. the number of occurrences of \(k\) in \(\boldsymbol{d}_{H}\), is given by \(n_{k}(\boldsymbol{d}_{H})=N(k)-N(k-1)\)._
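To make Definition 2.2 concrete, here is a naive Python sketch (degrees supplied as a list indexed by vertex, \(S\) as a set of indices; the helper names are ours) that computes the counts \(n_{k}(\boldsymbol{d}_{H})\):

```python
import math

def binom_cdf(j, p, k):
    """P(Bin(j, p) <= k), computed directly from the definition."""
    return sum(math.comb(j, i) * p**i * (1 - p)**(j - i) for i in range(min(k, j) + 1))

def model_counts(d, S, k_max):
    """Return [n_0(d_H), ..., n_{k_max}(d_H)] for degree list d and vertex set S."""
    M = sum(d)                                  # total degree of the whole sequence
    gamma = sum(d[i] for i in S) / M            # gamma = d(S) / M
    def N(k):
        if k < 0:
            return 0
        return math.floor(sum(binom_cdf(d[i], gamma, k) for i in S) + 0.5)
    return [N(k) - N(k - 1) for k in range(k_max + 1)]

# toy example: a 10-vertex degree sequence and a 4-vertex set S
d = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
S = {2, 4, 6, 8}
print(model_counts(d, S, 5))
```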
We note that \(\boldsymbol{d}_{H}\) may not be a graphical sequence, but we do not need it to be. The main result on the degree sequence of \(G[S]\) is the following theorem, in which each of the sequences \(\boldsymbol{d}_{S}\) and \(\boldsymbol{d}_{H}\) is essentially segmented into two parts (with some overlap to ensure that all entries of both sequences are covered by the theorem). Part (a) of the theorem implies that, beyond a certain index, the corresponding entries in the two sequences are a.a.s. asymptotic to each other. It also gives an explicit formula for these entries which would be suggested by a naive intuition based on expectation, and is useful for practical purposes. Below this index, it is difficult to obtain asymptotic values for each entry of \(\boldsymbol{d}_{S}\), so we just give rough bounds in (b), and a distributional result in (c). The latter, applying for a slightly larger range than (b) in order to overlap with the range for which part (a) applies, states that the number of entries that are equal to any given \(k\leq\frac{1}{2}\gamma J\) is similar in each sequence.
**Theorem 2.3**.: _Let \(\boldsymbol{d}\) be a sequence of length \(n\) with all entries at least 1 and even sum, and let \(S\subset[n]\) be such that \((\boldsymbol{d},S)\) satisfies (2.1) for some \(\delta\to 0\) and \(\gamma<1-c\) for some constant \(c>0\). The following claims hold with probability \(1-O(1/\sqrt{\log M})\):_
* (a) \(d_{S}(k)=\gamma d(i_{k})(1\pm 8\delta^{1/64})=d_{H}(k)\left(1\pm 12\delta^{1/64}\right)\) _for all_ \(k\) _such that_ \(d(i_{k})\geq\delta^{1/32}J\)_;_
* (b) \(\max\{d_{S}(k),d_{H}(k)\}\leq 2\gamma\delta^{1/32}J\) _for all_ \(k\) _such that_ \(d(i_{k})<\delta^{1/32}J\)_;_
* (c) \(|n_{i}(\boldsymbol{d}_{S})-n_{i}(\boldsymbol{d}_{H})|\leq\frac{\gamma n_{i}(\boldsymbol{d}_{H})}{J^{2}}+\gamma J^{5}\) _for all_ \(i\leq\frac{1}{2}\gamma J\)_._
We apply this and the results of Joos et al. [6] to prove the following threshold for the existence of giant components in induced subgraphs of \(\mathcal{G}(\boldsymbol{d})\).
**Theorem 2.4**.: _Let \(\mathbf{d}\) be a sequence of length \(n\) with all entries at least \(1\) and even sum, and let \(S\) be a subset of \([n]\). Let \(\gamma=d(S)/M\), and suppose that \(\Delta^{2}\gamma^{-12}\log^{12}M=o(\gamma M)\). Then \(G[S]\) a.a.s. contains \((|S|-n_{0}(\mathbf{d}_{H}))(1+o(1))\) non-isolated vertices. Furthermore, \(G[S]\) a.a.s. contains a component on a positive fraction of the non-isolated vertices if and only if there exists some constant \(\varepsilon>0\) such that \(R(\mathbf{d}_{H})\geq\varepsilon\gamma^{2}M\)._
We prove this result by applying Theorem 2.3 and the result of Joos et al. [6] about the threshold for giant components in \(\mathcal{G}(\mathbf{d})\) (formally stated in Theorem 5.1). We show that for two sequences that are close in the sense described in Theorem 2.3, the thresholds for the existence of a giant component coincide. We defer the proof of Theorem 2.3 for now, and give this and all the intermediate results in Section 3.
### Random induced subgraphs of \(\mathbf{G}\)
Now we consider the (site-)percolated random graph model \(G_{\mathbf{d}}(p)\), for some \(p\in(0,1)\). In this model, \(S\) is a random variable where, for each subset \(T\in[n]\), \(\mathbb{P}\left(S=T\right)=p^{|T|}(1-p)^{n-|T|}\). Thus, the subgraph \(G[S]\) is the subgraph of a uniformly random \(G\in\mathcal{G}(\mathbf{d})\) induced on \(S\), where \(S\) is chosen by randomly keeping vertices of \(G\) independently with some probability \(p\), and deleting the rest. Again we suppose that \(\mathbf{d}\) is ordered in non-decreasing order with all entries at least \(1\). Analogously to the case where \(S\) is fixed, we impose the condition that
\[\Delta^{2}(p^{-1}\log M)^{12}\leq\delta pM \tag{2.2}\]
for some \(\delta\to 0\) sufficiently slowly as \(n\to\infty\). We now define a model degree sequence of the percolated random graph \(G_{\mathbf{d}}(p)\).
**Definition 2.5**.: _Let \(\mathbf{d}=(d(1),\ldots,d(n))\) be a non-decreasing sequence of length \(n\). Let \(p\in(0,1)\), and let \(X_{j}\sim\operatorname{Bin}\left(j,p\right)\). For \(k\in\{0,\ldots,J\}\), define_
\[\tilde{N}(k):=\left\lfloor p\sum_{i\in V}\mathbb{P}\left(X_{d(i)}\leq k \right)+\frac{1}{2}\right\rfloor\]
_for \(k\geq 0\), and \(\tilde{N}(-1)=0\). Then define \(n_{k}(\mathbf{d}_{A})=\tilde{N}(k)-\tilde{N}(k-1)\) to be the number of entries in \(\mathbf{d}_{A}\) with value \(k\)._
Now we state the main result of our paper for degree sequences of site-percolated \(\mathcal{G}(\mathbf{d})\).
**Theorem 2.6**.: _Let \(\mathbf{d}\) be a sequence of length \(n\) with even sum and all entries at least 1, and let \(p\in(0,1)\) be such that \(p<1-\varepsilon\) for some constant \(\varepsilon>0\) and \(\Delta^{2}p^{-12}\log^{12}M\leq\delta pM\) for some \(\delta\to 0\). Then the following statements hold with probability \(1-o(1)\) in the percolated random graph \(G_{\mathbf{d}}(p)\)._
1. \(|S|=np\left(1\pm 3\sqrt{\frac{\log n}{pn}}\right)\)_._
2. \(d(S)=pM\left(1\pm\frac{p^{2}}{M^{1/4}}\right)\)_._
3. \(d_{S}(v)=pd(v)\left(1\pm 9\delta^{1/64}\right)\) _for all_ \(v\in S\) _such that_ \(d(v)>2\delta^{1/32}J(p)\)_._
4. _For all_ \(i\leq\frac{1}{3}pJ(p)\)_,_ \[|n_{i}(\mathbf{d}_{S})-n_{i}(\mathbf{d}_{A})|\leq\frac{pn_{i}(\mathbf{d}_{A})}{J(p)^{3}} (1+o(1))+\frac{pJ(p)^{6}}{\sqrt{\log M}}.\]
Analogously to the definition of \(\tilde{y}_{i}\) (given in Section 3), we also define
\[\tilde{w}_{k}:=p\sum_{i\in V}\mathbb{P}\left(X_{d(i)}=k\right).\]
It follows immediately that \(n_{k}(\boldsymbol{d}_{A})=\tilde{w}_{k}\pm 1\). We also analogously define \(J(p):=\delta^{-1/16}p^{-1}\log M\). For \(p=\gamma\) this is equivalent to the definition of \(J\) used previously. This also allows us to consider \(J(\gamma(S))\), the corresponding value of \(J\) for a given set \(S\subset[n]\). This is useful as we often prove results for the percolation model by conditioning on a "nice" choice of \(S\) and then applying results proved in the case where \(S\) is fixed. As such, when proving results about the percolation model, we often write definitions from the previous section (e.g. \(\gamma\), \(Z_{j}\), \(\boldsymbol{d}_{H}\), \(\tilde{y}_{i}\)) with the extra argument of \(S\) to highlight the conditional probability space on which we define them. As in the case where \(S\) is fixed, we apply Theorem 2.6 to determine the threshold for the existence of a giant component in the site-percolated random graph under the conditions given in (2.2).
**Theorem 2.7**.: _Let \(\boldsymbol{d}\) be a sequence of length \(n\) with even sum and all entries at least 1. Let \(p\in(0,1)\) be such that \(\Delta^{2}p^{-12}\log^{12}M=o(pM)\). Let \(G_{\boldsymbol{d}}(p)\) be the site-percolated random graph, where \(G\sim\mathcal{G}(\boldsymbol{d})\). Then \(G_{\boldsymbol{d}}(p)\) a.a.s. contains \((np-n_{0}(\boldsymbol{d}_{A}))(1+o(1))\) non-isolated vertices. Furthermore, \(G_{\boldsymbol{d}}(p)\) a.a.s. contains a component on a positive fraction of the non-isolated vertices if and only if \(R(\boldsymbol{d}_{A})\geq\varepsilon p^{2}M\) for some constant \(\varepsilon>0\)._
Much like in the case where \(S\) is fixed, we prove this by showing that \(\boldsymbol{d}_{S}\) and \(\boldsymbol{d}_{A}\) are a.a.s. close, and then showing that for sequences that are close these thresholds coincide. We give the proof of Theorem 2.6 in Section 6.
## 3 Distribution of the induced vertex degree
In this section we prove Theorem 2.3. Let \(A^{i}_{v}\) denote the set of \(G\in\mathcal{G}(\boldsymbol{d})\) such that \(d_{S}(v)=i\).
**Lemma 3.1**.: _Let \(v\) be an arbitrary vertex in \(S\). Then_
\[\frac{|A^{i}_{v}|}{|A^{i+1}_{v}|}=\frac{i+1}{d(v)-i}\cdot\frac{d(\overline{S} )}{d(S)}\left(1+O\left(\frac{\Delta^{2}}{d(S)}\right)\right).\]
Proof.: We define an operation called a switching that takes a graph \(G\in A^{i+1}_{v}\) to some \(G^{\prime}\in A^{i}_{v}\). Let \(G\in A^{i+1}_{v}\). To perform a switching, choose a vertex \(y\) such that \(vy\in E(G)\) and \(y\in S\), as well as an ordered pair of vertices \((u,x)\) such that \(ux\in E(G)\) and \(u\in\overline{S}\) (\(x\) can be in either \(S\) or \(\overline{S}\)). It is also required that
* (a) the vertices \(\{u,v,x,y\}\) are distinct, and
* (b) \(xy\notin E(G)\) and \(uv\notin E(G)\).
The switching deletes edges \(vy\) and \(ux\), replacing these edges with \(uv\) and \(xy\) and hence creating a new multigraph \(G^{\prime}\), and the conditions (a) and (b) imply that \(G^{\prime}\in A^{i}_{v}\). This switching is illustrated in Figure 1.
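Purely as an illustration (not part of the proof), the following sketch applies one such switching to an explicit graph using the `networkx` library, with random choices retried until conditions (a) and (b) hold; the helper function is ours.

```python
import random
import networkx as nx

def apply_switching(G, v, S, rng=random.Random(0), tries=1000):
    """Attempt one switching from the proof of Lemma 3.1: pick y in S with vy an edge,
    and an ordered pair (u, x) with ux an edge and u outside S, subject to (a) and (b);
    then replace the edges vy, ux by uv, xy.  Reduces d_S(v) by exactly 1 on success."""
    S = set(S)
    ys = [y for y in G.neighbors(v) if y in S]
    pairs = [(u, x) for u, x in G.edges() if u not in S] + \
            [(x, u) for u, x in G.edges() if x not in S]
    if not ys or not pairs:
        return False
    for _ in range(tries):
        y = rng.choice(ys)
        u, x = rng.choice(pairs)
        if len({u, v, x, y}) == 4 and not G.has_edge(x, y) and not G.has_edge(u, v):
            G.remove_edge(v, y); G.remove_edge(u, x)
            G.add_edge(u, v); G.add_edge(x, y)
            return True
    return False

# toy usage: a 6-cycle with S = {0, 1, 2}; vertex 1 starts with induced degree 2
G = nx.cycle_graph(6)
S = {0, 1, 2}
print(apply_switching(G, 1, S))
```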
Now we find upper and lower bounds on the number of switchings that can be applied to a given \(G\in A^{i+1}_{v}\). There are \(i+1\) choices for a vertex \(y\) such that \(vy\in E(G)\) and \(y\in S\). There are \(d(\overline{S})\) choices for a vertex \(u\in\overline{S}\) and neighbour \(x\). Thus, there are at most \((i+1)d(\overline{S})\) switchings that take \(G\in A^{i+1}_{v}\) to some \(G^{\prime}\in A^{i}_{v}\). To determine a corresponding lower bound, we note that since \(G\) has maximum degree at most \(\Delta\), the number of choices for \(\{u,x,y\}\) as described above that violate (a) is \(O((i+1)\Delta)\) and for (b) it is \(O((i+1)\Delta^{2})\). Hence the number of valid switchings that can be applied to each \(G\in A^{i+1}_{v}\) is \((i+1)(d(\overline{S})+O(\Delta^{2}))\).
Now we use a very similar argument to count the switchings that create a particular \(G^{\prime}\in A^{i}_{v}\). There are \(d(v)-i\) choices for the vertex \(u\), and \(d(S)\) choices for an ordered pair of vertices \((x,y)\) such that \(y\in S\) and \(xy\in E(G^{\prime})\).
So an upper bound is \((d(v)-i)d(S)\). The number of these combinations that are invalid, due to a vertex being repeated or the edge \(ux\) or \(vy\) being present, is \(O(\Delta^{2})(d(v)-i)\). It follows that the number of switchings that create \(G^{\prime}\) is \((d(v)-i)(d(S)+O(\Delta^{2}))\).
From the conclusions of the previous two paragraphs, the total number of switchings applicable to graphs in \(A_{v}^{i+1}\) can be counted in two different ways as \(|A_{v}^{i+1}|(i+1)(d(\overline{S})+O(\Delta^{2}))\) and \(|A_{v}^{i}|(d(v)-i)(d(S)+O(\Delta^{2}))\). The lemma follows, since \(d(\overline{S})=\Theta(M)\) and \(\Delta^{2}=o(d(S))\).
Define \(S_{\text{small}}=\{i_{1},\ldots,i_{\ell}\}\) where \(\ell\) is the smallest index such that \(d(i_{j})>J\) for all \(j>\ell\). That is, \(S_{\text{small}}\) is the set of vertices in \(S\) with degree (in \(G\)) at most \(J\). Naturally, we can also define \(S_{\text{big}}=S\backslash S_{\text{small}}\). Define
\[\tilde{y}_{i}=\sum_{v\in S_{\text{small}}}\mathbb{P}\left(Z_{d(v)}=i\right).\]
For small \(i\) (that is, smaller than \(c\gamma J\) for some constant \(c<1\)), \(\tilde{y}_{i}\) is very close to \(n_{i}(\boldsymbol{d}_{H})\), the number of entries in the sequence \(\boldsymbol{d}_{H}\) with value \(i\). One noteworthy and straightforward consequence of Theorem 2.3 is that \(n_{i}(\boldsymbol{d}_{H})=\tilde{y}_{i}\pm(1+o(M^{-5}))\) for all \(i\leq\frac{1}{2}\gamma J\). For the sequence \(\boldsymbol{d}_{S}\), we can analogously define
\[Y_{i}=\sum_{v\in S_{\text{small}}}\mathbbm{1}_{\{d_{S}(v)=i\}}. \tag{3.1}\]
These definitions allow us to consider the behaviour of vertices in \(S_{\text{small}}\) and \(S_{\text{big}}\) somewhat separately. This is useful in certain circumstances, particularly when studying the distribution of the number of vertices with very low degrees (e.g. \(0\), \(1\), or \(2\)) in \(\boldsymbol{d}_{H}\) and \(\boldsymbol{d}_{S}\), as the number of vertices in \(S_{\text{big}}\) with very low induced degree is a.a.s. \(0\). More specifically, Theorem 2.3 implies that a.a.s. \(n_{i}(\boldsymbol{d}_{S})=Y_{i}\) for all \(i\leq\frac{1}{2}\gamma J\). A later result (Lemma 3.6) then implies that \(\mathbb{E}\left[Y_{i}\right]=\tilde{y}_{i}\left(1+o(J^{-6})\right)\) for all \(i\leq J\).
**Remark 3.2**.: _Let \(S_{j}\) be the set of \(i\in S\) such that \(d(i)=j\). Then for \(k\in\left[1,\frac{1}{2}\gamma J\right]\),_
\[\tilde{y}_{k}=\sum_{i\in S_{\text{small}}}\mathbb{P}\left(Z_{d(i)}=k\right)= \sum_{j\leq J}|S_{j}|\mathbb{P}\left(Z_{j}=k\right)=\sum_{j\leq J}\frac{d(S) }{d(\overline{S})}\frac{j-k+1}{k}|S_{j}|\mathbb{P}\left(Z_{j}=k-1\right).\]
_With some naive bounds on the value of \(\frac{j-k+1}{k}\), this gives useful bounds on the ratios between successive values of \(\tilde{y}_{i}\), and thus on \(n_{i}(\boldsymbol{d}_{H})\). Since \(j\leq J\),_
\[\tilde{y}_{k}=\sum_{j\leq J}|S_{j}|\mathbb{P}\left(Z_{j}=k\right)\leq\frac{J }{k}\frac{d(S)}{d(\overline{S})}\sum_{j\leq J}|S_{j}|\mathbb{P}\left(Z_{j}=k -1\right)=\frac{\gamma J}{k(1-\gamma)}\tilde{y}_{k-1}.\]
_Thus, \(\tilde{y}_{k}=O(\gamma J\tilde{y}_{k-1})\). More commonly, we use the form \(\tilde{y}_{k-1}=\Omega\left(\frac{\tilde{y}_{k}}{\gamma J}\right)\)._
Figure 1: A switching. Here \(v,y\in S\) and \(u\in\overline{S}\). Edges present in \(G\) (on the left, respectively \(G^{\prime}\) on the right) are given as solid lines, forbidden edges are given as dashed. Other edges may be present or absent.
### Concentration of large degrees
Recall that \(d_{S}(v)\) is the degree of vertex \(v\) in \(G[S]\). Also recall the definition of \(\gamma=d(S)/M\), and that \(\delta\) is an arbitrary function such that \(\delta\to 0\) and \(\Delta^{2}(\gamma^{-1}\log M)^{12}\leq\delta d(S)\).
**Lemma 3.3**.: _Suppose \(\varepsilon=4\delta^{1/64}\) and define \(i_{0}=i_{0}(v)=\gamma d(v)\) (not necessarily an integer). Then, for \(n\) sufficiently large,_
\[\mathbb{P}\big{(}d_{S}(v)\notin[i_{0}(1-2\varepsilon),i_{0}(1+2\varepsilon)]\big{)}<2d(v)\exp\big{(}-\varepsilon^{2}i_{0}/2\big{)}\,.\]
Proof.: We first prove that the probability that \(d_{S}(v)<i_{0}(1-2\varepsilon)\) is less than \(d(v)\exp\big{(}-\frac{1}{2}\varepsilon^{2}i_{0}\big{)}\). Define \(i_{k}=(1-k\varepsilon)i_{0}\) for all \(k>0\). Recall that \(|A_{v}^{i}|\) is the number of graphs in \(\mathcal{G}(\boldsymbol{d})\) such that \(d_{S}(v)=i\). For all \(i\leq i_{1}-1\), Lemma 3.1 implies that
\[\frac{|A_{v}^{i}|}{|A_{v}^{i+1}|}=\frac{i+1}{d(v)-i}\frac{d( \overline{S})}{d(S)}\left(1+O\left(\frac{\Delta^{2}}{d(S)}\right)\right)\leq (1-\varepsilon)\,\frac{i_{0}}{d(v)-i_{0}}\frac{d(\overline{S})}{d(S)}\left( 1+O\left(\frac{\Delta^{2}}{d(S)}\right)\right).\]
By definition of \(i_{0}\),
\[\frac{i_{0}}{d(v)-i_{0}}=\frac{\frac{d(S)}{M}}{1-\frac{d(S)}{M}} =\frac{d(S)}{d(\overline{S})}.\]
Thus, for all \(i\leq i_{1}-1\),
\[\frac{|A_{v}^{i}|}{|A_{v}^{i+1}|}\leq(1-\varepsilon)\left(1+O\left(\frac{ \Delta^{2}}{d(S)}\right)\right)<1-\frac{3}{4}\varepsilon, \tag{3.2}\]
for \(n\) sufficiently large, since \(\Delta^{2}=o(\varepsilon d(S))\). Hence for all \(i\leq\lceil i_{2}\rceil-1\),
\[\frac{|A_{v}^{i}|}{|A_{v}^{\lceil i_{1}\rceil}|}\leq\left(1-\frac{3}{4} \varepsilon\right)^{\varepsilon i_{0}-1}<\exp\left(-\frac{2}{3}\varepsilon^{2 }i_{0}+O\left(\varepsilon^{3}i_{0}\right)\right)<\exp\left(-\frac{1}{2} \varepsilon^{2}i_{0}\right),\]
where the last inequality holds for \(n\) sufficiently large. So if \(i\leq\lceil i_{2}\rceil-1\), it follows that \(\mathbb{P}\left(d_{S}(v)=i\right)<\exp\left(-\frac{1}{2}\varepsilon^{2}i_{0}\right)\). Performing a union bound over all possible induced degrees \(i\leq i_{2}\) gives that
\[\mathbb{P}\left(d_{S}(v)\leq i_{2}\right)<d(v)\exp\left(-\frac{1}{2} \varepsilon^{2}i_{0}\right).\]
The argument for the upper bound is obtained symmetrically mutatis mutandis, and the lemma follows from the union bound.
**Lemma 3.4**.: _As in Lemma 3.3, suppose \(\varepsilon=4\delta^{1/64}\). The probability that_
\[d_{S}(v)\in[\gamma d(v)(1-2\varepsilon),\gamma d(v)(1+2\varepsilon)]\]
_for all vertices \(v\in S\) such that \(d(v)>\delta^{-1/32}\gamma^{-1}\log M\) is \(1-o\left(M^{-5}\right)\)._
Proof.: We apply Lemma 3.3 along with the union bound over all vertices \(v\in S\) such that \(d(v)>\delta^{-1/32}\gamma^{-1}\log M\). Lemma 3.3 implies that the probability that \(d_{S}(v)\) is outside the specified range is at most \(2n\exp\big{(}-\frac{1}{2}\varepsilon^{2}i_{0}\big{)}\). Note that by assumption, \(i_{0}>\delta^{-1/32}\log M\), where \(\delta\to 0\). Combining this with the union bound implies that the probability that there exists some vertex \(v\) with degree greater than \(\delta^{-1/32}\gamma^{-1}\log M\) such that \(d_{S}(v)\) is outside its specified range is at most \(2n^{2}\exp\left(-\frac{1}{2}\varepsilon^{2}\delta^{-1/32}\log M\right)=2n^{2}M ^{-8}\). Since \(n\leq M\), the claim holds.
### Distribution of small vertex degrees
Define \(\bar{J}=\min\{J,\Delta\}\). Using this notation simplifies some arguments by allowing us to combine cases where \(J\leq\Delta\) and \(J>\Delta\). In the following lemma, we use a slightly more complicated switching than in Lemma 3.1, moving three edges instead of the usual two. The reason for this is that we wish to preserve the degrees of two adjacent vertices, \(v_{1}\) and \(v_{2}\), in \(G[S]\), while switching away the edge between them. To use the previous switching, this means that the two other adjacent vertices in the switching must be in \(S\), and adjacent. Possible variations in the number of choices of such a pair of adjacent vertices would cause a problem. Instead, we use a trick with its origins in the switchings introduced by McKay and Wormald [10], whereby a third pair of vertices are involved in order to make the number of switchings much more stable.
**Lemma 3.5**.: _Suppose \((\boldsymbol{d},S)\) satisfies condition (2.1). Let \(k\) be fixed, let \(\{v_{1},\ldots,v_{k}\}\subset S_{\mathrm{small}}\), and let \(G\) be a uniformly random graph with degree sequence \(\boldsymbol{d}\). Then_
\[\mathbb{P}\left(\left.v_{1}v_{2}\in E(G)\right|d_{S}(v_{1})=i_{1},\ldots,d_{S }(v_{k})=i_{k}\right)=O\left(\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\]
_for all \(i_{j}\leq d(v_{j})\) for \(j\leq k\), and \(\mathbb{P}\left(v_{1}v_{2}\in E(G)\right)=O\left(\frac{\bar{J}^{2}M}{d(S)^{2} }\right)\)._
Proof.: First note that if one of \(i_{1}\) or \(i_{2}\) is equal to \(0\), then the probability in question is \(0\). Thus we may suppose that \(i_{1},\ i_{2}>0\).
Let \(A_{v_{1},v_{2}}\) be the subset of \(\mathcal{G}(\boldsymbol{d})\) consisting of the graphs where \(v_{1}\) and \(v_{2}\) are adjacent and each vertex \(v_{j}\) has induced degree \(i_{j}\) for \(j\leq k\). Similarly, let \(B_{v_{1},v_{2}}\) be the subset of \(\mathcal{G}(\boldsymbol{d})\) where \(v_{1}\) and \(v_{2}\) are not adjacent and each vertex \(v_{j}\) has induced degree \(i_{j}\) for \(j\leq k\). We define a switching between \(A_{v_{1},v_{2}}\) and \(B_{v_{1},v_{2}}\) as follows. Suppose \(G\in A_{v_{1},v_{2}}\). To perform a switching, choose two ordered pairs of vertices in \(V(G)\), \((x,y)\) and \((a,b)\), such that \(ab,\ xy\in E(G)\), and \(y,\ b\in S\), and with the additional requirements that
* (a) the vertices \(\{v_{1},v_{2},a,b,x,y\}\) are distinct, with the exception that \(y=b\) is permissible,
* (b) \(v_{1}y,\ ax,\ v_{2}b\notin E(G)\), and
* (c) the degrees of \(v_{1},\ldots,v_{k}\) in \(G[S]\) are unchanged by the switching.
The switching deletes the edges \(v_{1}v_{2},\ xy,\ ab\) and replaces them with \(v_{1}y,\ ax,\ v_{2}b\), creating a graph \(G^{\prime}\in B_{v_{1},v_{2}}\). A diagram illustrating this switching is given in Figure 2.
First we determine a lower bound on the number of switchings that can be applied to each \(G\in A_{v_{1},v_{2}}\). Since \(y\) and \(b\) are in \(S\), there are \(d(S)^{2}\) choices for \(\{a,b,x,y\}\) ignoring the constraints (a) - (c). Noting that the induced degrees of \(v_{1},\ldots,v_{k}\) are unchanged by the switching if \(b,y\in S\) and \(\{v_{1},\ldots,v_{k},a,b,x,y\}\) are distinct, we see
Figure 2: A switching. Present edges are given as solid lines, forbidden edges are given as dashed. Other edges can be present or absent.
that the number of choices for \(\{v_{1},v_{2},a,b,x,y\}\) that violate (a) or (c) is \(O(d(S)k\bar{J})\), and the number of choices that violate (b) is \(O(d(S)\Delta^{2})\). Since \(k\) is fixed, this implies that the number of switchings that can be applied to each \(G\in A_{v_{1},v_{2}}\) is \(d(S)(d(S)-O(\Delta^{2}))\).
Now we determine an upper bound on the number of switchings that create a particular \(G^{\prime}\in B_{v_{1},v_{2}}\). The definition of \(B_{v_{1},v_{2}}\) implies that the number of choices for \(y\) is \(i_{1}\), and similarly the number of choices for \(b\) is \(i_{2}\). The number of choices for the adjacent pair \((a,x)\) is at most \(M\). Thus, the number of switchings that create \(G^{\prime}\) is at most \(i_{1}i_{2}M\).
Combining these two bounds gives
\[\frac{|A_{v_{1},v_{2}}|}{|B_{v_{1},v_{2}}|}\leq\frac{i_{1}i_{2}M}{d(S)^{2}} \left(1+O\left(\frac{\Delta^{2}}{d(S)}\right)\right).\]
Since \(\Delta^{2}=o(d(S))\) by assumption, the multiplicative error term is \(1+o(1)\). Thus, the probability that the vertices \(v_{1}\) and \(v_{2}\) are adjacent, conditional on the induced degrees of \(\{v_{1},\ldots,v_{k}\}\), is at most
\[\mathbb{P}\left(\left.v_{1}v_{2}\in E(G)\right|d_{S}(v_{1})=i_{1},\ldots,d_{S} (v_{k})=i_{k}\right)=\frac{|A_{v_{1},v_{2}}|}{|A_{v_{1},v_{2}}|+|B_{v_{1},v_{2 }}|}\leq\frac{\bar{J}^{2}M}{d(S)^{2}}(1+o(1)),\]
since both \(i_{1}\) and \(i_{2}\) are at most \(d(v_{1})\) and \(d(v_{2})\) respectively and \(v_{1},v_{2}\in S_{\text{small}}\). This proves the first claim, and the second claim follows immediately from the law of total probability.
**Lemma 3.6**.: _Suppose \((\boldsymbol{d},S)\) satisfies condition (2.1). Let \(k\) be fixed and \(\{v_{1},\ldots,v_{k}\}\subset S_{\text{small}}\). Let \(Z_{j}\sim\operatorname{Bin}\left(j,\frac{d(S)}{M}\right)\). Then, uniformly for all \(i_{1},\ldots,i_{k}\leq\bar{J}\), we have_
\[\mathbb{P}\left(d_{S}(v_{1})=i_{1},\ldots,d_{S}(v_{k})=i_{k}\right) =\prod_{j=1}^{k}\mathbb{P}\left(d_{S}(v_{j})=i_{j}\right)\left(1+ O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}} \right)\right)\right)\] \[=\prod_{j=1}^{k}\mathbb{P}\left(Z_{d(v_{j})}=i_{j}\right)\left(1+ O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}} \right)\right)\right).\]
Proof.: We condition on the event that \(d_{S}(v_{j})=i_{j}\) for all \(j\geq 2\), for an arbitrary choice of \((i_{2},\ldots,i_{k})\) where \(i_{j}\leq d(v_{j})\). It suffices to show that
\[\mathbb{P}\left(\left.d_{S}(v_{1})=i_{1}\right|d_{S}(v_{2})=i_{2},\ldots,d_{S} (v_{k})=i_{k}\right)=\mathbb{P}\left(Z_{d(v_{1})}=i_{1}\right)\left(1+O\left( \bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right) \right)\right),\]
as well as
\[\mathbb{P}\left(d_{S}(v_{1})=i_{1}\right)=\mathbb{P}\left(Z_{d(v_{1})}=i_{1} \right)\left(1+O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M} {d(S)^{2}}\right)\right)\right).\]
Let \(C_{i}\) be the set of graphs in \(\mathcal{G}(\boldsymbol{d})\) such that \(d_{S}(v_{1})=i\) and \(d_{S}(v_{j})=i_{j}\) for all \(j\geq 2\). That is, for all \(G\in C_{i}\), \((d_{S}(v_{1}),d_{S}(v_{2}),\ldots,d_{S}(v_{k}))=(i,i_{2},\ldots,i_{k})\). We apply a switching similar to the one used in Lemma 3.1 to switch between \(C_{i+1}\) and \(C_{i}\). This switching is illustrated in Figure 3. The important difference between this switching and the switching used in the proof of Lemma 3.1 is that the induced degrees of vertices \(v_{2},\ldots,v_{k}\) are maintained. Other than this extra restriction, the edges are chosen in the same way as the switching used in Lemma 3.1.
Now we define the switching formally. Suppose \(G\in C_{i+1}\). Choose a vertex \(y\) such that \(v_{1}y\in E(G)\) and \(y\in S\), as well as an ordered pair of vertices \((u,x)\) such that \(ux\in E(G)\) and \(u\in\overline{S}\), with the extra restrictions that
* (a) the vertices \(\{u,v_{1},x,y\}\) are distinct,
* (b) \(xy\notin E(G)\) and \(uv_{1}\notin E(G)\),
* (c) the degrees of \(v_{1},\ldots,v_{k}\) in \(G[S]\) are unchanged by the switching.
The switching deletes edges \(v_{1}y\) and \(ux\), replacing these edges with \(uv_{1}\) and \(xy\) and hence creating a new graph \(G^{\prime}\in C_{i}\).
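To make the operation concrete, the following minimal sketch applies one such switching to an edge set stored as a set of unordered pairs; the data representation and the example edges are assumptions of the sketch, and the validity conditions listed above are assumed to have been checked by the caller.

```python
def apply_switching(edges, v1, y, u, x):
    """Replace the edges v1-y and u-x by u-v1 and x-y.

    `edges` is a set of frozensets (unordered vertex pairs); the switching's
    validity conditions are assumed to have been verified beforehand.
    """
    new_edges = set(edges)
    new_edges.remove(frozenset((v1, y)))
    new_edges.remove(frozenset((u, x)))
    new_edges.add(frozenset((u, v1)))
    new_edges.add(frozenset((x, y)))
    return new_edges

# Tiny example: v1 = 0 and y = 1 lie in S, u = 5 lies outside S, x = 2.
G_edges = {frozenset(e) for e in [(0, 1), (5, 2), (1, 3), (2, 3)]}
print(sorted(tuple(sorted(e)) for e in apply_switching(G_edges, 0, 1, 5, 2)))
```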
To count the switchings that can be applied to each \(G\in C_{i+1}\), we carry out a computation analogous to that in the proof of Lemma 3.1. For each \(G\in C_{i+1}\), there are \((i+1)d(\overline{S})\) choices for \(\{u,v_{1},x,y\}\) such that \(v_{1}y,ux\in E(G)\), \(y\in S\) and \(u\notin S\). To estimate the number of switchings, we bound from above the expected number of choices for \(\{u,v_{1},x,y\}\) such that (a), (b) or
\[(c^{\prime})\;\left\{v_{2},\ldots,v_{k}\right\}\cap\{u,v_{1},x,y\}=\emptyset\]
is false. Note that \((c^{\prime})\) is slightly stricter than (c). As before, there are at most \(3(i+1)\Delta^{2}\) choices for \(\{u,v_{1},x,y\}\) that do not satisfy (a) or (b). Now we consider (c). By assumption, \(v_{1}\neq v_{j}\) for \(j\geq 2\), and \(u\notin S\). Thus, the only possibilities for a non-empty intersection are if \(x\in\{v_{2},\ldots,v_{k}\}\) or \(y\in\{v_{2},\ldots,v_{k}\}\). In the former case, there are at most \(k\Delta\) choices for a neighbour \(u\) of \(x\). With at most \(i+1\) choices for \(y\) adjacent to \(v_{1}\), this means that there are at most \((i+1)k\Delta\) choices for \(\{u,v_{1},x,y\}\). In the second case, there are \(O(1)\) choices for \(y\in\{v_{2},\ldots,v_{k}\}\), and for each one Lemma 3.5 gives
\[\mathbb{P}\left(\left.v_{1}y\in E(G)\,\right|\,d_{S}(v_{1})=i+1,\ldots,d_{S}(v_{k})=i_{k}\right)=O\left(\frac{J^{2}M}{d(S)^{2}}\right).\]
In such cases, there are \(d(\overline{S})\) choices for \(ux\). Thus, the expected number of choices for \(\{u,v_{1},x,y\}\) where \(y\in\{v_{2},\ldots,v_{k}\}\), for a random \(G\in C_{i+1}\), is \(d(\overline{S})O\left(\frac{J^{2}M}{d(S)^{2}}\right)\). Therefore, the average number of valid switchings that can be applied to each \(G\in C_{i+1}\) is
\[(i+1)(d(\overline{S})-O(\Delta^{2}))-d(\overline{S})O\left(\frac{J^{2}M}{d( S)^{2}}\right)=(i+1)d(\overline{S})\left(1-O\left(\frac{\Delta^{2}}{M}+\frac{J^{2}M }{(i+1)d(S)^{2}}\right)\right),\]
since \(d(\overline{S})=\Theta(M)\).
Next we determine upper and lower bounds for the number of switchings that create a particular \(G^{\prime}\in C_{i}\). There are \(d(v_{1})-i\) choices for the vertex \(u\notin S\) such that \(v_{1}u\in E(G)\), and \(d(S)\) choices for an ordered pair of adjacent vertices \((x,y)\) such that \(y\in S\). Each such choice is valid if all of the following conditions are satisfied:
1. the vertices \(\{u,v_{1},x,y\}\) are distinct,
2. \(v_{1}y\notin E(G^{\prime})\) and \(ux\notin E(G^{\prime})\),
3. the induced degrees of \(v_{2},\ldots,v_{k}\) are unchanged by the switching.
Figure 3: The switching used in this proof. Here \(v_{1},y\in S\) and \(u\in\overline{S}\). Importantly, the switching does not alter \(d_{S}(v_{2}),\ldots,d_{S}(v_{k})\).
By the same reasoning as used in the proof of Lemma 3.1, the number of choices for these vertices that do not satisfy one of (i) or (ii) is \(O((d(v_{1})-i)\Delta^{2})\).
Again, as an upper bound on the number of choices that do not satisfy (iii), we count the choices where \(\{u,v_{1},x,y\}\) and \(\{v_{2},\ldots,v_{k}\}\) intersect non-trivially. Note that \(v_{1}\neq v_{j}\) for any \(j\geq 2\) by assumption, and \(u\neq v_{j}\) since \(u\in\overline{S}\). Thus, we only need to consider \(x\in\{v_{2},\ldots,v_{k}\}\) or \(y\in\{v_{2},\ldots,v_{k}\}\), which give at most \(2(d(v_{1})-i)\sum_{j=2}^{k}d(v_{j})\) choices for \(\{u,v_{1},x,y\}\). Since \(k=O(1)\) and each \(v_{j}\) has degree at most \(\bar{J}\) (since \(\bar{J}\) bounds the maximum degree of the vertices in \(S_{\text{small}}\)), the number of switchings that can create a given \(G^{\prime}\in C_{i}\) is
\[(d(v_{1})-i)d(S)\left(1-O\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}}{d(S)} \right)\right).\]
Thus, it follows that
\[\frac{|C_{i}|}{|C_{i+1}|}=\frac{i+1}{d(v_{1})-i}\frac{d(\overline{S})}{d(S)} \left(1+O\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right) \right). \tag{3.3}\]
Let \(p_{i}=\mathbb{P}\left(\left.d_{S}(v_{1})=i\right|d_{S}(v_{2})=i_{2},\ldots,d_ {S}(v_{k})=i_{k}\right)\) for \(i\in\{0,\ldots,d(v_{1})\}\). Recall that \(\gamma=d(S)/M\). Then (3.3) implies that, for all \(i\in\{0,\ldots,d(v_{1})\}\),
\[\frac{p_{i+1}}{p_{i}}=\frac{d(v_{1})-i}{i+1}\frac{d(S)}{d(\overline{S})}\left( 1+O\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)= \frac{d(v_{1})-i}{i+1}\frac{\gamma}{1-\gamma}\left(1+O\left(\frac{\Delta^{2}} {d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right).\]
Thus, we can express \(p_{i}\) in terms of \(p_{0}\):
\[p_{i} =\binom{d(v_{1})}{i}\left(\frac{\gamma}{1-\gamma}\right)^{i}p_{0}\left(1+O\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)^{i}\] \[=\binom{d(v_{1})}{i}\left(\frac{\gamma}{1-\gamma}\right)^{i}p_{0}\exp\left(O\left(i\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\right)\] \[=\binom{d(v_{1})}{i}\left(\frac{\gamma}{1-\gamma}\right)^{i}p_{0}\left(1+O\left(d(v_{1})\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\right), \tag{3.4}\]
since \(i\leq d(v_{1})\) and the error term goes to \(0\) when \(d(v_{1})\leq\bar{J}\). The sum of all \(p_{i}\) must be equal to \(1\), and thus
\[1=\sum_{i=0}^{d(v_{1})}\left[\binom{d(v_{1})}{i}\left(\frac{\gamma}{1-\gamma} \right)^{i}\left(1+O\left(d(v_{1})\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J} ^{2}M}{d(S)^{2}}\right)\right)\right)p_{0}\right].\]
Since the error is uniformly bounded for all terms in the sum, and all terms are positive, the relative error of the whole sum is at most \(O\left(d(v_{1})\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}} \right)\right)\). Thus,
\[\sum_{i=0}^{d(v_{1})}\left[\binom{d(v_{1})}{i}\left(\frac{\gamma}{1-\gamma} \right)^{i}p_{0}\right]=1+O\left(d(v_{1})\left(\frac{\Delta^{2}}{d(S)}+\frac{ \bar{J}^{2}M}{d(S)^{2}}\right)\right).\]
It follows from the previous equation that
\[p_{0} =\left(\sum_{i=0}^{d(v_{1})}\binom{d(v_{1})}{i}\left(\frac{\gamma} {1-\gamma}\right)^{i}\right)^{-1}\left(1+O\left(d(v_{1})\left(\frac{\Delta^{2}} {d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\right)\] \[=(1-\gamma)^{d(v_{1})}\left(1+O\left(d(v_{1})\left(\frac{\Delta^ {2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\right).\]
Applying Equation (3.4) for all \(i\leq d(v_{1})\), we obtain that
\[\mathbb{P}\left(\left.d_{S}(v_{1})=i_{1}\,\right|\,d_{S}(v_{2})=i_{2},\ldots,d_{S}(v_{k})=i_{k}\right)=\mathbb{P}\left(Z_{d(v_{1})}=i_{1}\right)\left(1+O\left(d(v_{1})\left(\frac{\Delta^{2}}{d(S)}+\frac{J^{2}M}{d(S)^{2}}\right)\right)\right) \tag{3.5}\]
for each choice of \(i_{j}\leq d(v_{j})\) for all \(j\leq k\). By the law of total probability, summing over all ordered tuples \((i_{2},\ldots,i_{k})\) such that \(i_{j}\leq d(v_{j})\) for all \(j\geq 2\) gives that
\[\mathbb{P}\left(d_{S}(v_{1})=i_{1}\right) =\sum_{(i_{2},\ldots,i_{k})}\mathbb{P}\left(d_{S}(v_{1})=i_{1}\,\middle|\,\bigcap_{j=2}^{k}\{d_{S}(v_{j})=i_{j}\}\right)\mathbb{P}\left(\bigcap_{j=2}^{k}\{d_{S}(v_{j})=i_{j}\}\right)\] \[=\mathbb{P}\left(Z_{d(v_{1})}=i_{1}\right)\left(1+O\left(d(v_{1})\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\right). \tag{3.6}\]
Since \(d(v_{1})\leq\bar{J}\), the two required bounds follow.
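The binomial picture behind Lemma 3.6 can also be checked numerically. The sketch below samples a random multigraph by uniform stub-matching (the configuration model), used here only as a convenient stand-in for a uniform simple graph with the given degrees; the degree sequence, the choice of \(S\) and the tracked vertex are illustrative assumptions rather than quantities from the paper.

```python
import random
from collections import Counter
from math import comb

def stub_matching(degrees, rng):
    """Uniform random pairing of degree stubs (a configuration-model multigraph)."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[0::2], stubs[1::2]))

degrees = [3] * 960 + [12] * 40           # toy degree sequence with small maximum degree
S = set(range(400))                       # an arbitrary vertex subset
gamma = sum(degrees[v] for v in S) / sum(degrees)   # gamma = d(S)/M

rng, v, trials = random.Random(1), 0, 3000          # track one degree-3 vertex of S
counts = Counter()
for _ in range(trials):
    d_S_v = sum((a == v and b in S) or (b == v and a in S)
                for a, b in stub_matching(degrees, rng))
    counts[d_S_v] += 1

for i in range(degrees[v] + 1):           # empirical law of d_S(v) versus Bin(d(v), gamma)
    pmf = comb(degrees[v], i) * gamma**i * (1 - gamma)**(degrees[v] - i)
    print(i, round(counts[i] / trials, 3), round(pmf, 3))
```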
With this result, we can prove Theorem 2.3(b). We also use the following Chernoff bound: if \(X\sim\operatorname{Bin}\left(n,p\right)\) and \(\varepsilon>0\), then
\[\mathbb{P}\left(|X-np|\geq\varepsilon np\right)\leq 2\exp\left(- \varepsilon^{2}np/3\right). \tag{3.7}\]
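As a quick numerical sanity check of (3.7), one can compare the exact two-sided binomial tail with the bound; the parameter values below are arbitrary.

```python
from math import comb, exp

def two_sided_tail(n, p, eps):
    """Exact P(|X - np| >= eps * np) for X ~ Bin(n, p)."""
    mu = n * p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k - mu) >= eps * mu)

n, p, eps = 500, 0.2, 0.3
print(two_sided_tail(n, p, eps))      # exact tail probability
print(2 * exp(-eps**2 * n * p / 3))   # the Chernoff bound (3.7); larger, as it must be
```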
**Lemma 3.7**.: _For all \(k\) such that \(d(i_{k})\leq\delta^{1/32}J\), the following holds. We have \(d_{H}(k)\leq 2\gamma\delta^{1/32}J\), and with probability \(1-o(M^{-2})\), we have \(d_{S}(k)\leq 2\gamma\delta^{1/32}J\)._
Proof.: Suppose that \(k\) is such that \(d(i_{k})\leq\delta^{1/32}J\). The first claim is equivalent to the claim that \(N(2\gamma\delta^{1/32}J)\geq k\), where \(N(x)\) is defined in Definition 2.2 as the sum over all \(i\in S\) of the probability that \(Z_{d(i)}\leq x\), rounded to the nearest integer. Since \(\boldsymbol{d}\) is ordered in non-decreasing order, we know that \(d(i_{j})\leq d(i_{k})\leq\delta^{1/32}J\) for all \(j\leq k\). Thus, the Chernoff bound given in (3.7) implies that
\[\mathbb{P}\left(Z_{d(i_{j})}\leq 2\gamma\delta^{1/32}J\right)\geq\mathbb{P}\left(Z_{d(i_{k})}\leq 2\gamma\delta^{1/32}J\right)\geq\mathbb{P}\left(Z_{\delta^{1/32}J}\leq 2\gamma\delta^{1/32}J\right)=1-o(M^{-3}) \tag{3.8}\]
for all \(j\leq k\). Therefore,
\[\sum_{i\in S}\mathbb{P}\left(Z_{d(i)}\leq 2\gamma\delta^{1/32}J\right)\geq k- o(M^{-2}),\]
and thus \(N(2\gamma\delta^{1/32}J)\geq k\). This proves the first claim. For the second claim, note that if \(d(k)\leq\delta^{1/32}J\), then \(k\in S_{\text{small}}\). Therefore, Lemma 3.6 implies that \(\mathbb{P}\left(d_{S}(k)=j\right)=\mathbb{P}\left(Z_{d(i_{k})}=j\right)(1+o(1))\). Thus,
\[\mathbb{P}\left(d_{S}(k)>2\gamma\delta^{1/32}J\right)=\mathbb{P} \left(Z_{d(i_{k})}>2\gamma\delta^{1/32}J\right)(1+o(1))\leq\mathbb{P}\left(Z_ {\delta^{1/32}J}>2\gamma\delta^{1/32}J\right)(1+o(1)).\]
Combining this with Equation (3.8) then implies that \(\mathbb{P}\left(d_{S}(k)>2\gamma\delta^{1/32}J\right)=o(M^{-3})\). The second claim then follows from the union bound over all such \(k\).
### Concentration of the counts of small vertex degrees
Recall the definitions of \(Y_{i}\) and \(\tilde{y}_{i}\):
\[Y_{i}=\sum_{j\in S_{\text{small}}}\mathbb{1}_{\{d_{S}(j)=i\}} \quad\text{and}\quad\tilde{y}_{i}=\sum_{j\in S_{\text{small}}}\mathbb{P} \left(Z_{d(j)}=i\right).\]
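These quantities, together with \(N(x)\) and \(n_{i}(\mathbf{d}_{H})\) from Definition 2.2, are easy to evaluate for a concrete pair \((\boldsymbol{d},S)\). A minimal sketch follows; the degrees of the vertices of \(S\), the value of \(\gamma\) and the cut-off used for \(S_{\text{small}}\) are assumptions of the example.

```python
from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(n, p, x):
    return sum(binom_pmf(n, p, k) for k in range(min(n, x) + 1)) if x >= 0 else 0.0

def N(x, degs_in_S, gamma):
    """Definition 2.2: expected number of vertices of S with Bin(d(i), gamma) <= x, rounded."""
    return int(sum(binom_cdf(d, gamma, x) for d in degs_in_S) + 0.5)

def n_i_dH(i, degs_in_S, gamma):
    return N(i, degs_in_S, gamma) - N(i - 1, degs_in_S, gamma)

def y_tilde(i, degs_in_S, gamma, small_cutoff):
    """tilde{y}_i, summing only over the small-degree part of S (cut-off assumed)."""
    return sum(binom_pmf(d, gamma, i) for d in degs_in_S if d <= small_cutoff)

degs_in_S = [3] * 400 + [12] * 10         # toy degrees of the vertices in S
gamma = 0.35                              # toy value of d(S)/M
for i in range(6):
    print(i, n_i_dH(i, degs_in_S, gamma), round(y_tilde(i, degs_in_S, gamma, 8), 1))
```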
We can show concentration of \(Y_{i}\) around \(\tilde{y}_{i}\) for all \(i\leq J\). Firstly, Lemma 3.6 implies that
\[\mathbb{E}\left[Y_{i}\right]=\sum_{j\in S_{\text{small}}}\mathbb{P}\left(d_{S}(j )=i\right)=\tilde{y}_{i}\left(1+O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+ \frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\right).\]
We also know that if \(\mathbf{d}_{S}\) satisfies the concentration bound given in Theorem 2.3(a) then \(n_{i}(\mathbf{d}_{S})=Y_{i}\) for all \(i\leq\frac{1}{2}\gamma J\). Since this holds a.a.s., \(Y_{i}\) and \(n_{i}(\mathbf{d}_{S})\) are effectively interchangeable for small \(i\). It also follows that
\[n_{i}(\mathbf{d}_{H})=N(i)-N(i-1) =\left\lfloor\sum_{j\in S}\mathbb{P}\left(Z_{d(j)}\leq i\right)+\frac{1}{2}\right\rfloor-\left\lfloor\sum_{j\in S}\mathbb{P}\left(Z_{d(j)}\leq i-1\right)+\frac{1}{2}\right\rfloor\] \[=\sum_{j\in S}\mathbb{P}\left(Z_{d(j)}=i\right)\pm 1=\sum_{j\in S_{\text{small}}}\mathbb{P}\left(Z_{d(j)}=i\right)\pm(1+o(M^{-5})).\]
Thus, \(n_{i}(\mathbf{d}_{H})=\tilde{y}_{i}\pm 2\). We study \(Y_{i}\) rather than \(n_{i}(\mathbf{d}_{S})\) as the analysis is slightly neater. We get the following bound on the variance of \(Y_{i}\).
**Lemma 3.8**.: _For all \(i\leq J\), \(\operatorname{Var}\left(Y_{i}\right)\leq\mathbb{E}\left[Y_{i}\right]\left(1+O \left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}} \right)\right)\mathbb{E}\left[Y_{i}\right]\right)\)._
Proof.: Let \(V_{k}\) be the indicator variable for the event that vertex \(k\in S_{\text{small}}\) has induced degree \(i\). Then \(Y_{i}=\sum_{k\in S_{\text{small}}}V_{k}\). This implies that
\[\operatorname{Var}\left(Y_{i}\right)=\operatorname{Var}\left(\sum_{k\in S_{ \text{small}}}V_{k}\right)=\sum_{k\in S_{\text{small}}}\operatorname{Var} \left(V_{k}\right)+\sum_{j\neq k}\operatorname{Cov}\left(V_{j},V_{k}\right), \tag{3.9}\]
where the last sum is over all ordered pairs \((j,k)\in S_{\text{small}}^{2}\) where \(j\neq k\). Each \(V_{k}\) is a Bernoulli random variable, and thus \(\operatorname{Var}\left(V_{k}\right)=\mathbb{P}\left(d_{S}(k)=i\right)(1- \mathbb{P}\left(d_{S}(k)=i\right))\). So the first summation is bounded by
\[\sum_{k\in S_{\text{small}}}\operatorname{Var}\left(V_{k}\right)=\sum_{k\in S _{\text{small}}}\mathbb{P}\left(d_{S}(k)=i\right)(1-\mathbb{P}\left(d_{S}(k)= i\right))\leq\mathbb{E}\left[Y_{i}\right]. \tag{3.10}\]
Now consider the covariance summation in Equation (3.9). An application of Lemma 3.6 provides a bound on the covariance terms in the second summation when \(j,k\in S_{\text{small}}\):
\[\operatorname{Cov}\left(V_{j},V_{k}\right) =\mathbb{P}\left(d_{S}(j)=i,\ d_{S}(k)=i\right)-\mathbb{P} \left(d_{S}(j)=i\right)\mathbb{P}\left(d_{S}(k)=i\right)\] \[=\mathbb{P}\left(d_{S}(j)=i\right)\mathbb{P}\left(d_{S}(k)=i \right)O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2 }}\right)\right).\]
Thus, the summation in Equation (3.9) of the covariances over all ordered pairs \((j,k)\in S_{\text{small}}^{2}\) where \(j\neq k\) is equal to
\[\left(\sum_{j\in S_{\text{small}}}\sum_{k\in S_{\text{small}}} \mathbb{P}\left(d_{S}(j)=i\right)\mathbb{P}\left(d_{S}(k)=i\right)-\sum_{j\in S _{\text{small}}}\mathbb{P}\left(d_{S}(j)=i\right)^{2}\right)O\left(\bar{J} \left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right)\] \[=\left(\left(\sum_{k\in S_{\text{small}}}\mathbb{P}\left(d_{S}(k)= i\right)\right)^{2}-\sum_{j\in S_{\text{small}}}\mathbb{P}\left(d_{S}(j)=i \right)^{2}\right)O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2} M}{d(S)^{2}}\right)\right).\]
Noting that \(\sum_{j\in S_{\text{small}}}\mathbb{P}\left(d_{S}(j)=i\right)^{2}\in[0,\mathbb{ E}\left[Y_{i}\right]^{2}]\), we obtain
\[\sum_{j\neq k\in S_{\text{small}}}\operatorname{Cov}\left(V_{j},V_{k}\right) \leq\mathbb{E}\left[Y_{i}\right]^{2}\cdot O\left(\bar{J}\left(\frac{\Delta^{2}} {d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right). \tag{3.11}\]
Combining (3.10) and (3.11) gives an upper bound on the variance of \(Y_{i}\):
\[\operatorname{Var}\left(Y_{i}\right)\leq\mathbb{E}\left[Y_{i}\right]+O\left( \bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right) \right)\mathbb{E}\left[Y_{i}\right]^{2}.\]
The claim of the lemma immediately follows.
**Remark 3.9**.: _Recall that we assume \(\Delta^{2}(\gamma^{-1}\log M)^{12}\leq\delta\gamma M\) for some \(\delta\to 0\) slowly. Since we suppose that \(\delta=\omega(\log^{-1}M)\), this implies a bound on \(\frac{\Delta^{2}}{d(S)}+\frac{J^{2}M}{d(S)^{2}}\):_
\[\frac{\Delta^{2}}{d(S)} \leq\delta\frac{1}{\gamma^{-12}\log^{12}M}\leq\delta^{1/2}\gamma J ^{-8}\log^{-1}M,\] \[\frac{J^{2}M}{d(S)^{2}} =\frac{J^{2}M}{\gamma^{2}M^{2}}\leq\gamma J^{-8}o(\log^{-2}M)\leq \delta^{1/2}\gamma J^{-8}\log^{-1}M,\]
_and in particular, \(\frac{\Delta^{2}}{d(S)}+\frac{J^{2}M}{d(S)^{2}}=o(\gamma J^{-8}\log^{-1}M)\)._
**Lemma 3.10**.: _Suppose \(i\leq\frac{1}{2}\gamma J\). If \(\mathbb{E}\left[Y_{i}\right]\geq\gamma^{-1}J^{7}\sqrt{\log M}\), then_
\[\mathbb{P}\left(\left|Y_{i}-\mathbb{E}\left[Y_{i}\right]\right|\geq\frac{ \gamma\mathbb{E}\left[Y_{i}\right]}{2J^{3}}\right)=O\left(\frac{\gamma^{-1}J^ {-1}}{\sqrt{\log M}}\right),\]
_and if \(\mathbb{E}\left[Y_{i}\right]\leq\gamma^{-1}J^{7}\sqrt{\log M}\), then_
\[\mathbb{P}\left(\left|Y_{i}-\mathbb{E}\left[Y_{i}\right]\right|\geq J^{4} \sqrt{\log M}\right)=O\left(\frac{\gamma^{-1}J^{-1}}{\sqrt{\log M}}\right).\]
Proof.: First suppose that \(\mathbb{E}\left[Y_{i}\right]\geq\gamma^{-1}J^{7}\sqrt{\log M}\). Applying Chebyshev's inequality with \(t=\alpha\mathbb{E}\left[Y_{i}\right]\) gives
\[\mathbb{P}\left(\left|Y_{i}-\mathbb{E}\left[Y_{i}\right]\right|\geq\alpha \mathbb{E}\left[Y_{i}\right]\right)\leq\frac{\operatorname{Var}\left(Y_{i} \right)}{\alpha^{2}\mathbb{E}\left[Y_{i}\right]^{2}}\leq\frac{1}{\alpha^{2} \mathbb{E}\left[Y_{i}\right]}+\frac{1}{\alpha^{2}}O\left(J\left(\frac{\Delta ^{2}}{d(S)}+\frac{\bar{J}^{2}M}{d(S)^{2}}\right)\right).\]
Now let \(\alpha=\frac{1}{2}\gamma J^{-3}\). Then since \(\bar{J}\leq J\),
\[\frac{\operatorname{Var}\left(Y_{i}\right)}{\alpha^{2}\mathbb{E}\left[Y_{i} \right]^{2}}\leq\frac{4J^{6}}{\gamma^{2}\mathbb{E}\left[Y_{i}\right]}+4 \gamma^{-2}J^{7}O\left(\frac{\Delta^{2}}{d(S)}+\frac{J^{2}M}{d(S)^{2}}\right) \leq 8(\gamma J\sqrt{\log M})^{-1},\]
since Remark 3.9 states that \(\frac{\Delta^{2}}{d(S)}+\frac{J^{2}M}{d(S)^{2}}=o(\gamma J^{-8}\log^{-1}M)\). This proves the first part of the lemma. Now suppose that \(\mathbb{E}\left[Y_{i}\right]\leq\gamma^{-1}J^{7}\sqrt{\log M}\). Then it follows that
\[\operatorname{Var}\left(Y_{i}\right)=\mathbb{E}\left[Y_{i}\right]+\mathbb{E} \left[Y_{i}\right]^{2}O\left(\bar{J}\left(\frac{\Delta^{2}}{d(S)}+\frac{\bar{J }^{2}M}{d(S)^{2}}\right)\right)\leq\gamma^{-1}J^{7}\sqrt{\log M}(1+o(1)).\]
Then applying Chebyshev's inequality with \(t:=J^{4}\sqrt{\log M}\) gives that
\[\mathbb{P}\left(\left|Y_{i}-\mathbb{E}\left[Y_{i}\right]\right|\geq t\right) \leq\frac{\gamma^{-1}J^{7}\sqrt{\log M}(1+o(1))}{J^{8}\log M}\leq 2(\gamma J \sqrt{\log M})^{-1}.\]
This completes the proof.
We note here that the above proof is the only point in the paper that we use the assumption that \(\delta=\Omega((\log\log M)^{-1})\). If the bound on \(\frac{J^{2}M}{d(S)^{2}}\) in Remark 3.9 is replaced with \(\delta^{1/4}\gamma J^{-8}\log^{-1}M\), then the bound on \(\delta\) can be removed entirely. This could be useful if slightly different error bounds on \(\boldsymbol{d}_{S}\) were required, trading precision in the frequencies of the smaller entries for greater precision in the values of the larger entries.
Now we prove Theorem 2.3(c).
**Corollary 3.11**.: _With probability \(1-o(1)\),_
\[|n_{i}(\mathbf{d}_{S})-n_{i}(\mathbf{d}_{H})|\leq\frac{\gamma n_{i}(\mathbf{d}_{H})}{J^{3}}+ \gamma J^{5}\]
_for all \(i\leq\frac{1}{2}\gamma J\)._
Proof.: First we bound \(\mathbb{E}\left[Y_{i}\right]\) in terms of \(n_{i}(\mathbf{d}_{H})\) using Lemma 3.6 and Remark 3.9 bounding the error term. This implies that for each \(i\leq J\),
\[\mathbb{E}\left[Y_{i}\right]=\sum_{j\in S_{\mathrm{small}}}\mathbb{P}\left(d_ {S}(j)=i\right)=\sum_{j\in S_{\mathrm{small}}}\mathbb{P}\left(Z_{d(j)}=i \right)(1+o(\gamma J^{-8}))=(n_{i}(\mathbf{d}_{H})\pm 2)(1+o(\gamma J^{-8})),\]
since \(n_{i}(\mathbf{d}_{H})=\tilde{y}_{i}\pm 2\) for all such \(i\). We also know that, with probability \(1-o(1)\), \(Y_{i}=n_{i}(\mathbf{d}_{S})\) for all \(i\leq\frac{1}{2}\gamma J\). Then we apply Lemma 3.10 and the triangle inequality to say that a.a.s. for all \(i\leq\frac{1}{2}\gamma J\),
\[|n_{i}(\mathbf{d}_{S})-n_{i}(\mathbf{d}_{H})|\leq|Y_{i}-\tilde{y}_{i}|+2 \leq|Y_{i}-\mathbb{E}\left[Y_{i}\right]|+|\mathbb{E}\left[Y_{i} \right]-\tilde{y}_{i}|+2\] \[\leq\frac{\gamma\mathbb{E}\left[Y_{i}\right]}{2J^{3}}+J^{4}\sqrt {\log M}+o(\tilde{y}_{i}\gamma J^{-8})+2\] \[=\frac{\gamma\tilde{y}_{i}(1+o(1))}{2J^{3}}+J^{4}\sqrt{\log M}+1 +o(\tilde{y}_{i}\gamma J^{-8})+2\] \[\leq\frac{\gamma n_{i}(\mathbf{d}_{H})}{J^{3}}+\gamma J^{5},\]
since \(J=\delta^{-1/16}\gamma^{-1}\log M\). This completes the proof.
The factor of \(\gamma\) in the "multiplicative" error term (the \(\gamma n_{i}(\mathbf{d}_{H})/J^{3}\) term) is particularly useful because it implies that, for \(k\leq\frac{1}{2}\gamma J\), a.a.s.
\[\left|\sum_{i=0}^{k}n_{i}(\mathbf{d}_{H})-n_{i}(\mathbf{d}_{S})\right|\leq\sum_{i=0}^ {k}\left(\frac{\gamma n_{i}(\mathbf{d}_{H})}{J^{3}}+\gamma J^{5}\right)\leq\gamma ^{2}J^{6}+\frac{\gamma|S|}{J^{3}}.\]
Since \(|S|\leq d(S)=\gamma M\) and \(M_{H}\sim\gamma^{2}M\), this sum is \(O(M_{H}/J^{3})\), a bound that is used in several later proofs. The bounds given in Lemma 3.10 are not optimal in either the multiplicative or the additive error; a similar proof with a different threshold for \(\mathbb{E}\left[Y_{i}\right]\) would give better error margins in one direction at the cost of the other, but this choice strikes a balance that is sufficient for our purposes. With this result, we can now prove Theorem 2.3.
Proof of Theorem 2.3.: Corollary 3.11 proves part (c), and part (b) follows from Lemma 3.7. Now it remains to prove part (a). We show in Lemma 3.4 that a.a.s. \(d_{S}(k)=\gamma d(i_{k})\left(1\pm 8\delta^{1/64}\right)\) uniformly for all \(k\) such that \(d(i_{k})\geq\delta^{1/32}J\). Now we prove claim (a) by showing that deterministically \(d_{H}(k)=\gamma d(i_{k})\left(1\pm 3\delta^{1/64}\right)\). For each \(i\in S\) such that \(d(i)\geq\delta^{1/32}J\), define
\[d(i)^{-}=\gamma d(i)\left(1-3\delta^{1/64}\right)\quad\text{and}\quad d(i)^{+}=\gamma d(i)\left(1+3\delta^{1/64}\right).\]
The Chernoff bound given in (3.7) implies that, for every \(j\) with \(d(j)\geq\delta^{1/32}J\), we have \(\mathbb{P}\left(|Z_{d(j)}-\gamma d(j)|\geq 3\delta^{1/64}\gamma d(j)\right)\leq 2\exp\left(-3\delta^{1/32}\gamma d(j)\right)\leq 2M^{-3}\). Thus,
\[\mathbb{P}\left(Z_{d(i)}\leq d(i)^{+}\right)-\mathbb{P}\left(Z_{d(i)}<d(i)^{- }\right)=1-o(M^{-2}). \tag{3.12}\]
We use Equation (3.12) to show that
\[N\left(d(i_{k})^{-}-1\right)\leq k-1\quad\text{and}\quad N\left(d(i_{k})^{+} \right)\geq k \tag{3.13}\]
for all \(k\geq\ell\). Together these statements imply that for all \(i_{k}\) such that \(d(i_{k})\geq\delta^{1/32}J\), the \(k^{\text{th}}\) entry of \(\mathbf{d}_{H}\) is between \(d(i_{k})^{-}\) and \(d(i_{k})^{+}\). To prove these claims, recall that the degree sequence \(\mathbf{d}\) is ordered in non-decreasing order, so \(d(i_{j})\leq d(i_{k})\) for all \(j\leq k\) and \(d(i_{j})\geq d(i_{k})\) for all \(j\geq k\). This means that, by Equation (3.12),
\[\sum_{j\geq k}\mathbb{P}\left(Z_{d(i_{j})}<d(i_{k})^{-}\right)=o(M^{-1}).\]
This in turn implies that
\[N\left(d(i_{k})^{-}-1\right) =\left\lfloor\sum_{i\in S}\mathbb{P}\left(Z_{d(i)}<d(i_{k})^{-} \right)+\frac{1}{2}\right\rfloor\] \[=\left\lfloor\sum_{j<k}\mathbb{P}\left(Z_{d(i_{j})}<d(i_{k})^{-} \right)+\sum_{j\geq k}\mathbb{P}\left(Z_{d(i_{j})}<d(i_{k})^{-}\right)+\frac{1 }{2}\right\rfloor\] \[\leq\left\lfloor k-1+o(M^{-1})+\frac{1}{2}\right\rfloor\] \[=k-1.\]
This proves the first statement in Equation (3.13). An analogous idea proves the second half:
\[N\left(d(i_{k})^{+}\right) =\left\lfloor\sum_{i\in S}\mathbb{P}\left(Z_{d(i)}\leq d(i_{k})^{ +}\right)+\frac{1}{2}\right\rfloor\] \[=\left\lfloor\sum_{j\leq k}\mathbb{P}\left(Z_{d(i_{j})}\leq d(i_ {k})^{+}\right)+\sum_{j>k}\mathbb{P}\left(Z_{d(i_{j})}\leq d(i_{k})^{+}\right) +\frac{1}{2}\right\rfloor\] \[\geq\left\lfloor k(1-o(M^{-1}))+\frac{1}{2}\right\rfloor\] \[=k.\]
The combination of these mean that the \(k^{\text{th}}\) entry of \(\mathbf{d}_{H}\) has a value between \(d(i_{k})^{-}\) and \(d(i_{k})^{+}\), which proves the part (a) of the lemma. This completes the proof.
## 4 Basic properties of the induced degree sequence
In this section we use Theorem 2.3 to prove some basic properties of the sequences \(\mathbf{d}_{S}\) and \(\mathbf{d}_{H}\), such as the sum of their entries (or squares of the entries) and the number of non-zero entries in each sequence. These results are useful for proving a wide variety of properties of \(G[S]\), including the threshold for the giant component. Throughout this section we implicitly suppose that \(\mathbf{d}\) is a sequence of length \(n\) with even sum and all entries at least \(1\) and that \(S\subset[n]\) such that \((\mathbf{d},S)\) satisfy (2.1).
### Total degree
For brevity, we define \(M_{H}:=M(\mathbf{d}_{H})\) and \(M_{S}:=M(\mathbf{d}_{S})\). For an arbitrary sequence of non-negative integers \(\mathbf{d}\), define \(M_{2}(\mathbf{d})=\sum_{i=1}^{n(\mathbf{d})}d(i)^{2}\).
**Lemma 4.1**.: \(M(\mathbf{d}_{H})\sim\gamma^{2}M\) _always. Furthermore, a.a.s. \(M(\mathbf{d}_{S})\sim M(\mathbf{d}_{H})\) and \(M_{2}(\mathbf{d}_{S})-M_{2}(\mathbf{d}_{H})=o(M_{H})\)._
Proof.: Recall \(M(\mathbf{d}_{H})=\sum_{i\leq\Delta}in_{i}(\mathbf{d}_{H})\) and that \(n_{i}(\mathbf{d}_{H})=N(i)-N(i-1)\). Then
\[M(\mathbf{d}_{H}) =\sum_{i\leq\Delta}i(N(i)-N(i-1))=\sum_{i\leq\Delta}i\left(\sum_{ j\in S}\mathbb{P}\left(Z_{d(j)}=i\right)\pm 1\right)\] \[=\sum_{j\in S}\mathbb{E}\left[Z_{d(j)}\right]\pm\Delta^{2}\] \[=\gamma^{2}M\pm\Delta^{2}.\]
Since \(\Delta^{2}\gamma^{-12}\log^{12}M\leq\delta\gamma M\) by assumption, it follows that \(\Delta^{2}=o(\gamma^{2}M)\). This proves the first claim of the lemma. Now we focus on the second claim. We give the proof that a.a.s. \(M_{2}(\mathbf{d}_{S})\sim M_{2}(\mathbf{d}_{H})\) and it follows by an identical proof that a.a.s. \(M_{S}\sim M_{H}\), since \(d_{H}(i),d_{S}(i)\geq 0\). Let \(k\in\mathbb{N}\) be the smallest index such that \(d_{H}(i)>2\delta^{1/32}\gamma J\) for all \(i\geq k\). Similarly, let \(k^{\prime}\) be the analogous quantity for \(\mathbf{d}_{S}\). Then
\[M_{2}(\mathbf{d}_{H})=\sum_{i=1}^{\frac{1}{2}\gamma J}i^{2}n_{i}(\mathbf{d}_{H})+\sum_ {i=k+1}^{s}d_{H}(i)^{2}.\]
Now we apply Theorem 2.3. Note that \(2\delta^{1/32}\gamma J=o(\gamma J)\). Without loss of generality, suppose that \(k\leq k^{\prime}\). Then a.a.s.
\[M_{2}(\mathbf{d}_{H}) =\sum_{i=1}^{2\delta^{1/32}\gamma J}i^{2}n_{i}(\mathbf{d}_{H})+\sum_{ i=k+1}^{s}d_{H}(i)^{2}\] \[=\sum_{i=1}^{2\delta^{1/32}\gamma J}i^{2}n_{i}(\mathbf{d}_{S})+O \left(\frac{\gamma M_{H}}{J}\right)+o(\gamma J^{6})+\sum_{i=k+1}^{s}d_{S}(i)^{ 2}(1+o(1))\] \[=\sum_{i=1}^{2\delta^{1/32}\gamma J}i^{2}n_{i}(\mathbf{d}_{S})+\sum_ {i=k+1}^{s}d_{S}(i)^{2}(1+o(1))+o(M_{H})\] \[=M_{2}(\mathbf{d}_{S})+\sum_{i=k+1}^{k^{\prime}}d_{S}(i)^{2}(1+o(1))+ o(M_{H}).\]
Theorem 2.3(c) implies that a.a.s. \(|k-k^{\prime}|\leq\frac{\gamma|S|}{J^{3}}+\delta^{1/32}\gamma J^{6}\), and \(d_{S}(i)\leq 2\delta^{1/32}\gamma J\) for all \(i\leq k^{\prime}\) by definition. Therefore, a.a.s.
\[\sum_{i=k+1}^{k^{\prime}}d_{S}(i)^{2}\leq(2\delta^{1/32}\gamma J)^{2}\left( \frac{\gamma|S|}{J^{3}}+\delta^{1/32}\gamma J^{6}\right)=o(M_{H}),\]
since \(\gamma^{3}J^{8}=o(\gamma^{2}M/J)\). Therefore, a.a.s. \(M_{2}(\mathbf{d}_{H})=M_{2}(\mathbf{d}_{S})(1+o(1))\), which completes the proof.
### Concentration of number of non-isolated vertices in \(\mathbf{G[S]}\)
For a sequence \(\mathbf{d}\) of non-negative integers, let \(\mathbf{d}^{*}\) be the maximal subsequence of \(\mathbf{d}\) with all positive entries (that is, all entries equal to zero removed).
**Lemma 4.2**.: \(n(\mathbf{d}^{*}_{H})=\Omega(\gamma|S|)\) _always, and a.a.s. \(n(\mathbf{d}^{*}_{S})\sim n(\mathbf{d}^{*}_{H})\)._
Proof.: For the first result, recall that \(n_{k}(\mathbf{d}_{H})=\tilde{y}_{k}\pm 2\) for all \(k\leq\frac{1}{2}\gamma J\). Then
\[\tilde{y}_{1}=\sum_{i\in S_{\mathrm{small}}}\mathbb{P}\left(Z_{d(i)}=1\right)= \frac{\gamma}{1-\gamma}\sum_{i\in S_{\mathrm{small}}}d(i)\mathbb{P}\left(Z_{d(i )}=0\right).\]
Since \(\mathbf{d}\) has a minimum entry of at least 1, this implies that \(\tilde{y}_{1}=\Omega(\gamma\tilde{y}_{0})\), and the first claim follows since \(\tilde{y}_{0}\leq|S|\). For the second claim, we apply the first part of this lemma as well as Theorem 2.3(c). We know that a.a.s. \(|n_{0}(\mathbf{d}_{H})-n_{0}(\mathbf{d}_{S})|\leq\frac{\gamma n_{0}(\mathbf{d}_{H})}{J^{3} }+J^{5}\). Thus, a.a.s.
\[|n(\mathbf{d}_{S}^{*})-n(\mathbf{d}_{H}^{*})|=|n_{0}(\mathbf{d}_{S})-n_{0}(\mathbf{d}_{H})| \leq\frac{\gamma n_{0}(\mathbf{d}_{H})}{J^{3}}+\gamma J^{5}.\]
By the definition of \(J\) and our assumptions on \(\mathbf{d}\) and \(S\), it follows that \(J^{5}=o(\gamma|S|)\) (since \(|S|\geq d(S)/\Delta\)). We also know that \(\frac{\gamma n_{0}(\mathbf{d}_{H})}{J^{3}}=o(\gamma|S|)\), since \(n_{0}(\mathbf{d}_{H})\leq n(\mathbf{d}_{H})=|S|\). The first claim of this lemma states that \(n(\mathbf{d}_{H}^{*})=\Omega(\gamma|S|)\). Therefore, a.a.s. \(|n(\mathbf{d}_{S}^{*})-n(\mathbf{d}_{H}^{*})|=o(n(\mathbf{d}_{H}^{*}))\), which proves the second claim.
## 5 Giant components in \(\mathbf{G[S]}\)
In this section we prove Theorem 2.4. This is done by applying the following theorem of Joos, Perarnau, Rautenbach, and Reed to the characterisation of the degree sequence of the induced subgraph given in Theorem 2.3.
**Theorem 5.1**.: _([6], Theorems 1 and 6) Let \(\mathbf{d}=(d(1),\ldots,d(n))\) with \(d(1)\leq d(2)\leq\cdots\leq d(n)\). Define the following quantities:_
\[j_{\mathbf{d}}=\min\left(\left\{j:j\in[n]\;\;\text{and}\;\sum_{i=1}^ {j}d(i)(d(i)-2)>0\right\}\cup\{n\}\right),\] \[R(\mathbf{d})=\sum_{i=j_{\mathbf{d}}}^{n}d(i),\] \[\widehat{M}(\mathbf{d})=\sum_{i\in[n],d(i)\neq 2}d(i).\]
_Call a degree sequence well-behaved if \(\widehat{M}(\mathbf{d})\) is at least \(\lambda(n)\) for any function \(\lambda:\mathbb{N}\to\mathbb{N}\) where \(\lambda\to\infty\) as \(n\to\infty\). Then:_
1. _For every function_ \(\delta\to 0\) _as_ \(n\to\infty\)_, for every_ \(\gamma>0\)_, if_ \(\mathbf{d}\) _is a well-behaved graphical sequence with_ \(R(\mathbf{d})\leq\delta(n)\widehat{M}(\mathbf{d})\)_, then the probability that_ \(G(\mathbf{d})\) _has a component of order at least_ \(\gamma n\) _is_ \(o(1)\)_._
2. _For every positive constant_ \(\varepsilon\)_, there is a_ \(\gamma>0\) _such that if_ \(\mathbf{d}\) _is a well-behaved graphical sequence with_ \(R(\mathbf{d})\geq\varepsilon\widehat{M}(\mathbf{d})\)_, then the probability that_ \(G(\mathbf{d})\) _has a component of order at least_ \(\gamma n\) _and a component of size at least_ \(\alpha\widehat{M}(\mathbf{d})\) _is_ \(1-o(1)\)_._
3. _For every_ \(b\geq 0\) _and every_ \(0<\gamma<\frac{1}{8}\)_, there exist a positive integer_ \(n_{b,\gamma}\) _and a_ \(0<\delta<1\) _such that if_ \(n>n_{b,\gamma}\) _and_ \(\mathbf{d}\) _is a degree sequence with_ \(\widehat{M}(\mathbf{d})\leq b\)_, then the probability that there is a component of order at least_ \(\gamma n\) _in_ \(G(\mathbf{d})\) _lies between_ \(\delta\) _and_ \(1-\delta\)_._
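For a concrete non-decreasing sequence, the quantities \(j_{\boldsymbol{d}}\), \(R(\boldsymbol{d})\) and \(\widehat{M}(\boldsymbol{d})\) defined in Theorem 5.1 can be computed directly. A small sketch, with arbitrary example sequences:

```python
def jpr_quantities(d):
    """Return j_d, R(d) and M-hat(d) from Theorem 5.1 for a non-decreasing sequence d."""
    n = len(d)
    j, running = n, 0
    for idx, di in enumerate(d, start=1):
        running += di * (di - 2)
        if running > 0:
            j = idx
            break
    R = sum(d[j - 1:])                    # entries j, ..., n in 1-indexed notation
    M_hat = sum(di for di in d if di != 2)
    return j, R, M_hat

print(jpr_quantities([1] * 80 + [2] * 10 + [3] * 10))   # R(d) much smaller than M-hat(d)
print(jpr_quantities([3] * 100))                        # R(d) = M-hat(d): giant component regime
```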
We prove Theorem 2.4 by showing that if \(\mathbf{d}_{S}\) satisfies the bounds given in Theorem 2.3 (equivalently, if \(\mathbf{d}_{S}\in\mathcal{D}_{S}^{\prime}\), defined below), then the values of \(R(\cdot)\) and \(\widehat{M}(\cdot)\) for \(\mathbf{d}_{S}\) and \(\mathbf{d}_{H}\) are close. Lemma 4.2 states that for each such sequence \(\mathbf{d}_{S}\), the number of non-zero entries is close to the number of non-zero entries in \(\mathbf{d}_{H}\). With all of these results, the proof follows by applying Theorem 5.1 and the law of total probability.
**Lemma 5.2**.: \(\widehat{M}(\mathbf{d}_{H})=\Theta(M_{H})\)_, and a.a.s. \(\widehat{M}(\mathbf{d}_{H})-\widehat{M}(\mathbf{d}_{S})=o(M_{H})\)._
Proof.: The first claim follows unless \(M_{H}\sim 2n_{2}(\mathbf{d}_{H})\); we prove that the definition of \(\mathbf{d}_{H}\) means that this does not occur. To do this, we show that \(\mathbb{P}\left(Z_{d(i)}=2\right)\leq C\left(\mathbb{P}\left(Z_{d(i)}=1\right)+ \mathbb{P}\left(Z_{d(i)}=3\right)\right)\) for some constant \(C>0\). Firstly, if \(d(i)=1\), then this is trivially true. Next, if \(d(i)=2\), then there exists a constant \(C>0\) such that
\[\mathbb{P}\left(Z_{d(i)}=2\right)=\gamma^{2}\leq C\cdot 2\gamma(1-\gamma)=C\,\mathbb{P}\left(Z_{d(i)}=1\right),\] using that \(\gamma\) is bounded away from \(1\).
Finally, if \(d(i)\geq 3\), then
\[\frac{\mathbb{P}\left(Z_{d(i)}=2\right)}{\mathbb{P}\left(Z_{d(i)}=1\right)+ \mathbb{P}\left(Z_{d(i)}=3\right)}=\frac{\frac{1}{2}(d(i)-1)\gamma(1-\gamma)} {(1-\gamma)^{2}+\frac{1}{6}(d(i)-1)(d(i)-2)\gamma^{2}}.\]
The expression on the right hand side is decreasing in \(d(i)\) if \(\gamma d(i)\geq 6\), and is at most \(\frac{3}{1-\gamma}\) if \(\gamma d(i)\leq 6\). Thus, define \(C_{\gamma}=\frac{3}{1-\gamma}\). By our assumptions on \(\gamma\), we know that \(C_{\gamma}=O(1)\) and
\[\mathbb{P}\left(Z_{d(i)}=2\right)\leq C_{\gamma}\left(\mathbb{P}\left(Z_{d(i) }=1\right)+\mathbb{P}\left(Z_{d(i)}=3\right)\right).\]
Since \(n_{i}(\mathbf{d}_{H})=\sum_{j\in S}\mathbb{P}\left(Z_{d(j)}=i\right)\pm 2\), this implies that
\[\widehat{M}(\mathbf{d}_{H})\geq n_{1}(\mathbf{d}_{H})+3n_{3}(\mathbf{d}_{H})\geq\sum_{i \in S}\left(\mathbb{P}\left(Z_{d(i)}=1\right)+3\mathbb{P}\left(Z_{d(i)}=3 \right)\right)-4\geq C_{\gamma}^{-1}n_{2}(\mathbf{d}_{H})+O(C_{\gamma}^{-1}).\]
This implies that \(\widehat{M}(\mathbf{d}_{H})=\Omega(n_{2}(\mathbf{d}_{H}))\) and thus \(M_{H}\geq(2+\varepsilon)n_{2}(\mathbf{d}_{H})\) for some \(\varepsilon>0\). Therefore, \(\widehat{M}(\mathbf{d}_{H})=\Theta(M_{H})\), which proves the first claim of the lemma.
For the second claim, Theorem 2.3 implies that a.a.s. \(|n_{2}(\mathbf{d}_{H})-n_{2}(\mathbf{d}_{S})|\leq\frac{\gamma n_{2}(\mathbf{d}_{H})}{J^{3} }+J^{5}\). Since \(n_{2}(\mathbf{d}_{H})\leq\frac{1}{2}M_{H}\) and \(J^{5}=o(\gamma^{2}M)\), this implies that a.a.s. \(2n_{2}(\mathbf{d}_{S})=2n_{2}(\mathbf{d}_{H})+o(M_{H})\). Recall from Lemma 4.1 that a.a.s. \(M_{S}\sim M_{H}\). Therefore, a.a.s.
\[\widehat{M}(\mathbf{d}_{S})-\widehat{M}(\mathbf{d}_{H})=M_{S}-M_{H}-2n_{2}(\mathbf{d}_{S}) +2n_{2}(\mathbf{d}_{H})=o(M_{H}).\]
This completes the proof.
**Lemma 5.3**.: _With probability \(1-o(1)\), it holds that \(\sum_{i=1}^{k}\left(d_{S}(i)-d_{H}(i)\right)=o(M_{H})\) for every \(k\leq s\) and \(\sum_{i=1}^{k}\left(d_{S}(i)^{2}-d_{H}(i)^{2}\right)=o(M_{H})\) for every \(k\leq s\)._
Proof.: We provide the proof of the second claim, as the first claim follows immediately from this and the fact that \(d_{S}(i),d_{H}(i)\geq 0\) for all \(i\leq s\). We show that deterministically the result holds when \(\mathbf{d}_{S}\) satisfies the bounds given in Theorem 2.3. Since these concentration results hold with probability \(1-o(1)\), this is sufficient to prove the claim. First consider the case where \(k\) is such that \(d(i_{k})>\delta^{1/32}J\). In this case, we know that a.a.s. \(d_{S}(j)\sim d_{H}(j)\) for all \(j\geq k\), and thus by Lemma 4.1 it follows that a.a.s.
\[\sum_{i=1}^{k}d_{S}(i)^{2} =M_{2}(\mathbf{d}_{S})-\sum_{i=k+1}^{s}d_{S}(i)^{2}=M_{2}(\mathbf{d}_{H}) +o(M_{H})-\sum_{i=k+1}^{s}d_{H}(i)^{2}(1+o(1))\] \[=M_{2}(\mathbf{d}_{H})-\sum_{i=k+1}^{s}d_{H}(i)^{2}+o(M_{H})=\sum_{i =1}^{k}d_{H}(i)^{2}+o(M_{H}).\]
Thus, it only remains to consider the case where \(k\) is such that \(d(i_{k})\leq\delta^{1/32}J\). Since we assume \(\mathbf{d}_{S}\) satisfies the bounds given in Theorem 2.3, it follows that \(d_{S}(i),d_{H}(i)\leq\delta^{1/32}J\) for all \(i\leq k\). Consider that if \(d_{S}(i)=d_{H}(i)\)
then the contribution of this term is \(0\). Thus,
\[\sum_{i=1}^{k}\left(d_{S}(i)^{2}-d_{H}(i)^{2}\right)=\sum_{i\leq k:d_{S}(i)\neq d_{ H}(i)}\left(d_{S}(i)^{2}-d_{H}(i)^{2}\right).\]
The difference \(d_{S}(j)^{2}-d_{H}(j)^{2}\) is bounded by \(\delta^{1/16}J^{2}\) for all \(j\leq k\) by assumption. It remains to bound the size of \(\{i\leq k:d_{S}(i)\neq d_{H}(i)\}\). Note that for all \(j\leq\delta^{1/32}J\), the difference between the index of the first entry equal to \(j\) in \(\mathbf{d}_{S}\) and the first entry equal to \(j\) in \(\mathbf{d}_{H}\) is at most
\[\sum_{i=0}^{j-1}\left(n_{i}(\mathbf{d}_{S})-n_{i}(\mathbf{d}_{H})\right).\]
Since \(d_{S}(i),d_{H}(i)\leq\delta^{1/32}J\), Theorem 2.3(c) implies that
\[|\{i\leq k:d_{S}(i)\neq d_{H}(i)\}| \leq\sum_{x=1}^{\delta^{1/32}J}\sum_{i=0}^{x-1}\left(\frac{\gamma n_{i}(\mathbf{d}_{H})}{J^{3}}+\gamma J^{5}\right)\] \[\leq\delta^{1/16}\gamma J^{7}+\delta^{1/32}\frac{\gamma|S|}{J^{2}}.\]
Since \(|S|\leq d(S)=\gamma M\) and \(M_{H}\sim\gamma^{2}M\), this implies that
\[\sum_{i\leq k:d_{S}(i)\neq d_{H}(i)}\left(d_{S}(i)^{2}-d_{H}(i)^{2}\right)\leq (\delta^{1/32}J)^{2}\left(\delta^{1/16}\gamma J^{7}+\delta^{1/32}\frac{\gamma| S|}{J^{2}}\right)\leq\delta^{1/8}\gamma J^{9}+\delta^{1/16}M_{H}.\]
Since \(J^{9}=o(\gamma^{2}M/J)\) and \(\delta\to 0\), this completes the proof.
**Lemma 5.4**.: _With probability \(1-o(1)\), \(R(\mathbf{d}_{S})-R(\mathbf{d}_{H})=o(M_{H})\)._
Proof.: Again, we show that the claim holds deterministically when \(\mathbf{d}_{S}\) satisfies the bounds given in Theorem 2.3. Since these concentration results hold with probability \(1-o(1)\), this is sufficient to prove the claim. Define \(j_{H}:=j(\mathbf{d}_{H})\) and \(j_{S}:=j(\mathbf{d}_{S})\). Without loss of generality, we assume that \(j_{H}\leq j_{S}\), as the proof is symmetric in the converse case. We also assume that \(j_{H}<n\), otherwise we know that \(R(\mathbf{d}_{S}),\ R(\mathbf{d}_{H})\leq\Delta\) and the result is immediate. These assumptions imply that \(\sum_{i=1}^{j_{H}}d_{H}(i)(d_{H}(i)-2)>0\). Thus, Lemma 5.3 implies that
\[\sum_{i=1}^{j_{H}}d_{S}(i)(d_{S}(i)-2)\geq-\alpha M_{H}\]
for some \(\alpha\to 0\). We can choose \(\alpha\) such that \(\alpha=\omega(\delta^{1/16})\). We also know that the number of entries in each sequence with value \(0\), \(1\), or \(2\) differs by at most \(3J^{5}+\sum_{i=0}^{2}\frac{\gamma n_{i}(\mathbf{d}_{H})}{J^{3}}\). We also know that \(d_{H}(j_{H})\geq 3\), as otherwise \(j_{H}=n\) by definition. Therefore, we know that there are at most \(3J^{5}+\sum_{i=0}^{2}\frac{\gamma n_{i}(\mathbf{d}_{H})}{J^{3}}\) entries in \(\mathbf{d}_{S}\) with value at most \(2\) and index at least \(j_{H}\). Choose \(j^{*}\) to be the smallest integer such that \(\sum_{i=j_{H}}^{j^{*}}d_{S}(i)\geq\alpha^{1/2}M_{H}\). Then
\[\sum_{i=1}^{j^{*}}d_{S}(i)(d_{S}(i)-2) \geq-\left(\alpha M_{H}+3J^{5}+\sum_{i=0}^{2}\frac{\gamma n_{i}( \mathbf{d}_{H})}{J^{3}}\right)+\alpha^{1/2}M_{H}\] \[\geq\frac{1}{2}\alpha^{1/2}M_{H}.\]
Thus, \(j_{S}\leq j^{*}\). Therefore, \(R(\mathbf{d}_{S})\geq\sum_{i=j^{*}}^{s}d_{S}(i)\). Then Lemmas 4.1 and 5.3 imply that
\[\sum_{i=j_{S}}^{s}d_{S}(i) =M_{S}-\sum_{i=1}^{j_{S}-1}d_{S}(i)=M_{S}-\sum_{i=1}^{j_{H}-1}d_{S} (i)-\sum_{i=j_{H}}^{j_{S}-1}d_{S}(i)\] \[=M_{H}-\sum_{i=1}^{j_{H}-1}d_{H}(i)-\sum_{i=j_{H}}^{j_{S}-1}d_{S}( i)+o(M_{H})\] \[=R(\mathbf{d}_{H})-o(M_{H}),\]
since \(j_{S}\leq j^{*}\) and \(\sum_{i=j_{H}}^{j^{*}}d_{S}(i)\geq\alpha^{1/2}M_{H}\) by definition. This completes the proof.
Now we prove Theorem 2.4. Let \(\mathcal{D}_{S}\) be the set of all possible degree sequences of \(G[S]\), and let \(\mathcal{D}_{S}^{\prime}\subset\mathcal{D}_{S}\) be the set containing all the sequences that satisfy Theorem 2.3. Implicitly, \(\mathcal{D}_{S}^{\prime}\) depends on the choice of \(\delta\). For convenience and brevity we omit this dependence.
Proof of Theorem 2.4.: Lemma 4.2 implies that \(n(\mathbf{d}_{S}^{*})\), the number of non-isolated vertices in \(G[S]\), is a.a.s. \(n(\mathbf{d}_{H}^{*})(1+o(1))=(|S|-n_{0}(\mathbf{d}_{H}))(1+o(1))\). Lemmas 4.1, 5.2, and 5.4 imply that for all \(\mathbf{d}_{S}\in\mathcal{D}_{S}^{\prime}\), we have that
\[R(\mathbf{d}_{S})-R(\mathbf{d}_{H})=o(\widehat{M}(\mathbf{d}_{H})),\quad\widehat{M}(\mathbf{d} _{S})\sim\widehat{M}(\mathbf{d}_{H}).\]
Since \(\widehat{M}(\mathbf{d}_{H})=\Theta(M(\mathbf{d}_{H}))\), this implies that \(\mathbf{d}_{H}\) is well-behaved, and thus \(\mathbf{d}_{S}\) is well-behaved for all \(\mathbf{d}_{S}\in\mathcal{D}_{S}^{\prime}\). Now suppose that \(R(\mathbf{d}_{H})\geq\varepsilon\widehat{M}(\mathbf{d}_{H})\) for some \(\varepsilon>0\). Then it immediately follows that \(R(\mathbf{d}_{S})\geq\frac{1}{2}\varepsilon\widehat{M}(\mathbf{d}_{S})\) for all \(\mathbf{d}_{S}\in\mathcal{D}_{S}^{\prime}\). Then Theorem 5.1(b) implies that there exists some \(\gamma:=\gamma\left(\frac{1}{2}\varepsilon\right)>0\) such that the probability that \(G[S]\) contains a component with at least \(\gamma n(\mathbf{d}_{S}^{*})\) vertices is \(1-o(1)\). Let \(A\) be the event that \(G[S]\) contains a component with at least \(\frac{1}{2}\gamma(|S|-n_{0}(\mathbf{d}_{H}))\) vertices. Then the law of total probability gives that
\[\mathbb{P}\left(A\right)=\sum_{\mathbf{d}_{S}\in\mathcal{D}_{S}}\mathbb{P}\left( \left.A\right|\mathbf{d}_{S}\right)\mathbb{P}\left(\mathbf{d}_{S}\right)=\sum_{\mathbf{d} _{S}\in\mathcal{D}_{S}^{\prime}}\mathbb{P}\left(\left.A\right|\mathbf{d}_{S} \right)\mathbb{P}\left(\mathbf{d}_{S}\right)+o(1)=1-o(1).\]
The argument for the converse case is very similar. Suppose that \(R(\mathbf{d}_{H})\leq\delta^{\prime}\widehat{M}(\mathbf{d}_{H})\) for some \(\delta^{\prime}\to 0\). Then immediately it follows that \(R(\mathbf{d}_{S})\leq 2\delta^{\prime}\widehat{M}(\mathbf{d}_{S})\) for all \(\mathbf{d}_{S}\in\mathcal{D}_{S}^{\prime}\). Then Theorem 5.1(a) and the law of total probability imply that for every \(\gamma>0\), the probability that \(G[S]\) contains a component with at least \(\gamma(|S|-n_{0}(\mathbf{d}_{H}))\) vertices is \(o(1)\). Since \(\mathbf{d}_{S}\in\mathcal{D}_{S}^{\prime}\) a.a.s. this completes the proof.
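A toy illustration of this dichotomy: the sketch below builds a stub-matching multigraph (as a rough proxy for the uniform model), keeps only the edges inside \(S\), and reports the order of the largest component of \(G[S]\); the degree sequence and the choice of \(S\) are arbitrary and picked to lie in the supercritical regime.

```python
import random
from collections import Counter

def stub_matching(degrees, rng):
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[0::2], stubs[1::2]))

def largest_component(n, edges):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return max(Counter(find(v) for v in range(n)).values())

rng = random.Random(2)
degrees = [4] * 2000
S = set(range(1000))                      # gamma = 1/2, so induced degrees are roughly Bin(4, 1/2)
induced = [(a, b) for a, b in stub_matching(degrees, rng) if a in S and b in S and a != b]
print(largest_component(len(degrees), induced))   # typically a large fraction of |S|
```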
## 6 Site percolation: random induced subgraphs
Proof of Theorem 2.6.: Immediately we know that \(\mathbb{E}\left[|S|\right]=np\), and linearity of expectation gives that
\[\mathbb{E}\left[d(S)\right]=\sum_{i\in[n]}d(i)\mathbb{P}\left(i\in S\right)=pM.\]
First we argue concentration of \(|S|\). The Chernoff bound given in (3.7) implies that
\[\mathbb{P}\left(||S|-\mathbb{E}\left[|S|\right]|\geq\varepsilon np\right)\leq 2 \exp\left(-\frac{np\varepsilon^{2}}{3}\right).\]
Letting \(\varepsilon=3\sqrt{\log n/pn}\), it immediately follows that this probability is at most \(2n^{-3}\). This completes the proof of (a). For part (b), we construct a martingale to show concentration of \(d(S)\). At step \(i\), for \(i\in[n]\), reveal whether vertex \(i\) is in \(S\). Define \(M_{i}=d(S\cap[i])\). Then it follows immediately that \(|M_{i}-M_{i-1}|\leq d(i)\) for all
\(i\leq n\) and \(M_{n}=d(S)\). Thus, Azuma's inequality implies that
\[\mathbb{P}\left(|d(S)-pM|\geq\alpha\right)\leq 2\exp\left(\frac{-\alpha^{2}}{2 \sum_{i\in V(G)}d(i)^{2}}\right).\]
By assumption, \(\Delta(\boldsymbol{d})=o\left(p^{6}\frac{\sqrt{M}}{\log^{6}M}\right)\), which implies that
\[\sum_{i\in V(G)}d(i)^{2}\leq\Delta(\boldsymbol{d})\sum_{i\in[n]}d(i)=o\left(p^ {6}\frac{M^{3/2}}{\log^{6}M}\right).\]
Thus, by setting \(\alpha=p^{3}M^{3/4}\), this probability is \(M^{-\omega(1)}\), which proves that \(S\) a.a.s. satisfies condition (b). For the remainder of this proof, we call a set \(S\) "good" if \(|S|=pn\left(1\pm 3\sqrt{\log n}/\sqrt{pn}\right)\) and \(d(S)=pM\left(1\pm p^{2}M^{-1/4}\right)\). It follows that \(S\) is good with probability at least \(1-3n^{-3}\).
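Parts (a) and (b) are also easy to observe empirically: the sketch below repeatedly samples \(S\) by keeping each vertex independently with probability \(p\) and reports the ratios \(|S|/(pn)\) and \(d(S)/(pM)\); the degree sequence and the value of \(p\) are toy choices.

```python
import random

def percolate(degrees, p, rng):
    """Keep each vertex independently with probability p; return |S| and d(S)."""
    S = [v for v in range(len(degrees)) if rng.random() < p]
    return len(S), sum(degrees[v] for v in S)

degrees = [4] * 5000 + [40] * 50          # toy sequence; Delta is tiny compared with sqrt(M)
n, M, p = len(degrees), sum(degrees), 0.3
rng = random.Random(0)
for _ in range(5):
    size, dS = percolate(degrees, p, rng)
    print(round(size / (p * n), 4), round(dS / (p * M), 4))   # both ratios stay close to 1
```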
Now we focus on part (c). Let \(S\) be some arbitrary "good" subset of \([n]\), and recall that \(\gamma(S)=d(S)/M\). Then Theorem 2.3(a) applies and thus it follows that
\[\mathbb{P}\left(\left.d_{S}(v)=\gamma(S)d(v)\left(1\pm 8\delta^{1/64}\right) \text{ for all }v\in S\text{ such that }d(v)>J(\gamma(S))\right|S\right)=1-o(1).\]
By parts (a) and (b) of this lemma, we know that \(S\) is good with probability \(1-o(1)\). If \(S\) is good, then \(\gamma(S)=p(1\pm p^{2}M^{-1/4})\). Therefore, \(J(\gamma(S))<2J(p)\) for every good set \(S\). Thus, the probability that there exists some \(v\in S\) with \(d(v)>2\delta^{1/32}J(p)\) with induced degree that is not \(pd(v)\left(1\pm 8\delta^{1/64}\right)(1\pm p^{2}M^{-1/4})\) is \(o(1)\). Since \(\delta=\omega(M^{-1/4})\), it follows that \((1\pm 8\delta^{1/64})(1\pm p^{2}M^{-1/4})=1\pm 9\delta^{1/64}\). This proves part (c).
Finally, we focus on part (d) of the lemma. Recall that \(n_{j}(\boldsymbol{d})\) is the number of entries of \(\boldsymbol{d}\) equal to \(j\), or equivalently (if \(\boldsymbol{d}\) is graphical) the number of vertices with degree \(j\) in a graph with degree sequence \(\boldsymbol{d}\). Let \(S_{j}\) be the set of all degree \(j\) vertices in \(S\). We can express \(|S_{j}|\) as a sum over all vertices in \(G\) with degree \(j\):
\[|S_{j}|=\sum_{d(i)=j}\mathbb{1}_{\{i\in S\}},\]
where \(\mathbb{1}_{\{i\in S\}}\) is an independent Bernoulli random variable with \(\mathbb{P}\left(\mathbb{1}_{\{i\in S\}}=1\right)=p\) for all \(i\in[n]\). It follows from linearity of expectation that \(\mathbb{E}\left[|S_{j}|\right]=pn_{j}(\boldsymbol{d})\). Since these indicators are independent random variables, the Chernoff bound given in (3.7) implies that
\[\mathbb{P}\left(\left||S_{j}|-pn_{j}(\boldsymbol{d})\right|>\alpha_{j}pn_{j}(\boldsymbol{d})\right)\leq 2\exp\left(-\frac{1}{3}\alpha_{j}^{2}pn_{j}(\boldsymbol{d})\right)\]
for each \(j\leq J(p)\). Define \(\alpha_{j}=\left(\frac{6\log J(p)}{pn_{j}(\boldsymbol{d})}\right)^{1/2}\). Then
\[\exp\left(-\frac{1}{3}\alpha_{j}^{2}pn_{j}(\boldsymbol{d})\right)=\exp\left(-2\log J(p)\right)=J(p)^{-2}.\]
The corresponding deviation is \(\alpha_{j}pn_{j}(\boldsymbol{d})=(6pn_{j}(\boldsymbol{d})\log J(p))^{1/2}\), so a union bound over the \(J(p)\) such events implies that, with probability at least \(1-O(J(p)^{-1})\), every \(|S_{j}|\) is within \((6pn_{j}(\boldsymbol{d})\log J(p))^{1/2}\) of its expectation. Suppose for some particular \(j\) that \(pn_{j}(\boldsymbol{d})\geq p^{-2}J(p)^{6}\log^{2}M\). Then, with probability \(1-o(1)\) it follows that
\[||S_{j}|-pn_{j}(\boldsymbol{d})|\leq(6pn_{j}(\boldsymbol{d})\log J(p))^{1/2} \leq(6\log J(p))^{1/2}\frac{pn_{j}(\boldsymbol{d})}{p^{-1}J(p)^{3}\log M}.\]
Now suppose that \(pn_{j}(\mathbf{d})<p^{-2}J(p)^{6}\log^{2}M\). Then it follows that
\[||S_{j}|-pn_{j}(\mathbf{d})|\leq(6pn_{j}(\mathbf{d})\log J(p))^{1/2}<(6\log J(p))^{1/2}p^ {-1}J(p)^{3}\log M.\]
Noting that \(\log J(p)\leq\frac{1}{6}\log M\), it follows that a.a.s. for all \(j\leq J(p)\)
\[||S_{j}|-pn_{j}(\mathbf{d})|\leq\frac{p^{2}n_{j}(\mathbf{d})}{J(p)^{3}\sqrt{\log M}}+ \delta^{1/16}J(p)^{4}\sqrt{\log M}. \tag{6.1}\]
Suppose that \(S\) is an arbitrary good subset of \([n]\) which also satisfies the concentration bounds given in Equation (6.1) for all \(j\in\{0,\ldots,J\}\). Since \(S\) is good, it follows that \(\gamma(S)=p(1\pm p^{2}M^{-1/4})\). The set \(S\) being good also implies that the conditions of Lemma 3.6 are met, and also that \(\frac{1}{3}pJ(p)<\frac{1}{2}\gamma(S)J(\gamma(S))\). Recall from Definition 2.2 the definition of \(n_{i}(\mathbf{d}_{H}(S))\), the number of entries in \(\mathbf{d}_{H}\) (for a given set \(S\)) with value \(i\). Since \(i\leq\frac{1}{2}\gamma(S)J(\gamma(S))\),
\[n_{i}(\mathbf{d}_{H}(S))=\sum_{j\leq J(\gamma(S))}|S_{j}|\mathbb{P}\left(Z_{j}(S)= i\right)\pm\left(1+o(M^{-5})\right),\]
where \(Z_{j}(S)\sim\operatorname{Bin}\left(j,\gamma(S)\right)\). Suppose \(i\leq\frac{1}{3}pJ(p)=\frac{1}{3}\gamma(S)J(\gamma(S))\). Since \(S\) is good, the Chernoff bound (3.7) implies that \(\mathbb{P}\left(Z_{j}(S)=i\right)=o(M^{-5})\) for all \(j\geq\frac{1}{2}J(p)\). Thus, for \(i\leq\frac{1}{3}pJ(p)\),
\[n_{i}(\mathbf{d}_{H}(S))=\sum_{j\leq J(p)}|S_{j}|\mathbb{P}\left(Z_{j}(S)=i\right) \pm 2. \tag{6.2}\]
Now we compare the summation to the value of \(\tilde{w}_{i}\). If \(d(v)\leq J(p)\), then for all \(i\leq d(v)\) it follows that
\[\frac{\mathbb{P}\left(X_{d(v)}=i\right)}{\mathbb{P}\left(Z_{d(v) }=i\right)} =\left(\frac{p}{\gamma(S)}\right)^{i}\left(\frac{1-p}{1-\gamma(S) }\right)^{d(v)-i}\] \[=\left(1+O\left(\frac{p^{2}i}{M^{1/4}}\right)\right)\left(1+O \left(\frac{p^{3}(d(v)-i)}{M^{1/4}}\right)\right)\] \[=1+O\left(\frac{p^{2}J(p)}{M^{1/4}}\right).\]
Recall that the conditions on \(p\) (specifically Equation (2.2)) imply that \(p=\omega(M^{-1/13})\). This implies that \(\frac{p^{2}J(p)}{M^{1/4}}\leq\frac{p}{M^{1/40}J(p)^{3}}\). Therefore,
\[\sum_{j\leq J(p)}|S_{j}|\mathbb{P}\left(Z_{j}(S)=i\right)=\sum_{j\leq J(p)}|S_ {j}|\mathbb{P}\left(X_{j}=i\right)\left(1+o\left(\frac{p}{J(p)^{3}}\right) \right). \tag{6.3}\]
Since \(i\leq\frac{1}{3}pJ(p)\), the Chernoff bound given in (3.7) implies that
\[\tilde{w}_{i}=p\sum_{v\in V}\mathbb{P}\left(X_{d(v)}=i\right)=p\sum_{j\leq J( p)}n_{j}(\mathbf{d})\mathbb{P}\left(X_{j}=i\right)+o(1).\]
Since we assume that \(S\) satisfies the concentration inequalities given in Equation (6.1), it follows that
\[\sum_{j\leq J(p)}|S_{j}|\mathbb{P}\left(X_{j}=i\right) =\sum_{j\leq J(p)}\mathbb{P}\left(X_{j}=i\right)\left(pn_{j}( \boldsymbol{d})+o\left(\frac{p^{2}n_{j}(\boldsymbol{d})}{J(p)^{3}}+J(p)^{4} \sqrt{\log M}\right)\right)\] \[=\sum_{j\leq J(p)}\mathbb{P}\left(X_{j}=i\right)pn_{j}( \boldsymbol{d})\left(1+o\left(\frac{p}{J(p)^{3}}\right)\right)+\sum_{j\leq J( p)}\mathbb{P}\left(X_{j}=i\right)o\left(\frac{pJ(p)^{5}}{\sqrt{\log M}}\right)\] \[=\tilde{w}_{i}+o\left(\frac{p\tilde{w}_{i}}{J(p)^{3}}+\frac{pJ(p) ^{6}}{\sqrt{\log M}}\right). \tag{6.4}\]
Combining Eqs. (6.2 - 6.4) it follows that, conditional on the aforementioned good set \(S\),
\[n_{i}(\boldsymbol{d}_{H}(S))=\tilde{w}_{i}+o\left(\frac{p\tilde{w}_{i}}{J(p)^ {3}}+\frac{pJ(p)^{6}}{\sqrt{\log M}}\right). \tag{6.5}\]
Since \(S\) is good, the pair \((\boldsymbol{d},S)\) also satisfy the conditions of Theorem 2.3(c). This implies that, for this fixed set \(S\), a.a.s.
\[|n_{i}(\boldsymbol{d}_{S})-n_{i}(\boldsymbol{d}_{H}(S))|\leq\frac{\gamma(S)n_ {i}(\boldsymbol{d}_{H}(S))}{J(\gamma(S))^{3}}+\gamma(S)J(\gamma(S))^{5}. \tag{6.6}\]
Recall that \(n_{i}(\boldsymbol{d}_{A})=\tilde{w}_{i}\pm 1\) for all \(i\). Thus, the bounds given in (6.5) and (6.6) and the triangle inequality imply that, conditional on the event that \(S\) is good and also satisfies the concentration bounds given in Equation (6.1), a.a.s.
\[|n_{i}(\boldsymbol{d}_{S})-n_{i}(\boldsymbol{d}_{A})|\leq|n_{i}(\boldsymbol{d }_{S})-n_{i}(\boldsymbol{d}_{H}(S))|+|n_{i}(\boldsymbol{d}_{H}(S))-n_{i}( \boldsymbol{d}_{A})|\leq\frac{pn_{i}(\boldsymbol{d}_{A})}{J(p)^{3}}(1+o(1))+o \left(\frac{pJ(p)^{6}}{\sqrt{\log M}}\right)\]
for all \(i\leq\frac{1}{3}pJ(p)\). Since \(S\) satisfies these conditions a.a.s., this proves part (d).
Recall the definition \(M_{2}(\boldsymbol{d})=\sum_{i=1}^{n}d(i)^{2}\), where \(\boldsymbol{d}\) is a sequence of length \(n\). Very similarly to the case where \(S\) is fixed, it follows that \(M(\boldsymbol{d}_{S})\) and \(M_{2}(\boldsymbol{d}_{S})\) are both concentrated around their corresponding values for \(\boldsymbol{d}_{A}\). We do not give the full proof here, as it is practically identical to the proof of Lemma 4.1.
**Lemma 6.1**.: \(M(\boldsymbol{d}_{A})=p^{2}M(1+o(1))\)_. With probability \(1-o(1)\), \(M(\boldsymbol{d}_{S})\sim M(\boldsymbol{d}_{A})\) and \(M_{2}(\boldsymbol{d}_{S})\sim M_{2}(\boldsymbol{d}_{A})\)._
Proof sketch.: The proof of this claim is analogous to Lemma 4.1, using Theorem 2.6 instead of Theorem 2.3. The same proof method works, noting that \((2\delta^{1/32}\gamma J(p))^{2}\frac{pJ(p)^{6}}{\sqrt{\log M}}=o(p^{2}M/J(p))\).
**Remark 6.2**.: _Recall that \(\boldsymbol{d}^{*}\) is the sequence \(\boldsymbol{d}\) with all entries equal to \(0\) removed. An analogous argument to Lemma 4.2, calling on Theorem 2.6 instead of Theorem 2.3, implies that \(n(\boldsymbol{d}_{A}^{*})=\Omega(p^{2}n)\) and a.a.s. \(n(\boldsymbol{d}_{A}^{*})\sim n(\boldsymbol{d}_{S}^{*})\)._
## 7 Giant components in the percolated graph
The proof of Theorem 2.7 follows quickly from the previously proved results about the percolated random graph model and arguments analogous to those used to prove Theorem 2.4.
**Lemma 7.1**.: \(\widehat{M}(\boldsymbol{d}_{A})=\Theta(M(\boldsymbol{d}_{A}))\)_, and a.a.s. \(\widehat{M}(\boldsymbol{d}_{A})-\widehat{M}(\boldsymbol{d}_{S})=o(M( \boldsymbol{d}_{A}))\)._
Proof.: The proof of this lemma is essentially the same as Lemma 5.2, applying Lemma 6.1 instead of Lemma 4.1 and Theorem 2.6 instead of Theorem 2.3, so we omit the details.
**Lemma 7.2**.: _If \(S\) is good, then a.a.s. \(|R(\mathbf{d}_{S}(S))-R(\mathbf{d}_{A})|=o(M(\mathbf{d}_{A}))\)._
Proof.: The proof of this claim is very similar to the proof of Lemma 5.4, so we omit the details. The notable differences are that we apply Lemma 6.1 instead of Lemma 4.1, and that the numbers of entries in each sequence \(\mathbf{d}_{S}\) or \(\mathbf{d}_{A}\) with value \(0\), \(1\), or \(2\) differ by at most \(o\left(\frac{\gamma J(p)^{6}}{\sqrt{\log M}}\right)+\sum_{i=0}^{2}\frac{\gamma n _{i}(\mathbf{d}_{H})}{J^{3}}\).
Similar to the case where \(S\) is fixed, we define \(\mathcal{D}\) to be the set of all possible sequences of all induced subgraphs of every \(G\in\mathcal{G}(\mathbf{d})\). We then define \(\mathcal{D}(p)\) to be the subset of these sequences that satisfy the bounds stated in parts (a)-(d) of Theorem 2.6. Again, the definition of \(\mathcal{D}(p)\) is implicitly dependent on \(\delta\), but we omit this.
Proof of Theorem 2.7.: Theorem 2.6 implies that \(n(\mathbf{d}_{S})=|S|=np\pm 3\sqrt{np\log n}\) for all \(\mathbf{d}_{S}\in\mathcal{D}(p)\). Remark 6.2 then implies that
\[n(\mathbf{d}_{S}^{*})\sim n(\mathbf{d}_{H}^{*})=\lfloor np\rfloor-n_{0}(\mathbf{d}_{H})\pm 3 \sqrt{np\log n}.\]
Since \(np-n_{0}(\mathbf{d}_{H})\geq p^{2}n\) (by Remark 6.2), this implies that \(n(\mathbf{d}_{S}^{*})=(np-n_{0}(\mathbf{d}_{H}))(1+o(1))\). Then Theorem 2.6 and Lemmas 6.1 and 7.2 imply that if \(R(\mathbf{d}_{A})\geq\varepsilon\widehat{M}(\mathbf{d}_{A})\) for some \(\varepsilon>0\), then a.a.s. \(R(\mathbf{d}_{S})\geq\frac{1}{2}\varepsilon\widehat{M}(\mathbf{d}_{S})\), and conversely that if \(R(\mathbf{d}_{A})\leq\delta^{\prime}\widehat{M}(\mathbf{d}_{A})\) for some \(\delta^{\prime}\to 0\), then a.a.s. \(R(\mathbf{d}_{S})\leq 2\delta^{\prime}\widehat{M}(\mathbf{d}_{A})\). Lemma 6.1 also implies that \(\mathbf{d}_{A}\) is well-behaved and a.a.s. \(\mathbf{d}_{S}\) is well-behaved. Then applying the law of total probability and Theorem 5.1 to each possible choice of \(\mathbf{d}_{S}\) completes the proof.
|
2308.02200 | Online Obstacle evasion with Space-Filling Curves | The paper presents a strategy for robotic exploration problems using
Space-Filling curves (SFC). The region of interest is first tessellated, and
the tiles/cells are connected using some SFC. A robot follows the SFC to
explore the entire area. However, there could be obstacles that block the
systematic movement of the robot. We overcome this problem by providing an
evading technique that avoids the blocked tiles while ensuring all the free
ones are visited at least once. The proposed strategy is online, implying that
prior knowledge of the obstacles is not mandatory. It works for all SFCs, but
for the sake of demonstration, we use Hilbert curve. We present the
completeness of the algorithm and discuss its desirable properties with
examples. We also address the non-uniform coverage problem using our strategy. | Ashay Wakode, Arpita Sinha | 2023-08-04T08:34:15Z | http://arxiv.org/abs/2308.02200v1 | # Online Obstacle evasion with Space-Filling Curves
###### Abstract
The paper presents a strategy for robotic exploration problems using Space-Filling curves (SFC). The region of interest is first tessellated, and the tiles/cells are connected using some SFC. A robot follows the SFC to explore the entire area. However, there could be obstacles that block the systematic movement of the robot. We overcome this problem by providing an evading technique that avoids the blocked tiles while ensuring all the free ones are visited at least once. The proposed strategy is online, implying that prior knowledge of the obstacles is not mandatory. It works for all SFCs, but for the sake of demonstration, we use Hilbert curve. We present the completeness of the algorithm and discuss its desirable properties with examples. We also address the non-uniform coverage problem using our strategy.
Robotic Exploration, Space-Filling curve, Online Obstacle evasion, Non-uniform coverage.
## I Introduction
In 1878, George Cantor demonstrated that the interval \(I=[0,1]\) can be mapped bijectively onto \([0,1]\times[0,1]\). Later, G. Peano discovered a mapping that is both continuous and surjective; the image of such a mapping, parameterized over the interval \(I\) and taking values in a higher-dimensional space (\(\mathbb{R}^{n}\)), is known as a Space-Filling Curve (SFC). More SFCs were later discovered by E. Moore, H. Lebesgue, W. Sierpinski, and G. Polya [1, 2]. See Fig. 1. SFCs have some interesting properties -
* Each curve is made out of similar sub-curves
* SFC passes through every point of \(\mathbb{R}^{n}\)
* Two points close by in \(I\) map to close by points in \(\mathbb{R}^{n}\)
Due to the above properties, SFCs have been used in many applications - data collection from sensor networks [3, 4], ordering meshes of complex geometries [2], and many more. An approximate solution to the Travelling Salesman Problem (TSP) can be found using Hilbert's Space-Filling curve [5]. Space-Filling Trees, analogous to SFCs but with a tree-like structure, have been proposed for sampling-based path planning [6], as opposed to traditional methods like Rapidly-exploring Random Trees (RRTs) [7].
One of the major applications of SFCs is robotic exploration. In the robotic exploration problem, a single robotic agent or a group of agents is deployed to search, survey or gather information about a specific region while avoiding obstacles. Robotic exploration is one of the sub-problems of the larger Coverage Planning Problem (CPP), wherein the agent is tasked with visiting all points in a given region or volume while avoiding obstacles [8, 9, 10]. Numerous approaches for CPP already exist - graph based, grid based, neural-network based, and cellular-decomposition based [8]. Each of these approaches can be used for the robotic exploration problem.
SFCs have been used for robotic exploration, and the reasons that make them suitable are:
* **Complete and Robust**: Coverage using SFCs is time complete and robust to failure [11]
* Dimension: The strategy developed for 2D can be extended to 3D since a similar grammar exists for the construction of SFCs in both dimensions [2]
* Number of search agents: Coverage using SFCs has been shown to be complete and robust when multiple agents are used [11]
* Irregular Area: The generalized version of SFCs, aka Generalized SFCs (GSFCs), can span irregular quadrilaterals and triangles, as opposed to SFCs, which map regular shapes like squares or isosceles right triangles [2]
* **Non-Uniform coverage**: SFCs can easily be used in non-uniform coverage scenarios requiring some parts to be searched more rigorously than others. On top of that, the Hilbert curve has been shown to be more efficient in time/energy than the lawnmower's path [12]
In this work, a 2D area with multiple obstacles of arbitrary shape and size is explored using an SFC. The area is tessellated into tiles/cells (referred to as waypoints), which are connected in the order determined by the SFC. A tile containing an obstacle is considered unreachable. An online strategy is proposed that lets a search agent explore the area by following the SFC while evading obstacles on the go: it suggests a modified waypoint whenever an unreachable waypoint is encountered, and it guarantees that all waypoints reachable from the initial waypoint are explored. The strategy works for all SFCs, but for the sake of demonstration the Hilbert curve is used in this paper.
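As a minimal illustration of such a waypoint ordering (not the evasion algorithm developed in this paper), the sketch below converts Hilbert-curve indices to grid coordinates with the standard iterative mapping and simply skips blocked tiles; the grid order and the obstacle set are placeholder values.

```python
def hilbert_d2xy(order, d):
    """Map a distance d along the Hilbert curve to (x, y) on a 2**order grid."""
    x = y = 0
    s = 1
    t = d
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order = 3                                  # 8 x 8 grid of waypoints
blocked = {(2, 3), (2, 4), (5, 5)}         # placeholder obstacle tiles
route = [hilbert_d2xy(order, d) for d in range(4 ** order)]
visited = [p for p in route if p not in blocked]
print(len(visited), "free waypoints visited in Hilbert order")
```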
Fig. 1: Hilbert curve and Sierpinski curve |
2307.12696 | Impact of Ultrasound on the Motion of Compact Particles and
Acousto-responsive Microgels | In this study, we investigate dynamic light scattering (DLS) from both
randomly diffusing silica particles and acousto-responsive microgels in aqueous
dispersions under ultrasonic vibration. Employing high-frequency ultrasound
(US) with low amplitude ensures that the polymers remain intact without damage.
We derive theoretical expressions for the homodyne autocorrelation function,
incorporating the US term alongside the diffusion term. Subsequently, we
successfully combine US with a conventional DLS system to experimentally
characterize compact silica particles and microgels under the influence of US.
Our model allows us to extract essential parameters, including particle size,
frequency, and amplitude of particle vibration, based on the correlation
function of the scattered light intensity. The studies involving non-responsive
silica particles demonstrate that US does not disrupt size determination,
establishing them as suitable reference systems. Microgels show the same
swelling/shrinking behavior as that induced by temperature, but with
significantly faster kinetics. The findings of this study have potential
applications in various industrial and biomedical fields that benefit from the
characterization of macromolecules subjected to US. | Sebastian Stock, Regine von Klitzing, Amin Rahimzadeh | 2023-07-24T11:21:54Z | http://arxiv.org/abs/2307.12696v1 | **Impact of Ultrasound on the Motion of Compact Particles and Acousto-responsive Microgels**
## Abstract
In this study, we investigate dynamic light scattering (DLS) from both randomly diffusing silica particles and acousto-responsive microgels in aqueous dispersions under ultrasonic vibration. Employing high-frequency ultrasound (US) with low amplitude ensures that the polymers remain intact without damage. We derive theoretical expressions for the homodyne autocorrelation function, incorporating the US term alongside the diffusion term. Subsequently, we successfully combine US with a conventional DLS system to experimentally characterize compact silica particles and microgels under the influence of US. Our model allows us to extract essential parameters, including particle size, frequency, and amplitude of particle vibration, based on the correlation function of the scattered light intensity. The studies involving non-responsive silica particles demonstrate that US does not disrupt size determination, establishing them as suitable reference systems. Microgels show the same swelling/shrinking behavior as that induced by temperature, but with significantly faster kinetics. The findings of this study have potential applications in various industrial and biomedical fields that benefit from the characterization of macromolecules subjected to US.
## I Introduction
Dynamic light scattering (DLS) is commonly used for the characterization of particles and molecules in solutions/dispersions, such as determining their size and size distribution, as well as conformational changes. DLS can accurately measure sub-micrometer particles (from a few nanometers to one micrometer). When a laser beam impinges on the particles inside a liquid sample, the particles scatter light in all directions. A detector at a certain location detects a fraction of the scattered light and measures it as intensity fluctuations over time. In a conventional DLS system, these fluctuations are caused by interference between scattered light from an ensemble of particles moving due to Brownian motion. The intensity fluctuations can be analyzed by calculating the time-dependent correlation of the signal with itself at different time lags, which is called the autocorrelation analysis technique. The particle size can be determined from the decay rate of the autocorrelation function according to the Stokes-Einstein relation [1, 2]. According to Brownian motion theory, smaller particles diffuse faster and thus exhibit shorter correlation times [3].
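As a brief numerical illustration of this sizing principle (not part of the original study; all numbers below are placeholders), the decay rate of the autocorrelation function can be converted into a hydrodynamic diameter through the Stokes-Einstein relation:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant in J/K

def scattering_vector(n, theta_deg, wavelength_m):
    """q = 4*pi*n*sin(theta/2)/lambda."""
    return 4.0 * np.pi * n * np.sin(np.radians(theta_deg) / 2.0) / wavelength_m

def hydrodynamic_diameter(decay_rate, q, T=295.0, eta=0.95e-3):
    """Stokes-Einstein: Gamma = D*q**2 and D = kB*T/(3*pi*eta*d_H)."""
    D = decay_rate / q**2
    return kB * T / (3.0 * np.pi * eta * D)

# Illustrative numbers: 633 nm laser, water (n = 1.33), detection at 90 degrees
q = scattering_vector(n=1.33, theta_deg=90.0, wavelength_m=633e-9)
print(hydrodynamic_diameter(decay_rate=1.0e3, q=q))   # diameter in metres
```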
In many cases, information about the mechanical, conformational, and electrical properties of particles or molecules, including their size, shape, and surface charges, is needed under the influence of external forces or fields. Therefore, _dynamic light scattering in external fields_ has many applications in biophysics, materials science, and chemical engineering [4]. In those systems (e.g., particles in alternating electric fields [5, 6], in thermal inhomogeneities [7, 8], or in directional flows [9]), colloidal particles undergo an additional movement on top of their original random motion. These additional movements create complicated intensity fluctuations from which the autocorrelation function (ACF), with the conventional fitting parameters, no longer yields the diffusion coefficient and hence the correct particle size [10]. Therefore, one should modify the ACF fitting parameters in order to include the additional translational motion and to distinguish it from the pure Brownian diffusion. For instance, in the case of DLS measurements of colloidal particles in a flowing condition, researchers have modified the ACF so that they could obtain the correct particle size as well as the flow velocity [9, 11-13].
One of the external fields that has drawn attention in recent decades is ultrasound (US), which is employed to manipulate the physical or chemical properties of particles and molecules, with applications in drug delivery [14], catalysis [15], and materials synthesis [16]. Recently, we showed that high-frequency US, in its non-destructive regime (low amplitude), can be used as a stimulus to induce a phase transition in solutions of linear poly(N-isopropylacrylamide) (PNIPAM) [17, 18]. The dehydration of PNIPAM, which is usually triggered by increasing the temperature above the lower critical solution temperature (LCST), is then induced by US. In the case of linear PNIPAM, the phase transition is detectable by an onset of turbidity. PNIPAM microgels are cross-linked polymer networks that shrink and reduce in size upon a stimulus, which makes them promising for drug delivery systems. They are also temperature sensitive, and their dehydration might also be induced by US. In order to monitor the size of microgels (or any other responsive particles) subjected to US, one has to modify the DLS system and analyze the resulting ACF of standard particles under the influence of ultrasonic waves as a reference system. Besides the volume phase transition (VPT), ultrasound might lead to other disturbances, such as fluctuations of the microgel trajectories or acoustic streaming, which might induce an apparent size change of the microgels. In order to separate these effects, in this work we first determine the size of solid particles using DLS subjected to US, assuming that their shape is invariant under US. In the first step, we derive the homodyne correlation function for vibrating particles based on the continuity equation, following the approach of Berne and Pecora [19]. Then, we compare the results to a representative system of silica nanospheres subjected to US. This system is a simple combination of a conventional DLS setup with a US component and establishes a foundation for more complicated experimental systems.
Based on the results obtained for the reference system, we conduct an experiment on a sample of PNIPAM microgels to assess their US-induced VPT through their change in size. The microgel diameter as a function of ultrasonic actuation time is compared with its diameter as a function of temperature.
## II Deriving equations
Here we show the principal derivation of the intensity autocorrelation function of the light scattered from an ensemble of particles undergoing Brownian diffusive motion as well as an externally induced flow with velocity \(\mathbf{v}\). The particle concentration at point \(\mathbf{r}\) and time \(t\) is defined by \(c(\mathbf{r},t)\). The continuity equation, which describes how the particles flow and diffuse (with a diffusion coefficient \(D\)) in the system, can be written as:
\[\frac{\partial c}{\partial t}+\nabla\cdot(\mathbf{v}c)=D\nabla^{2}c \tag{1}\]
According to Berne and Pecora [19], whose notation we adopted in this study, it is reasonable to assume that the _probability distribution function_, \(G_{s}(\mathbf{r},t)\), satisfies the same equation. Therefore, we have:
\[\frac{\partial G_{s}}{\partial t}+\nabla\cdot(\mathbf{v}G_{s})=D\nabla^{2}G_{s} \tag{2}\]
We consider the _characteristic function of distribution_, \(F_{s}(\mathbf{q},t)\) as the Fourier transform of \(G_{s}\), based on the following definitions:
\[F_{s}(\mathbf{q},t)=\int G_{s}(\mathbf{r},t)\exp(i\mathbf{q}\cdot\mathbf{r})\,d^{3}\mathbf{r}, \tag{3}\]
\[G_{s}(\mathbf{r},t)=(2\pi)^{-3}\int F_{s}(\mathbf{q},t)\exp(-i\mathbf{q}\cdot\mathbf{r})\,d^{ 3}q \tag{4}\]
where \(q=\frac{4\pi\,n\sin\left(\frac{\theta}{2}\right)}{\lambda_{l}}\) is the scattering wave vector with a wavelength of \(\lambda_{l}\) and scattering angle of \(\theta\) and the medium refractive index of \(n\). In case the system is subjected to US waves having a wave vector of \(|\mathbf{k}|=2\pi/\lambda_{u}\) (\(\lambda_{u}\) is the US wavelength) and angular frequency of \(\omega\), the velocity of the fluid at point \(\mathbf{r}\) and time \(t\), having a mean value \(\mathbf{v_{0}}\) can be written as:
\[\mathbf{v}(\mathbf{r},t)=\mathbf{v_{0}}\exp(i\mathbf{k}\cdot\mathbf{r}-i\omega t). \tag{5}\]
By taking the spatial Fourier transform of eq. (2), the first term on the left-hand side and the term on the right-hand side yield \(\frac{\partial F_{\mathrm{s}}(\mathbf{q},t)}{\partial t}\) and \(-Dq^{2}F_{\mathrm{s}}(\mathbf{q},t)\), respectively. The second term on the left-hand side can be written as (Ft: Fourier transform):
\[Ft[\mathbf{v}\cdot\nabla G+G\nabla\cdot\mathbf{v}]=\int\mathbf{v}\cdot\frac{\partial G_{ \mathrm{s}}}{\partial r}\exp(i\mathbf{q}\cdot\mathbf{r})\,d^{3}r+\int G_{\mathrm{s}} \frac{\partial\mathbf{v}}{\partial r}\exp(i\mathbf{q}\cdot\mathbf{r})\,d^{3}r. \tag{6}\]
Using eq. (5), eq. (6) becomes:
\[\mathbf{v_{0}}\exp(-i\omega t)\int\frac{\partial G_{\mathrm{s}}}{\partial r}\, \mathrm{e}^{i(\mathbf{q}+\mathbf{k})\cdot\mathbf{r}}d^{3}r+i\mathbf{k}\cdot\mathbf{v_{0}}\exp(-i \omega t)\int G_{\mathrm{s}}\mathrm{e}^{i(\mathbf{q}+\mathbf{k})\cdot\mathbf{r}}d^{3}r, \tag{7}\]
where \(\int\frac{\partial G_{\mathrm{s}}}{\partial r}\,\mathrm{e}^{i(\mathbf{q}+\mathbf{k})\cdot\mathbf{r}}d^{3}r=-iq\int G_{\mathrm{s}}\mathrm{e}^{i(\mathbf{q}+\mathbf{k})\cdot\mathbf{r}}d^{3}r\). The reason is that taking the derivative of eq. (4) leads to:
\[\frac{\partial}{\partial r}G_{\mathrm{s}}(\mathbf{r},t)=-iq(2\pi)^{-3}\int F_{ \mathrm{s}}(\mathbf{q},t)\exp(-i\mathbf{q}\cdot\mathbf{r})\,d^{3}q=-iqG_{\mathrm{s}}. \tag{8}\]
Therefore, eq. (7) can be written as:
\[-i\mathbf{q}\cdot\mathbf{v_{0}}\exp(-i\omega t)\int G_{\mathrm{s}}\mathrm{e }^{i(\mathbf{q}+\mathbf{k})\cdot\mathbf{r}}d^{3}r+i\mathbf{k}\cdot\mathbf{v_{0}}\exp(-i\omega t) \int G_{\mathrm{s}}\mathrm{e}^{i(\mathbf{q}+\mathbf{k})\cdot\mathbf{r}}d^{3}r \tag{9}\] \[=-i(\mathbf{q}-\mathbf{k})\cdot\mathbf{v_{0}}\exp(-i\omega t)\,F_{\mathrm{s}} (\mathbf{q}+\mathbf{k},t).\]
Finally, the spatial Fourier transform of eq. (2) leads to:
\[\frac{\partial F_{\mathrm{s}}(\mathbf{q},t)}{\partial t}-i(\mathbf{q}-\mathbf{k})\cdot\bm {v_{0}}\exp(-i\omega t)\,F_{\mathrm{s}}(\mathbf{q}+\mathbf{k},t)=-Dq^{2}F_{\mathrm{s}} (\mathbf{q},t). \tag{10}\]
In the case of a uniform flow with the velocity of \(\mathbf{v_{0}}\) instead of ultrasonic vibration (i.e., \(k=\omega=0\)), eq. (10) reduces to the same equation derived by Berne and Pecora [19] as:
\[\frac{\partial F_{\mathrm{s}}(\mathbf{q},t)}{\partial t}-i\mathbf{q}\cdot\mathbf{v_{0}}F_{ \mathrm{s}}(\mathbf{q},t)=-Dq^{2}F_{\mathrm{s}}(\mathbf{q},t). \tag{11}\]
With the initial condition of \(F_{\mathrm{s}}(\mathbf{q},0)=1\), we have \(F_{\mathrm{s}}(\mathbf{q},t)=\exp(-Dq^{2}t)\exp(i\mathbf{q}\cdot\mathbf{v_{0}}t)\).
At sufficiently low US frequencies (i.e., less than 1 GHz) in water as the medium, \(q\gg k\). Therefore, eq. (10) can be simplified as:
\[\frac{\partial F_{\mathrm{s}}(\mathbf{q},t)}{\partial t}-i\mathbf{q}\cdot\mathbf{v_{0}} \exp(-i\omega t)\,F_{\mathrm{s}}(\mathbf{q},t)=-Dq^{2}F_{\mathrm{s}}(\mathbf{q},t). \tag{12}\]
By solving the eq. (12) with the initial condition of \(F_{\mathrm{s}}(\mathbf{q},0)=1\), \(F_{\mathrm{s}}\) can be obtained as:
\[F_{\mathrm{s}}(\mathbf{q},t)=\exp\{-Dq^{2}t\}\exp\{-\frac{\mathbf{q}\cdot\mathbf{v_{0}}}{ \omega}\exp(-i\omega t)\}. \tag{13}\]
Assuming that \(\mathbf{v_{0}}=\mathbf{r_{0}}\omega\), where \(r_{0}\) is the amplitude of vibration, eq. (13) can be rewritten as:
\[F_{\mathrm{s}}(\mathbf{q},t)=\exp\{-Dq^{2}t\}\exp\{-\mathbf{q}\cdot\mathbf{r_{0}}\exp(-i \omega t)\}. \tag{14}\]
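As a quick symbolic consistency check (an illustration, not part of the original derivation), one can verify that eq. (14) indeed satisfies eq. (12) when \(\mathbf{q}\cdot\mathbf{v_{0}}\) is treated as the scalar \(qr_{0}\omega\):

```python
import sympy as sp

t, D, q, r0, omega = sp.symbols('t D q r_0 omega', positive=True)

# Candidate solution, eq. (14)
F = sp.exp(-D * q**2 * t) * sp.exp(-q * r0 * sp.exp(-sp.I * omega * t))

# Left- and right-hand sides of eq. (12) with q.v0 -> q*r0*omega
lhs = sp.diff(F, t) - sp.I * q * r0 * omega * sp.exp(-sp.I * omega * t) * F
rhs = -D * q**2 * F

print(sp.simplify(lhs - rhs))   # prints 0: eq. (14) solves eq. (12)
```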
From Berne and Pecora [19], the homodyne correlation function can be obtained as:
\[F_{2}(\mathbf{q},t)=\langle N\rangle^{2}[1+|F_{\mathrm{s}}(\mathbf{q},t)|^{2}]+\langle\delta N(0)\delta N(t)\rangle \tag{15}\] \[=\langle N\rangle^{2}[1+Re(\exp\{-Dq^{2}t\}\exp\{-2\mathbf{q}\cdot\mathbf{r_{0}}\exp(-i\omega t)\})]+\langle\delta N(0)\delta N(t)\rangle\]
Therefore, in the homodyne experiment, the autocorrelation function for the time lag, \(\tau\), can be written as:
\[g_{2}(\tau)=\,A[1+B\,\exp\{-2Dq^{2}\tau\}\exp\{-2qr_{0}\cos(\omega\tau)\}]. \tag{16}\]
The factor \(B\) is called the intercept; it is related to the light-collection efficiency and can be eliminated by normalizing the autocorrelation function [20]. Therefore, the normalized autocorrelation function (NACF) can be written as:
\[NACF=\frac{g_{2}(\tau)-A}{B}=\exp\{-2Dq^{2}\tau\}\exp\{-2qr_{0}\cos(\omega\tau)\}. \tag{17}\]
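The following sketch (an illustration only; it is not the MATLAB cumulant-analysis code used later for the actual data, and all numerical values are placeholders) evaluates the model NACF of Eq. (17) and recovers the diffusion coefficient, vibration amplitude and frequency from synthetic data by least-squares fitting, with the known drive frequency used as the initial guess:

```python
import numpy as np
from scipy.optimize import curve_fit

def nacf(tau, D, r0, omega, q=1.87e7):
    """Eq. (17): exp(-2*D*q**2*tau)*exp(-2*q*r0*cos(omega*tau)); q ~ 633 nm, 90 deg."""
    return np.exp(-2.0 * D * q**2 * tau) * np.exp(-2.0 * q * r0 * np.cos(omega * tau))

tau = np.linspace(1e-7, 2e-4, 4000)                          # lag times in seconds
true = dict(D=2.0e-12, r0=1.0e-8, omega=2 * np.pi * 255e3)   # placeholder parameters
data = nacf(tau, **true) + 1e-3 * np.random.default_rng(1).normal(size=tau.size)

p0 = [1.0e-12, 0.5e-8, 2 * np.pi * 255e3]   # drive frequency is known experimentally
popt, _ = curve_fit(nacf, tau, data, p0=p0)
print(popt)                                  # recovered D, r0, omega
```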
## III Experimental
The experimental setup, as schematically shown in Figure 1a, comprises a HeNe gas laser (HNL150L, Thorlabs GmbH, Germany, \(\lambda=633\)\(nm\)), a photodetector (APD130A2/M, Thorlabs GmbH, Germany) detecting the scattered light at an angle of 90\({}^{\circ}\), data acquisition system (BNC-2110, National Instruments, USA, sampling rate = 1 MS/s) connected to a computer. Piezoelectric transducers (STEMINC-PIEZO, Davenport, IA, USA) with different resonance frequencies (40 kHz, 255kHz, 780 kHz, 2.34 MHz, and 5.4 MHz) were used and attached to the glass cuvette (inside dimensions of 10\(\times\)10\(\times\)40 mm\({}^{3}\)) using a two-component latex glue (UHU Endfast Plus300, Buhl, Germany). The RF signal was generated using a function generator (SDG1062X, SIGLENT, Shenzhen, China) and was amplified by an RF amplifier (VBA100-30, Vectawave, UK). We used silica nanospheres (nanoComposix, CA, USA) in different diameters (80, 200, 500, and 1000 nm) as colloidal dispersions diluted in milli-Q water. PNIPAM microgel particles with 5 mol% cross-linker content (BIS) were synthesized using precipitation polymerization. For detailed information see our previous works [21, 22]. A commercially available DLS system (LS Instrument, Switzerland) was used to measure the hydrodynamic diameter of PNIPAM microgels due to changes in temperature.
## IV Results and discussion
According to Eq. (17), the photon intensity correlation function exhibits an oscillatory behavior. Such behavior was previously observed for particles in alternating electric fields at very low frequencies [4]. From the modified NACF, the diffusion coefficient due to Brownian motion, and hence the particle size, can be acquired independently of the US effect. Moreover, the oscillatory component in Eq. (17) specifies both the frequency and the amplitude of the particle's vibration. The experimental results are analyzed using our MATLAB code based on linear cumulant analysis and are compared with the model. The NACFs from experiment and model (shown in Figure 1b) for 200 nm silica particles subjected to 255 kHz US show good agreement. However, the oscillations damp gradually in the model, while the experiments show a fixed amplitude of oscillations. Nevertheless, the NACF at 255 kHz gives us valuable information about the frequency and amplitude of vibrations (Figure 2). The frequency of these oscillations, shown more clearly on the linear time scale in the graph inset, is equal to the input frequency of the US (Figure 2a). The curve without US presents a mean value around which the data
Figure 1: (a) Schematic of the experimental apparatus comprising a DLS setup and US system connected to the sample via a piezoelectric transducer and glass cuvette. (b) Normalized autocorrelation function (NACF) for 200 nm silica particles subjected to 255 kHz and 300 mV US.
recorded in the presence of US fluctuates. The amplitude of the oscillations increases with increasing input voltage, although the frequency and phase remain constant. Figure 2b shows how particles of different sizes behave under the same US conditions. Particles of smaller sizes, such as those with 80 nm diameter, demonstrate a significantly greater oscillation amplitude compared to their larger counterparts, like those with a size of 1000 nm. However, despite the amplitude difference, both small and large particles exhibit the same frequency and phase of oscillation. The extracted amplitude of vibration based on our model in Eq. (17) and Figure 2b is shown in Table 1 for all particles. Although the order of magnitude of the amplitudes in Table 1 seems reasonable, there is unfortunately no other experimental method to confirm the values.
Based on our theory and experimental results, we can assert that the particle size can be accurately determined regardless of the frequency and amplitude of the US used (Figure 3a). The frequency of the US, which is identical to that of the particles, can be extracted from the NACF provided that the data acquisition system has a sufficient sampling rate. Figure 3b shows that at frequencies higher than half of the sampling rate (which is 500 kHz) the oscillations cannot be captured at the correct frequency, in accordance with the Nyquist-Shannon sampling theorem [23]. The question now is why acoustic streaming does not interfere with the DLS results. Generally, when a liquid is subjected to US, a flow field develops due to the absorption of ultrasonic waves by the liquid viscosity. This flow may influence the diffusion of particles by increasing their translational velocity, leading to a wrong size estimation. However, in our experiments, the diffusion time scales for the range of particle sizes investigated are considerably shorter than the time scales associated with acoustic streaming. As a result, acoustic streaming does not influence the extracted diffusion coefficient. From the work by Leung et al. [11], one can see that when there is a uniform flow of low velocity (less than 1 cm/s), the particle size can be obtained with an acceptable error using the conventional intensity correlation functions. Acoustic streaming typically has a velocity of less than 1 mm/s [24]. That is why, for sub-micrometer particles, the diffusion is fast enough that the effect of acoustic streaming is negligible.
Taking advantage of the analyzed reference system, we studied the US-induced VPT of PNIPAM microgels. As shown in Figure 4a, the hydrodynamic diameter of the microgels decreases from 690 nm in the swollen state to 232 nm in the collapsed state upon imposing US over time. This is the same size as achieved by increasing the temperature above the VPTT (Figure 4b), but with much faster kinetics. The reduction of the microgel size after 10 seconds of actuation implies fast dehydration upon imposing US. However, the larger error bars during the transient size change may arise from two factors: the inhomogeneity of the particle sizes (collapsed and swollen microgels) within the measured spot, and the measurement duration (about 10 seconds), which falls within the continuous actuation period.
Figure 2: Normalized autocorrelation function (NACF) of (a) 200 nm particles at different input voltages and (b) particles with different diameters at 300 mV input voltage. In both cases, the frequency of US is set to 255 kHz.
## V Conclusion
In this work, we developed a DLS characterization of silica particles under the influence of US. In the theory part, we rewrite the continuity equation based on the time-dependent velocity of particles due to ultrasonic waves. Then, we derive a modified intensity autocorrelation function for dilute nanospheres undergoing Brownian diffusion as well as ultrasonic vibration. The resulting model gives valuable information about the vibrational behavior of the particles in addition to the Brownian diffusion coefficient. The experimental work is performed using a US-DLS setup for silica particles of different sizes (from 80 nm to 1 \(\upmu\)m) at different US frequencies (40 kHz to 5.4 MHz) and amplitudes. The findings with silica particles indicate that any potential disturbances, such as acoustic streaming, which could impact the size estimation of particles in DLS experiments, can be excluded. Therefore, the particle size can be correctly estimated even with the conventional fitting parameters applied to the mean value of the oscillating autocorrelation data points. Furthermore, we are able to successfully extract the particle vibration amplitude and frequency in addition to the size. This is because the frequency of particle vibration is high enough that the vibration and diffusion time scales are decoupled. This method is important for the characterization of macromolecules and polymers whose size and behavior subjected to US are of
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Particle diameter (nm)** & **80** & **200** & **500** & **1000** \\ \hline
**Vibration amplitude \(r_{0}\) (nm)** & 61.95\(\pm\)8.35 & 10.7\(\pm\)1.6 & 14.06\(\pm\)2.01 & 4.68\(\pm\)0.66 \\ \hline \end{tabular}
\end{table}
Table 1: Vibration amplitude of nanoparticles subjected to 255 kHz and 300 mV US, extracted from the NACF.
Figure 4: Hydrodynamic diameter of PNIPAM microgel (5 mol% cross-linker content), exhibiting the VPT due to both a) temperature change and b) ultrasonic actuation at room temperature (22\({}^{\circ}\) C). The US frequency and input voltage were set to 5.4 MHz and 400 mV respectively.
Figure 3: Extracted (a) diameter of silica particles and (b) frequency of particle vibration from the NACF.
interest. We used the developed setup to evaluate the shrinking behavior of PNIPAM microgels subjected to US. Our findings demonstrate that PNIPAM microgels are a notable example of acousto-responsive polymer networks. Their VPT in response to US holds great potential for applications in drug delivery systems. In addition, US is a much faster trigger than a temperature change.
## Acknowledgement
Financial support from German Research Foundation (DFG)-- Project Number (460540240) -- is acknowledged.
|
2310.11988 | Emergent non-Hermitian models | The Hatano-Nelson and the non-Hermitian Su-Schrieffer-Heeger model are
paradigmatic examples of non-Hermitian systems that host non-trivial boundary
phenomena. In this work, we use recently developed graph-theoretical tools to
design systems whose isospectral reduction -- akin to an effective Hamiltonian
-- has the form of either of these two models. In the reduced version, the
couplings and on-site potentials become energy-dependent. We show that this
leads to interesting phenomena such as an energy-dependent non-Hermitian skin
effect, where eigenstates can simultaneously localize on either ends of the
systems, with different localization lengths. Moreover, we predict the
existence of various topological edge states, pinned at non-zero energies, with
different exponential envelopes, depending on their energy. Overall, our work
sheds new light on the nature of topological phases and the non-Hermitian skin
effect in one-dimensional systems. | Lumen Eek, Anouar Moustaj, Malte Röntgen, Vincent Pagneux, Vassos Achilleos, Cristiane Morais Smith | 2023-10-18T14:19:58Z | http://arxiv.org/abs/2310.11988v1 | # Emergent non-Hermitian models
###### Abstract
The Hatano-Nelson and the non-Hermitian Su-Schrieffer-Heeger model are paradigmatic examples of non-Hermitian systems that host non-trivial boundary phenomena. In this work, we use recently developed graph-theoretical tools to design systems whose isospectral reduction--akin to an effective Hamiltonian--has the form of either of these two models. In the reduced version, the couplings and on-site potentials become energy-dependent. We show that this leads to interesting phenomena such as an energy-dependent non-Hermitian skin effect, where eigenstates can simultaneously localize on either ends of the systems, with different localization lengths. Moreover, we predict the existence of various topological edge states, pinned at non-zero energies, with different exponential envelopes, depending on their energy. Overall, our work sheds new light on the nature of topological phases and the non-Hermitian skin effect in one-dimensional systems.
## I Introduction
In recent years, the study of non-Hermitian physics has gained significant attention due to its profound impact on the fields of condensed matter, meta-materials, acoustics, and photonics [1; 2; 3; 4]. Indeed, non-Hermitian platforms offer enhanced sensing capabilities [1], can exhibit Majorana bound states near exceptional points [2], and provide opportunities for utilizing topological edge modes in the field of active matter [3; 4]. In addition, the non-conservative and non-unitary dynamics of non-Hermitian systems have led to the discovery of phenomena that challenge the conventional notions of symmetry and stability [5; 6; 7; 8]. Among these, the non-Hermitian skin effect (NHSE), manifesting itself as an accumulation of modes at the boundaries of the system, has been intensely studied in the past few years [9; 10; 11; 12; 13; 14]. The NHSE has been realized in multiple platforms, such as acoustic crystals [12], electric circuits [13], and optical lattices using ultra-cold atoms [15]. Moreover, the interplay between non-Hermitian physics and topology has given rise to novel topological phases [7; 16; 17; 18]. Indeed, when considering Hamiltonians that are no longer Hermitian [7; 19], the Altland-Zirnbauer topological classification for non-interacting fermions is enlarged from 10 to 38 classes. Additionally, the bulk-boundary correspondence generally no longer holds, and requires substantial modifications to account for boundary phenomena [5; 20].
The investigation of toy models has been instrumental in shaping a theoretical comprehension of non-Hermitian systems. The Hatano-Nelson and the non-Hermitian Su-Schrieffer-Heeger (NH SSH) models [21; 22; 23], for instance, have become paradigmatic examples of systems hosting the NHSE and a non Bloch bulk-boundary correspondence, respectively. Upon departing from these idealized models, the same phenomena may take place, but may be more difficult to describe. One way to bridge this difficulty is to reduce the complicated problem into one described by simpler models, with additional features revealed through the reduction process.
An interesting technique--originally introduced for the analysis of graphs and network models--that may be used for this purpose is the so-called isospectral reduction (ISR) [24]. The idea behind the ISR is to reduce the matrix dimensionality whilst preserving the spectrum of the original Hamiltonian \(H\). This is achieved by recasting the original linear eigenvalue problem into a nonlinear one. The reduced dimensionality simplifies certain tasks and may, in particular, reveal hidden structures of the system [25]. Pivoting around these favourable properties, the ISR has been applied to different problems, for instance, to yield better eigenvalue approximations [26] or to study pseudo-spectra of graphs and matrices [27]. In physics, the ISR is often encountered in the form of an effective Hamiltonian. One example of this is the Brillouin-Wigner perturbation theory, where the partitioning is done in terms of degenerate subspaces of an unperturbed Hamiltonian [28]. Another example would be integrating out degrees of freedom, where the partitioning is done in Fock space, or integrating out high momentum modes [29]. In that context, the reduction provides a suitable starting point for perturbation theory.
In the last few years, the ISR has also been applied to uncover hidden--so-called latent--symmetries [30; 31]. Latent symmetries become apparent after reduction and have been studied in a number of applications, including quantum information transfer [32], the design of lattices with flat bands [33], or the explanation of accidental degeneracies [34]. Very recently, latent symmetries have also been explored in waveguide networks [35; 36], including a possible application in secure transfer of information [37].
In this work, we propose to apply the ISR to a range of one-dimensional (1D) non-Hermitian tight-binding models, such that they reduce to the paradigmatic
Hatano-Nelson and the NH SSH models. This method allows us to predict the existence of various topological phases and non-standard NHSE as well as to uncover various properties, which were hitherto still unexplored. As an example of the unusual characteristics of this class of models, our approach reveals that they exhibit an energy (or frequency)-dependent NHSE, where eigenstates can localize on either end of the systems. The degree of localization of the NHSE is also influenced by this energy dependence. A similar behavior was recently observed in a system of coupled ring-resonators [38, 39], for which we extend the theoretical understanding. In addition, we find various topological states pinned at different non-zero energies, protected by a latent spectral symmetry that is only revealed upon applying the ISR. As a consequence of this energy dependence, their exponential envelopes vary, a feature that is also straightforwardly explained, and predicted, upon using the ISR. Throughout this work, we restrict our attention to systems from which the Hatano-Nelson [21] and the NH SSH [22, 23] models emerge. It should be noted that one could also engineer other types of systems, from which different models would emerge through the ISR. Indeed, the special case of an asymmetric, Hermitian system from which the conventional SSH model emerges, has been very recently demonstrated [40].
This article is structured as follows: in Section II, we lay down the main tool used for the analysis of our models. That is the ISR, which amounts to the construction of an effective Hamiltonian model that is already well understood. This is used on a minimal example, where we do not a-priori expect non-reciprocity to be present. The ISR allows for an intuitive understanding of the reason why the NHSE would arise in such a setup. In Section III, we extend our analysis to a slightly more complex case, and apply the ISR to a quasi one-dimensional system, resulting in an "emergent" Hatano-Nelson model [21]. In this setup, we are able to predict the existence of an energy-dependent NHSE. In Section IV, we add a connection between the unit cells that leads to an "emergent" NH SSH model [23], from which a full understanding of the topological phases can be drawn. In Section V, we generalize the construction principle for which the analysis done for the previous models can be applied. Finally, in Section VI, we conclude by summarizing our results.
## II Isospectral reduction
Given a Hamiltonian \(H\), it is possible to partition a choice of basis into a set \(S\) and its complement \(\overline{S}\), so that \(H\) can be written in block-form as
\[H\equiv\begin{pmatrix}H_{S,S}&H_{S,\overline{S}}\\ H_{\overline{S},S}&H_{\overline{S},\overline{S}}\end{pmatrix}\,. \tag{1}\]
By partitioning the eigenvalue problem \(H\ket{\psi}=E\ket{\psi}\) [where \(\ket{\psi}\equiv(\ket{\psi_{S}},\ket{\psi_{\overline{S}}})^{T}\)] into the different subsets \(S\) and \(\overline{S}\) and subsequently eliminating \(\ket{\psi_{\overline{S}}}\), we obtain the non-linear eigenvalue problem
\[\mathcal{R}_{S}(E,H)\,\ket{\psi_{S}}=E\ket{\psi_{S}}\,. \tag{2}\]
Here,
\[\mathcal{R}_{S}(E,H)=H_{S,S}-H_{S,\overline{S}}\left(H_{\overline{S},\overline {S}}-E\mathbb{1}\right)^{-1}H_{\overline{S},S}\,, \tag{3}\]
is the effective Hamiltonian for the subsystem \(S\). In the language of graph theory, \(\mathcal{R}_{S}(E,H)\) is known as the ISR of \(H\) to \(S\) [24]. An overview of the application of the ISR to physical systems is given in Ref. [41].
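A minimal numerical sketch of Eq. (3) (an illustration, not code from the original work): for any Hamiltonian matrix and index set \(S\), it returns the energy-dependent effective Hamiltonian.

```python
import numpy as np

def isospectral_reduction(H, S, E):
    """Eq. (3): R_S(E,H) = H_SS - H_S,Sbar (H_Sbar,Sbar - E*1)^(-1) H_Sbar,S."""
    S = list(S)
    Sbar = [i for i in range(H.shape[0]) if i not in S]
    inv = np.linalg.inv(H[np.ix_(Sbar, Sbar)] - E * np.eye(len(Sbar)))
    return H[np.ix_(S, S)] - H[np.ix_(S, Sbar)] @ inv @ H[np.ix_(Sbar, S)]
```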
In this work, we show how the ISR allows us to understand the behavior of systems without making approximations. This is done by recognizing that the ISR of a system may yield another (energy-dependent) known model, which is well understood. In this case, the properties of the known reduced system can be used to make predictions for the full system. We shall now illustrate this by means of a simple but important example.
A conventional and intuitive reason for the NHSE to appear is understood through the lens of nonreciprocity. An exemplary illustration of this phenomenon can be found in the Hatano-Nelson model [21]. Alternatively, the NHSE can be induced by a combination of on-site gain/dissipation and complex couplings, but this mechanism may appear less intuitive [42]. Consider the system depicted on the left-hand side of Fig. 1. It is a three-site non-Hermitian tight-binding Hamiltonian, with complex hopping parameters and one site featuring an imaginary on-site term. Specifically, if we enumerate the sites such that \(u,v\) are the first two, the Hamiltonian is given by
\[H=\begin{pmatrix}0&1&e^{i\phi}\\ 1&0&1\\ e^{-i\phi}&1&i\alpha\end{pmatrix}\,.\]
Through an ISR to the red sites, the resulting effective model on the right-hand side is obtained. This reduced model exhibits a new on-site potential and hopping amplitudes, given by
\[A(E) =\frac{1}{E-i\alpha},\] \[T_{\pm}(E) =1+\frac{e^{\pm i\phi}}{E-i\alpha},\]
Figure 1: Isospectral reduction of the lossy, complex hopping model on the left to the red sites \(S=\{u,v\}\) yields the nonreciprocal effective model on the right. Circles denote sites, and lines denote couplings.
respectively. Notice how the hopping displays asymmetry in its magnitude, i.e., \(|T_{+}(E)|\neq|T_{-}(E)|\), thereby indicating non-reciprocity within the model, in the same way as the Hatano-Nelson model. By employing an ISR, both avenues for realizing the NHSE--either directly by non-reciprocal couplings, or through reciprocal couplings but non-Hermitian on-site potentials--can be unified.
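As a numerical check of this reduction (with illustrative values of \(\phi\) and \(\alpha\), not taken from the paper), every eigenvalue \(E\) of the full three-site Hamiltonian solves the nonlinear problem \(\det[\mathcal{R}_{S}(E,H)-E\mathbb{1}]=0\) when the reduced matrix is built from the quoted \(A(E)\) and \(T_{\pm}(E)\), and the two reduced couplings indeed differ in magnitude:

```python
import numpy as np

phi, alpha = 0.7, 0.5    # illustrative values
H = np.array([[0.0, 1.0, np.exp(1j * phi)],
              [1.0, 0.0, 1.0],
              [np.exp(-1j * phi), 1.0, 1j * alpha]])

def reduced(E):
    """2x2 reduction onto u, v: on-site A(E) and couplings T_+(E), T_-(E)."""
    A = 1.0 / (E - 1j * alpha)
    Tp = 1.0 + np.exp(1j * phi) / (E - 1j * alpha)
    Tm = 1.0 + np.exp(-1j * phi) / (E - 1j * alpha)
    return np.array([[A, Tp], [Tm, A]])

for E in np.linalg.eigvals(H):                 # isospectrality of the reduction
    print(abs(np.linalg.det(reduced(E) - E * np.eye(2))))   # ~ 0 for every eigenvalue

R = reduced(1.0)                               # sample energy
print(abs(R[0, 1]), abs(R[1, 0]))              # unequal: non-reciprocal couplings
```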
In the following sections, we will modify our prototypical model slightly in order for interesting, emergent phenomena to take place. By considering a one-dimensional chain of fully connected four-site models instead of three-site ones, like in Fig. 1, we are able to obtain an energy-dependent skin effect that induces localization on both sides of the system.
## III Emergent Hatano-Nelson model
Let us consider the model depicted in Fig. 2(a). Taking the unit cell as indicated in the figure, the Bloch Hamiltonian of this lattice is given by
\[H(k)=\begin{pmatrix}i\varepsilon_{a}-2t_{2}\sin k&t_{1}+t_{2}e^{-ik}&t_{2}+t_ {1}e^{-ik}\\ t_{1}+t_{2}e^{ik}&i\varepsilon_{b}&it_{3}\\ t_{2}+t_{1}e^{ik}&-it_{3}&i\varepsilon_{b}\end{pmatrix}. \tag{4}\]
Here \(t_{1}\), \(t_{2}\), and \(t_{3}\) are real-valued hopping parameters, \(\varepsilon_{a}\) and \(\varepsilon_{b}\) are on-site gains or losses, and \(k\) is the wave vector. Furthermore, we have set the lattice spacing to be equal to unity.
We note at this point that the spectrum has a mirror symmetry with respect to the \(\text{Re}(E)=0\) line; cf. Figs. 2(b) to 2(d). The mirror-symmetric spectrum stems from the fact that
\[\begin{split}\mathcal{S}H(k)\mathcal{S}^{-1}&=-H^{*}(- k),\\ \mathcal{S}&=(-\sigma_{z})\oplus 1,\end{split} \tag{5}\]
where \(\sigma_{z}\) acts on the two sites of the unit cell. In the literature, this symmetry is better-known as \(PHS^{\dagger}\)[7], which is one of the two non-equivalent realizations of particle-hole symmetry in a non-Hermitian system.
If we simultaneously perform an ISR to all red sites of the full lattice, and take the Bloch-Hamiltonian of the resulting effective model, we obtain
\[H_{R}(k,E)=A(E)+T_{+}(E)e^{ik}+T_{-}(E)e^{-ik}, \tag{6}\]
in which we recognize the Hatano-Nelson model with energy dependent on-site term \(A(E)\), and hopping parameters \(T_{\pm}(E)\equiv v(E)\pm g(E)\). Here
\[A(E) =i\left[\varepsilon_{a}+\frac{2\left(\varepsilon_{b}+iE\right) \left(t_{1}^{2}+t_{2}^{2}\right)}{\left(\varepsilon_{b}+iE\right)^{2}+t_{3}^{2 }}\right],\] \[v(E) =2i\frac{\left(\varepsilon_{b}+iE\right)t_{1}t_{2}}{\left( \varepsilon_{b}+iE\right)^{2}+t_{3}^{2}}, \tag{7}\] \[g(E) =i\frac{t_{2}t_{3}\left(t_{3}-t_{2}\right)+t_{2}\left(\varepsilon _{b}+iE\right)^{2}+t_{1}^{2}t_{3}}{\left(\varepsilon_{b}+iE\right)^{2}+t_{3}^ {2}}.\]
For the ordinary Hatano-Nelson model, i.e., no energy dependent parameters, it is well-known that the NHSE is present when \(|T_{+}|\neq|T_{-}|\)[6; 21]. This condition still holds for our effective Hamiltonian. After some algebraic manipulations, it can be expressed as
\[v_{R}(E)g_{R}(E)+v_{I}(E)g_{I}(E)\neq 0. \tag{8}\]
Here, the subscripts \(R\) and \(I\) represent real and imaginary part, respectively. By substituting Eq. (7) into Eq. (8), it follows that the NHSE is present when \(t_{1}\), \(t_{2}\) and \(\varepsilon_{b}\) are all non-zero (see Appendix A).
Let us now visualize the above statements in terms of the eigenvalues and eigenstates. We start by inspecting a setup with \(t_{1}=0\) and show its eigenvalue spectrum in Fig. 2(b). The spectrum (denoted by a solid purple line) is the same for open boundary conditions (OBC) and periodic boundary conditions (PBC). The (right) eigenvectors are depicted in Fig. 2(e) and show, as expected, no NHSE. However, upon close inspection of Fig. 2(e), one can see a mode sitting at the right boundary. This is a consequence of lattice termination and is elaborated upon in Section V.1.
Leaving the trivial case behind us, we next investigate a setup where \(t_{1}\), \(t_{2}\) and \(\varepsilon_{b}\) are all non-zero. Specifically, we choose \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0,0.4,0.4,0.2,0.1)\). For this choice of parameters, the eigenvalue spectrum is shown in Fig. 2(c). The black line represents the PBC eigenvalue spectrum, which forms three simple loops in the complex energy plane. Importantly, the PBC spectrum now no longer coincides with the OBC spectrum (shown in blue). The background color in this figure represents a contour plot of the skin length scale [6]
\[\kappa(E)\equiv\log\sqrt{\left|\frac{T_{-}(E)}{T_{+}(E)}\right|}\,. \tag{9}\]
We note that \(\kappa\) is energy-dependent, which is a consequence of the energy-dependence of the system's hopping parameters. This energy-dependence is an important difference to the ordinary Hatano-Nelson model. There, \(\kappa\) is constant, such that all skin modes show the same length scale. In an emergent Hatano-Nelson model, on the other hand, each mode has its own skin length given by Eq. (9). In particular, a (right) PBC-eigenstate whose energy \(E\) lies in a region with \(\kappa<0\) (\(\kappa>0\)) will be localized at the system's left (right) boundary. Now, since all of the system's OBC eigenvalues correspond to \(\kappa<0\), we expect all of the eigenstates to be left-localized. This is indeed the case, as can be seen from Fig. 2(f).
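A small numerical sketch of Eqs. (7) and (9) (illustration only; the sample energy below is an arbitrary complex number, not an eigenvalue of the chain): for \(t_{1}=0\) the two effective hoppings have equal magnitude and \(\kappa\) vanishes identically, whereas for the parameters of Fig. 2(c) it does not.

```python
import numpy as np

def v_g(E, eps_b, t1, t2, t3):
    """Effective couplings of Eq. (7); T_pm(E) = v(E) +/- g(E)."""
    z = eps_b + 1j * E
    v = 2j * z * t1 * t2 / (z**2 + t3**2)
    g = 1j * (t2 * t3 * (t3 - t2) + t2 * z**2 + t1**2 * t3) / (z**2 + t3**2)
    return v, g

def kappa(E, eps_b, t1, t2, t3):
    """Skin length scale of Eq. (9)."""
    v, g = v_g(E, eps_b, t1, t2, t3)
    return np.log(np.sqrt(abs((v - g) / (v + g))))

E = 0.3 + 0.1j                                        # arbitrary sample energy
print(kappa(E, eps_b=0.4, t1=0.0, t2=0.5, t3=0.3))    # 0.0: no NHSE for t1 = 0
print(kappa(E, eps_b=0.4, t1=0.4, t2=0.2, t3=0.1))    # nonzero: NHSE present
```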
Let us now modify the parameters to realize the so-called bipolar NHSE [43]. A system with a bipolar NHSE features two classes of right eigenstates: One being localized at the left boundary, and the other localized at the right boundary. To find this phenomenon in our setup, we choose \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0,0.4,0.4,1,0.3)\). The system's eigenvalues are depicted in Fig. 2(d), which has an insect-like shape. Again, PBC (black) and OBC (blue/red) spectra do not coincide, as expected from the
fact that all \(t_{1}\), \(t_{2}\) and \(\varepsilon_{b}\) are non-vanishing. What is interesting here is that \(\kappa(E)\) can now take both positive and negative values. In particular, we see that Fig. 2(d) splits into two regions: An outer, blue region, where \(\kappa<0\), and an inner, red region, where \(\kappa>0\). These two regions are separated by the dashed-grey line that represents \(\kappa(E)=0\). Now, since the OBC spectrum lies both in the inner and the outer region, our system features a bipolar NHSE: (Right) eigenstates whose energy \(E\) lies in the blue region are left-localized, while eigenstates with \(E\) lying in the red region are right-localized. This is demonstrated in Fig. 2(g), where we show the system's right eigenstates, with blue/red color corresponding to the region in which the respective eigenvalue lies.
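A real-space sketch of this bipolar behaviour (illustration only, not the code behind Fig. 2; the Fourier convention \(H(k)=H_{0}+H_{+1}e^{ik}+H_{-1}e^{-ik}\), with \(H_{+1}\) the block connecting a cell to the next one, is an assumption, and flipping it merely exchanges left and right; the chain length is also arbitrary):

```python
import numpy as np

def obc_hamiltonian(Nc, eps_a, eps_b, t1, t2, t3):
    """Open chain assembled from the Bloch blocks of Eq. (4)."""
    H0 = np.array([[1j * eps_a, t1, t2],
                   [t1, 1j * eps_b, 1j * t3],
                   [t2, -1j * t3, 1j * eps_b]])
    Hp = np.array([[1j * t2, 0, 0],   # coefficient of exp(+ik)
                   [t2, 0, 0],
                   [t1, 0, 0]])
    Hm = Hp.conj().T                  # coefficient of exp(-ik): hoppings are reciprocal
    H = np.kron(np.eye(Nc), H0)
    H += np.kron(np.eye(Nc, k=1), Hp) + np.kron(np.eye(Nc, k=-1), Hm)
    return H

Nc = 20
H = obc_hamiltonian(Nc, 0.0, 0.4, 0.4, 1.0, 0.3)   # parameters of Fig. 2(d)
evals, evecs = np.linalg.eig(H)
cell = np.repeat(np.arange(Nc), 3)
weights = np.abs(evecs) ** 2
coms = (cell[:, None] * weights).sum(axis=0) / weights.sum(axis=0)
print(np.sum(coms < Nc / 2), np.sum(coms >= Nc / 2))   # modes centred in each half
```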
The use of the ISR is not limited to making predictions about the NHSE. On the contrary, it may also be used to explore the topological properties of a given system. To illustrate this feature, we turn our attention to a different, but related, model in the next section.
Figure 2: Emergent Hatano-Nelson model from ISR. (a) Lattice corresponding to the Bloch Hamiltonian given in Eq. (4) and its ISR to the Hatano-Nelson model with energy dependent hopping and an energy dependent on-site term. The unit cell of the left chain is indicated by the gray dashed lines and the sites are labeled. (b-d) The complex energy spectra are presented for the system described in (a) using different parameter values: (b) \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0,0.4,0,0.5,0.3)\); (c) \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0,0.4,0.4,0.2,0.1)\); and (d) \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0,0.4,0.4,1,0.3)\). The dots are color-coded, representing left localized (blue), right localized (red), or bulk-like (purple) modes, with black and colored lines indicating PBC and OBC, respectively. The background displays \(\kappa(E)\), where larger absolute values of \(\kappa\) indicate stronger localization of the corresponding skin mode. Dashed grey lines correspond to \(\kappa(E)=0\). (e-g) Right eigenstates are shown for the same parameter choices as the above panels, using the same coloring conventions.
## IV Emergent non-Hermitian SSH model
In this section, we study the system depicted in Fig. 3(a), which is a modified version of the Creutz ladder [44]. Each square forms a unit cell, and the unit cells are interconnected by a real hopping parameter \(w\), making this a four-band model. The momentum-space Hamiltonian for this system is given by
\[H(k)=\begin{pmatrix}i\varepsilon_{a}&t_{1}&t_{2}&it_{2}+we^{-ik}\\ t_{1}&i\varepsilon_{b}&-it_{3}&t_{2}\\ t_{2}&it_{3}&i\varepsilon_{b}&t_{1}\\ -it_{2}+we^{ik}&t_{2}&t_{1}&i\varepsilon_{a}\end{pmatrix}, \tag{10}\]
where all parameters are real-valued. The setup is similar to the one used by Lee [9] to show the existence of an anomalous edge state. Our model was chosen such that its ISR to the red sites in Fig. 3(a) results in an energy-dependent NH SSH model [45, 23], described by the following Bloch Hamiltonian:
\[H_{R}(k,E)=\begin{pmatrix}A(E)&T_{+}(E)+we^{-ik}\\ T_{-}(E)+we^{ik}&A(E)\end{pmatrix}. \tag{11}\]
Here
\[A(E)=i\left[\varepsilon_{a}+\frac{(\varepsilon_{b}+iE)\left(t_{1}^{2}+t_{2}^{ 2}\right)}{\left(\varepsilon_{b}+iE\right)^{2}+t_{3}^{2}}\right] \tag{12}\]
and \(T_{\pm}(E)\) are the same as given in Eqs. (6) and (7). This reduction is graphically depicted in Fig. 3(a). We observe
Figure 3: Emergent NH SSH model from ISR. (a) Lattice corresponding to the Bloch Hamiltonian given in Eq. (10) and its ISR to the NH SSH model with energy dependent hopping and an energy dependent on-site term. (b-d) The complex energy spectra are presented for the system described in (a) using different parameter values: (b) \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0.9,0.4,0,0.5,0.6)\); (c) \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0.9,0.4,1,0.5,0.9)\); and (d) \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0.9,0.4,0.2,0.5,0.2)\). In all figures we take \(w=1.8\). The dots are color-coded, representing left localized (blue), right localized (red), or bulk-like (purple) modes, with black and colored lines indicating PBC and OBC, respectively. The background displays \(\kappa(E)\), where larger absolute values of \(\kappa\) indicate stronger localization of the corresponding skin mode. Dashed grey lines correspond to \(\kappa(E)=0\). (e-g) Right eigenstates are shown for the same parameter choices as the above panels, using the same coloring conventions. We also plot the isolated topological modes in black.
that our model features a rich variety of phases, from the NHSE to topological edge modes. We note that this model enjoys the same \(PHS^{\dagger}\) symmetry, Eq. (5), as the previous three-band model. However, \(\mathcal{S}\) must now be built from a different partitioning and is given by \(\mathcal{S}=(-\sigma_{z})\oplus\mathbbm{1}_{2\times 2}\).
### Onset of the NHSE
Similar to the Hatano-Nelson model, the NHSE is present in the NH SSH model whenever \(|T_{+}|\neq|T_{-}|\). By analogy, for our emergent NH SSH model this results in the constraint \(v_{R}(E)g_{R}(E)+v_{I}(E)g_{I}(E)\neq 0\). In terms of the model parameters, this leads to the condition that the NHSE is present when \(t_{1}\), \(t_{2}\), and \(\varepsilon_{b}\) are all non-zero. This is illustrated in Fig. 3, where the three possible scenarios are depicted. First, Fig. 3(b) shows the case without skin effect, clearly indicated by similar band structures for OBC and PBC, and the corresponding right eigenstates in Fig. 3(e). Fig. 3(c) shows the band structure when the skin effect is present, but only in one direction, as indicated by \(\kappa(E)>0\). The modes localize on the right-hand side, as shown in Fig. 3(f). Finally, Fig. 3(d) shows the band structure when the bipolar skin effect is present, which can be understood from the contour plot of \(\kappa(E)\), showing regions of both \(\kappa>0\) (red) and \(\kappa<0\) (blue). In all three situations, one can observe the presence of six topological edge modes, coming in three pairs of degenerate modes, each pair pinned at the same energy. These are shown in black in Figs. 3(e)-3(g). We will now investigate the properties of these topological modes.
### Topological Edge Modes
Interestingly, we can also predict the existence of topological edge modes in the four-band model using the reduced NH SSH chain. The winding number that determines the topological phase transition for a sublattice-symmetric 1D Hamiltonian (of which the NH SSH is an example) is given by [7]
\[\mathcal{W}=\int_{-\pi}^{\pi}\frac{dk}{4\pi i}\operatorname{Tr}\left[\sigma_ {z}H^{-1}(k)\frac{dH(k)}{dk}\right]. \tag{13}\]
In our case, this expression becomes energy dependent and is only applicable when
\[A(E)=E, \tag{14}\]
where \(A(E)\) is defined in Eq. (12). This is because Eq. (13) is only well-defined for sublattice-symmetric systems, which in our case is a latent symmetry appearing at energies satisfying Eq. (14). This means that we must consider a Hamiltonian \(\tilde{H}(k)\equiv H(k,E_{t})-E_{t}\mathbbm{1}_{2\times 2}\), where \(E_{t}\) is a solution of Eq. (14). For every energy satisfying this constraint, in the topological phase, there is a degenerate pair of edge states pinned at that energy. In fact, the pair is quasi-degenerate, as a consequence of the finite size of the lattice. For the model at hand, there are three energies at which the transition takes place because Eq. (14) has three solutions. Explicit calculations of this winding number (see Appendix A for an analytic derivation) lead to
\[\mathcal{W}(E_{t})=\begin{cases}0,&\text{if }|w|<\sqrt{|v^{2}(E_{t})-g^{2}(E_{t}) |}\\ 1,&\text{if }|w|>\sqrt{|v^{2}(E_{t})-g^{2}(E_{t})|}\end{cases}. \tag{15}\]
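As a numerical companion to Eqs. (14) and (15) (an illustrative re-computation, not the authors' code; the parameter values are those quoted for Fig. 4), the condition \(A(E)=E\) can be cleared of denominators to give a cubic polynomial in \(E\), whose three roots are the transition energies \(E_{t}\); each root then yields a critical coupling \(w_{c}=\sqrt{|v^{2}(E_{t})-g^{2}(E_{t})|}\):

```python
import numpy as np
from numpy.polynomial import Polynomial

eps_a, eps_b, t1, t2, t3 = 0.9, 0.4, 1.0, 0.5, 0.6   # parameter choice of Fig. 4

E = Polynomial([0, 1])                    # the variable E
z = eps_b + 1j * E
den = z**2 + t3**2
# A(E) = E  <=>  i*eps_a*den + i*z*(t1^2 + t2^2) - E*den = 0   (cubic in E)
p = 1j * eps_a * den + 1j * z * (t1**2 + t2**2) - E * den
E_transitions = p.roots()                 # the three complex transition energies

for Et in E_transitions:
    zz = eps_b + 1j * Et
    v = 2j * zz * t1 * t2 / (zz**2 + t3**2)                                   # Eq. (7)
    g = 1j * (t2 * t3 * (t3 - t2) + t2 * zz**2 + t1**2 * t3) / (zz**2 + t3**2)
    w_c = np.sqrt(abs(v**2 - g**2))       # Eq. (15): an edge-mode pair for |w| > w_c
    print(Et, w_c)
```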
Substituting the solutions \(E_{t}\) into Eq. (15) yields the three critical values \(w_{c}\) at which pairs of topological edge modes appear. Figs. 4(a) and 4(b) show the real and imaginary parts of the energy spectrum, respectively. The horizontal dashed lines show the calculated absolute values of the complex transition energies \(E_{t}^{(j)}=E_{R}^{(j)}+iE_{I}^{(j)}\), while the vertical dashed lines indicate the value of the predicted critical hopping parameter \(w_{c}\). Robust boundary modes that persist beyond the transition point are visible in red, green and blue. The
Figure 4: Topological phase transitions of the emergent NH SSH model as a function of \(w\). (a) Real and (b) imaginary part of the energy spectrum, for an open chain consisting of \(N_{c}=35\) unit cells. The spectra are taken for the parameter choice \((\varepsilon_{a},\varepsilon_{b},t_{1},t_{2},t_{3})=(0.9,0.4,1,0.5,0.6)\). (c) The winding number given by Eq. (13), calculated at the three special energies \(E_{t}\), clearly shows its quantization and the critical points \(w\). (d-f) Corresponding topological edge modes, at \(w=3\), plotted together with the calculated exponential envelope in gray, with penetration depth given by Eq. (17). The three different colors blue, green, red are used consistently to mark the different edge modes [(d) to (f)], the behavior of their energies [(a) and (b)], and the values of the corresponding winding number (c).
presence of these modes can be quantified by calculating the winding number given by Eq. (15), as shown in Fig. 4(c). There is a clear jump to \(\mathcal{W}(E_{t})=1\) when the critical hopping \(w_{c}\) is reached. Figs. 4(d)-4(f) show the corresponding edge states, with the same color coding, at \(w=3\). The values of the calculated transition energies \(E_{t}\) are indicated in the middle of each figure. Notice that to properly visualize the edge modes in the presence of the NHSE, \(\sqrt{|\psi_{L}^{\ast}\psi_{R}|}\) is plotted rather than \(|\psi_{R}|\). Moreover, it is important to highlight that, for each transition energy, the emergent sublattice symmetry of the reduced model ensures that the edge modes appear in pairs. This is visible in Figs. 4(d)-4(f), where a pair of edge states is shown for each energy \(E_{t}\). For a further investigation of the line-gap closings of the emergent NH SSH model, we refer the reader to Appendix B. As a final note, we see that the penetration depth of these edge states is also energy dependent, and is given by [5; 6]
\[\xi_{L}(E) =\frac{1}{\log\left|\frac{v(E)-g(E)}{w}\right|}, \tag{16}\] \[\xi_{R}(E) =\frac{1}{\log\left|\frac{v(E)+g(E)}{w}\right|},\]
where the subscript \(L\) (\(R\)) stands for left (right) eigenvectors. This explains the different localization lengths observed in Figs. 4(d)-4(f). In the biorthogonal formulation the exponential envelope follows \(\exp\{-x_{j}/2\xi_{LR}\}\), where \(x_{j}=ja\) and
\[\xi_{LR}=\frac{\xi_{L}+\xi_{R}}{\xi_{R}\xi_{L}}. \tag{17}\]
This envelope is plotted in grey alongside the edge states in Figs. 4(d)-4(f).
## V Generalized construction principles
The models treated in the previous sections are individual examples of setups whose ISR has the form of an effective Hatano-Nelson or NH SSH model. In the following, we will show that one can systematically construct large families of such systems. The procedure is always the same: an individual unit cell is built such that its ISR onto two specific sites yields equal on-site potentials and nonreciprocal hoppings between them. Subsequently, these unit cells are connected such that either (i) a Hatano-Nelson or (ii) an NH SSH model is obtained.
### Emergent Hatano-Nelson models
#### V.1.1 Construction principle A
The starting point for the first construction principle is a finite structure that has a non-reciprocal ISR. The graphical representation of the model is sketched in Fig. 5(a). The system consists of two lower sites \(u\) and \(v\) (marked in red), which are connected to a third site \(c\) via Hermitian couplings \(\exp(i\phi)\) and \(1\). The site \(c\) has complex on-site potential \(i\alpha\) and is further coupled to a (possibly very large) network \(C\). For simplicity, we demand that the couplings in the network \(C\), as well as the couplings between this network and the site \(c\) are real-valued. However, the sites in \(C\) could have complex on-site potentials. Denoting the Hamiltonian of the resulting total system by \(H\), its ISR to the two sites \(u\) and
Figure 5: (a) ISR of the lossy, complex hopping model above onto the red sites yields the non-reciprocal effective model below. (b) Lattice realization of (a). The unit cell is marked by a dashed line.
Figure 6: Different systems with latently symmetric sites (marked in red). In all four sytems, each line corresponds to a coupling of strength one (see Refs. [32; 46] for more details regarding the design of latently symmetric setups).
\(v\) yields (see Appendix C)
\[\mathcal{R}_{S}(E,H)=\begin{pmatrix}A(E)/2&T_{+}(E)\\ T_{-}(E)&A(E)/2\end{pmatrix}\,.\]
Note that the exact form of \(A(E)\) depends on the details of the network \(C\).
This finite building block is now used to construct a lattice, as shown in Fig. 5(b), with the unit cell comprised of one site \(c\), one network \(C\), and _one of the two_ red sites. Applying the ISR to all red sites, a Hatano-Nelson model emerges, as depicted in the lower part of Fig. 5(b). Importantly, since each red site of the lattice is coupled to two clouds, its on-site potential after the ISR now reads \(2\cdot A(E)/2=A(E)\). In an open chain, the sites on the left and right end, however, will have an on-site potential of \(A(E)/2\)[47].
#### V.1.2 Construction principle B
The second construction principle of emergent Hatano-Nelson models relies on the concept of latent symmetry [30]. Given a system \(G\), two sites \(S=\{u,v\}\) are latently reflection symmetric if the ISR over them has the form
\[\mathcal{R}_{S}(E,G)=\begin{pmatrix}\mathcal{A}(E)&\mathcal{B}(E)\\ \mathcal{B}(E)&\mathcal{A}(E)\end{pmatrix}\,, \tag{18}\]
that is, if \(\mathcal{R}_{S}(E,H)\) commutes with the permutation matrix \(P:=\sigma_{x}\). In Fig. 6, a number of setups with latently symmetric sites (marked in red) are shown. A broader overview over this topic is given in Ref. [41].
The construction scheme is sketched in Figs. 7(a) to 7(c). Fig. 7(a) depicts a real-symmetric subsystem \(G\) (marked by a cloud) in which two sites \(u\) and \(v\) (marked in blue) are latently symmetric. In other words, if one would perform the ISR over \(u,v\), one would obtain Eq. (18). The key here is that the latent symmetry guarantees the existence of a matrix \(Q\) that commutes with the subsystem, i.e. \(QG=GQ\), and which (i) permutes the sites \(u\) and \(v\), (ii) is block-diagonal, and (iii) fulfills \(Q^{-1}=Q^{T}=Q\)[34]. In the following, we shall use this matrix extensively.
In the next step, this subsystem is modified by adding complex on-site potentials \(i\epsilon\) to \(u\) and \(v\), which are then connected via Hermitian hoppings \(it_{3}\). Note that this breaks the latent symmetry: If we denote the resulting modified subsystem by \(G^{\prime}\), then the isospectral reduction of \(G^{\prime}\) over \(u,v\) would read
\[\mathcal{R}_{S}(E,G^{\prime})=\begin{pmatrix}\mathcal{A}(E)+i\epsilon& \mathcal{B}(E)+it_{3}\\ \mathcal{B}(E)-it_{3}&\mathcal{A}(E)+i\epsilon\end{pmatrix}\,,\]
which does not commute with \(P\). However, we have \(\mathcal{R}_{S}(E,H)\,P=P\,\mathcal{R}_{S}(E,H)^{T}\). Due to the favourable
Figure 7: Construction scheme for the emergent Hatano-Nelson and NH SSH models. (a) The starting point: A simple setup with latently symmetric sites \(u,v\). (b-d) ISR of the lossy, complex hopping model above onto the red sites yields the non-reciprocal effective model below.
properties of \(Q\), in particular its block-diagonal form, it can be shown that \(QG^{\prime}=G^{\prime T}Q\).
At this point, two additional sites \(a\) and \(b\) (marked in red) are coupled to the subsystem \(G^{\prime}\) with hoppings \(t_{1}\) and \(t_{2}\). Again employing the favourable properties of \(Q\), it can be easily shown that the Hamiltonian \(H\), describing the total system depicted in Fig. 7(b), obeys \(Q^{\prime}H=H^{T}Q^{\prime}\). Here, the matrix \(Q^{\prime}=P\oplus Q\), with the permutation matrix \(P\) acting on the two red sites \(a\) and \(b\). Analogously, it can be shown that the ISR has the form
\[\mathcal{R}_{S}(E,H)=\begin{pmatrix}A(E)/2&T_{+}(E)\\ T_{-}(E)&A(E)/2\end{pmatrix}\,.\]
Note that one can relate \(\mathcal{A}(E),\mathcal{B}(E)\) to \(A(E),T_{\pm}(E)\), though we omit the exact relation here.
Again, a lattice can be built by taking one red site and one subsystem \(G^{\prime}\) as a unit cell, see Fig. 7(c). Taking the ISR to all red sites of this lattice doubles the on-site potential, which then becomes \(A(E)\) instead of \(A(E)/2\).
### Emergent NH SSH model
In the previous Section V.1.1, lattices were built by taking one red site and one subsystem \(G^{\prime}\) as a unit cell, which resulted in emergent Hatano-Nelson models. One could, however, also build a lattice by taking one subsystem \(G^{\prime}\) and _two_ red sites as a unit cell, and then connect neighbouring unit cells via an additional coupling \(w\), as shown in the upper part of Fig. 7(d). This results in an emergent NH SSH model, which is depicted in the lower part of Fig. 7(d). Note that, by removing all complex couplings
Figure 8: Example of a generalized construction, with a network of eight sites connected to the two sites to which the ISR is applied. (a) The model and its ISR. (b) PBC (black) and OBC (right NHSE in red, left NHSE in blue) spectra of a system with the following parameters \((a,b,t_{1},t_{2},t_{3},t,w)=(1.19,3.74,1.18,2.97,4.24,2.03,2.5)\), together with the three predicted edge mode energies (with double degeneracy) shown with green circles at this particular \(w\) value. Once again, the contour plot shows the skin depth \(\kappa(E)\), with its values shown on the color bar on the left. (c) Same as (b), but with \(w=6.5\), where we now see all possible edge modes appear, at 9 different energies, giving a total of 18 edge modes. (d) Right-eigenstate amplitudes corresponding to the parameter choice given in (b). (e-g) Amplitudes of all six eigenstates (two in each figure) that appear in (b), and their corresponding energies, shown in the biorthogonal basis.
and on-site potentials, one would obtain an effective version of the conventional SSH model. Such emergent SSH models have been very recently investigated in Ref. [40].
Before concluding this work, we investigate a specific realization of the above procedure that results in an emergent NH SSH model. The setup and its ISR are depicted in Fig. 8(a). The resulting OBC (red and blue) and PBC (black) spectra are shown in Figs. 8(b) and 8(c), where the intercell hopping parameter is \(w=2.5\) in (b) and \(w=6.5\) in (c). This leads to the appearance of six topological edge states in (b), and eighteen in (c) (two degenerate modes per energy). The overlaid transparent green circles indicate the presence of these edge modes in the OBC spectrum. The right eigenstates corresponding to the parameter choice in (b) are shown in Fig. 8(d). There, one can again observe the energy-dependent skin effect. Figs. 8(e)-8(g) show all six edge states that exist in (b), and their corresponding energies. Note that, since there are nine solutions to the equation \(A(E)-E=0\), the total number of possible edge states is eighteen. The double degeneracy of each energy solution is, once again, a result of the emergent sublattice symmetry.
## VI Conclusion
Across many branches of physics, toy models are an essential tool to understand the key features of a given theory. In non-Hermitian physics, two such models are the Hatano-Nelson and the non-Hermitian Su-Schrieffer-Heeger (NH SSH) model. Despite their simple structure--their unit cells comprise only a single or two sites, respectively--these one-dimensional models host non-trivial boundary phenomena. In this work, we have used recent graph-theoretical insights to design systems whose so-called isospectral reduction--akin to an effective Hamiltonian--takes the form of either of these models. This procedure keeps the structure of the toy model, while simultaneously enriching it by making the couplings and on-site potentials energy-dependent. Specifically, we have shown that this leads to _emergent Hatano-Nelson_ or _emergent NH SSH models_ featuring a two-sided non-Hermitian skin effect, caused by an energy dependence of the skin localization length \(\kappa(E)\). This energy-dependence allows different states to be localized on different ends of the system, and to have different localization strengths. For the emergent NH SSH models, we observed topological edge modes which we could further link to a quantization of the winding number. In all cases, the original system--whose isospectral reduction becomes a Hatano-Nelson or NH SSH model--features only reciprocal (though complex-valued) couplings, with non-Hermiticity entering through complex on-site potentials (gain/loss).
We emphasize that the methods and ideas presented in this work are not limited to one-dimensional non-Hermitian Hamiltonians, but are rather generic. For example, they can be applied to higher-dimensional setups, which reduce under the isospectral reduction to paradigmatic models. An interesting avenue to explore would be the realization of these construction principles in different platforms, such as photonic or acoustic waveguides, electric circuits, or mechanical metamaterials.
###### Acknowledgements.
A.M. and C.M.S. acknowledge the TOPCORE project with project number OCENW.GROOT.2019.048 which is financed by the Dutch Research Council (NWO). L.E. and C.M.S. acknowledge the research program "Materials for the Quantum Age" (QuMat) for financial support. This program (registration number 024.005.006) is part of the Gravitation program financed by the Dutch Ministry of Education, Culture and Science (OCW). V.A. is supported by the EU H2020 ERC StG "NASA" Grant Agreement No. 101077954. L.E. and A.M. contributed equally to this work.
|
2306.02974 | Enhancement of quantum gravity signal in an optomechanical experiment | No experimental evidence of the quantum nature of gravity has been observed
yet and a realistic setup with improved sensitivity is eagerly awaited. We find
two effects, which can substantially enhance the signal of gravity-induced
quantum entanglement, by examining an optomechanical system in which two
oscillators gravitationally couple and one composes an optical cavity. The
first effect comes from a higher-order term of the optomechanical interaction
and generates the signal at the first order of the gravitational coupling in
contrast to the second order results in previous works. The second effect is
the resonance between the two oscillators. If their frequencies are close
enough, the weak gravitational coupling effectively strengthens. Combining
these two effects, the signal in the interference visibility could be amplified
by a factor of $10^{24}$ for our optimistic parameters. The two effects would
be useful in seeking feasible experimental setups to probe quantum gravity
signals. | Youka Kaku, Tomohiro Fujita, Akira Matsumura | 2023-06-05T15:37:20Z | http://arxiv.org/abs/2306.02974v2 | # Enhancement of quantum gravity signal in an optomechanical experiment
###### Abstract
No experimental evidence of the quantum nature of gravity has been observed yet and a realistic setup with improved sensitivity is eagerly awaited. We find two effects, which can substantially enhance the signal of gravity-induced quantum entanglement, by examining an optomechanical system in which two oscillators gravitationally couple and one composes an optical cavity. The first effect comes from a higher-order term of the optomechanical interaction and generates the signal at the first order of the gravitational coupling in contrast to the second order results in previous works. The second effect is the resonance between the two oscillators. If their frequencies are close enough, the weak gravitational coupling effectively strengthens. Combining these two effects, the signal in the interference visibility could be amplified by a factor of \(10^{24}\) for our optimistic parameters. The two effects would be useful in seeking feasible experimental setups to probe quantum gravity signals.
## I Introduction
The construction of a quantum gravity theory poses a fundamental challenge in theoretical physics [1; 2]. One of the main difficulties stems from the lack of sufficient experimental evidence to investigate quantum gravity. As a first step addressing this issue, Feynman proposed a thought experiment to observe a probe system evolving under a quantum superposition of gravitational fields [3]. This idea inspired the investigations of quantum coherent phenomena on a low-energy tabletop experiment. A notable proposal is commonly referred to as the Bose et al.-Marletto-Vedral (BMV) proposal [4; 5]. In Ref.[4; 5], the authors considered a scenario where two quantum masses, initially in a non-entangled state and each in a spatial superposition, interact only through Newtonian gravity. They concluded that the entanglement between the masses is generated by the gravitational interaction and that such a phenomenon indicates the quantum coherent behavior of gravity. Stimulated by this statement, there are many experimental proposals based on matter-wave interferometers [6; 7; 8; 9; 10; 11], mechanical oscillator models [12; 13], optomechanical systems [14; 15; 16; 17; 18] and their hybrid models [19; 20; 21; 22]. Also, the theoretical aspects of gravity-induced entanglement have been studied. In Refs.[23; 24; 25; 26; 27; 28], the entanglement due to Newtonian gravity was shown to be consistent with a quantum field theoretical description. On the other hand, it was discussed that such a Newtonian entanglement does not directly lead to the quantization of the gravitational field [29; 30]. In this context, it may also be interesting to verify the entanglement due to gravity in a relativistic regime [31; 32; 33; 34].
The above major trends pave the way to uncovering the quantum aspects of gravity. Recent advancements in optomechanics [35; 36; 37; 38; 39; 40; 41] further encourage us to investigate the quantum signal of gravity in an optomechanical setup. In this direction, Balushi et al. proposed such a setup involving two mechanical oscillators interacting through Newtonian gravity, each of which is coupled to an optomechanical interferometer [14]. The authors demonstrated that the gravitational interaction between the quantum oscillators induces an effective frequency shift of photons within the interferometer, and this results in a dephasing of photon interference visibility. In Ref.[17], the entanglement generation due to gravity in the setup was analyzed in an exact non-perturbative manner.
Despite various efforts to realize quantum gravity experiments, no experimental evidence of quantum gravity has been observed to date. In this paper, we present an amplification of the quantum signal of gravity in an optomechanical setup. Inspired by the work of Balushi et al. [14], we consider a hybrid system consisting of two oscillators and one optomechanical interferometer. In this setup, the two oscillators interact with each other by gravity, and one of the oscillators is coupled to a single photon in the interferometer via an optomechanical interaction. In Ref.[14], they considered the leading order of an optomechanical interaction and demonstrated that the modification of the photon interference visibility is of the second order of gravitational coupling. In comparison, we treat up to the
sub-leading order of the optomechanical interaction. As a result, we find that the visibility deviates by the first order of gravitational coupling. In other words, the large signal of gravity can be observed in the experiment accessible to the higher-order optomechanical coupling. We further investigate how the resonance of the two oscillators affects the gravitational deviation of the visibility. It is then demonstrated that such a deviation is amplified with the inverse of a frequency difference between two oscillators, provided that quantum coherence can be maintained in the system for a sufficiently long time. We finally evaluate the entanglement due to gravity in this system. Focusing on the resonance effect, we discuss the relationship between gravity-induced entanglement and the gravitational deviation of visibility.
This paper is organized as follows: The setup and the Hamiltonian are introduced in section II. Then, we investigate the time evolution of the system in section III. In section IV, we show the visibility of single-photon interference in the optomechanical setup and discuss that the gravitational effect appears at a lower order of the gravitational coupling constant than in Ref.[14]. We present numerical results of the visibility in section V. In section VI, we examine the resonance effect on the visibility. We also estimate quantum entanglement generated by gravity in section VII, and clarify the relationship between the entanglement generation and the gravitational deviation in the visibility. Finally, we summarize the paper in section VIII.
## II The setup and Hamiltonian
Let us consider a cavity optomechanical system to detect the quantum feature of gravity. Fig. 1 illustrates an experimental setup with a pair of cavities and two micro mechanical rods of length \(2L\). The two rods are suspended by independent center bars with a vertical separation \(h\) and can oscillate in a horizontal plane. A single photon emitted by the source passes through the half mirror and then is in a superposition of the state being in cavity 1 and in cavity 2. Here, the annihilation and creation operators of the photon in cavities 1 and 2 are represented as \(\{\hat{c}_{1},\ \hat{c}_{1}^{\dagger}\}\) and \(\{\hat{c}_{2},\ \hat{c}_{2}^{\dagger}\}\), respectively. The photon in the cavity 1 pushes the mirror of mass \(m\) at the left end of rod A and interacts with the mechanical mode of oscillating rod A. The oscillation of rod A is characterized by its angular position and momentum operator \(\hat{\theta}_{a},\hat{p}_{a}\). Its moment of inertia and angular frequency are defined as \(I_{a}=2mL^{2}\) and \(\Omega_{a}\), respectively. Rod B with the mirror mass \(M\) interacts with Rod A only through gravity. Similarly, the oscillation of rod B is characterized by \(\hat{\theta}_{b},\ \hat{p}_{b}\), and its moment of inertia and frequency are given by \(I_{b}=2ML^{2},\ \Omega_{b}\). This setup is based on the system proposed in Ref.[14]. They considered another set of cavities interacting with rod B, which is removed in our setup for simplicity. To analytically solve the dynamics of this system, we assume that the vertical separation of the rods is much smaller than their length \(2L\gg h\), and the oscillations of rods A and B are small \(\theta_{a},\ \theta_{b}\ll 1\).
Figure 1: Our setup with two optical cavities and two micro mechanical rods. A single photon emitted by the source is prepared in a quantum superposition state in cavity 1 and 2 by a half mirror. Rod A and cavity 1 form an optomechanical system. The photon in cavity 1 and the mirror of mass \(m\) attached to rod A interact with each other. The mirrors of rod B are coupled to the mirrors of rod A only through gravity. Quantum entanglement between rod B and the other system of the setup (i.e. rod A and the photon in the cavities) mediated by the gravitational coupling could be measured by the change of the interference visibility of the photons.
Let us consider the optomechanical coupling between the photon in cavity 1 and rod A by taking a higher-order correction into account. When the mirror \(m\) is in the original position \(\theta_{a}=0\), the photon frequency of the cavity mode would be
\[\omega_{c}=\frac{\pi c\,\mathfrak{n}}{\ell}. \tag{1}\]
where \(\ell\) is the original cavity length, \(c\) is the speed of light and \(\mathfrak{n}\) is an integer. When a photon enters cavity 1 and pushes the mirror \(m\), the frequency of the cavity mode is modified as
\[\omega_{c}^{\prime}=\frac{\pi c\,\mathfrak{n}}{\ell+L\sin\theta_{a}}\approx \omega_{c}\left(1-\frac{L}{\ell}\theta_{a}+\frac{L^{2}}{\ell^{2}}\theta_{a}^{2 }\right). \tag{2}\]
Here we include the second order of \(\theta_{a}\), which was neglected in the previous works [14; 17]. This second-order correction might appear to be a sub-leading effect of the optomechanical coupling between the photon and rod A, which slightly distorts the harmonic oscillator potential of rod A. We will show that this contribution has a significant impact on the signal of the quantum nature of gravity.
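The expansion in Eq. (2) can be checked symbolically; the short sketch below expands \(\omega_{c}\,\ell/(\ell+L\sin\theta_{a})\) to second order in \(\theta_{a}\).

```python
import sympy as sp

theta, L, ell, wc = sp.symbols('theta_a L ell omega_c', positive=True)
omega_prime = wc * ell / (ell + L * sp.sin(theta))   # pi*c*n/(ell + L*sin(theta_a))
print(sp.series(omega_prime, theta, 0, 3))
# omega_c - omega_c*L/ell*theta_a + omega_c*L**2/ell**2*theta_a**2 + O(theta_a**3)
```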
We organize the total Hamiltonian up to the second order of \(\theta_{a}\) as
\[\hat{H} = \hbar\omega_{c}^{\prime}\hat{c}_{1}^{\dagger}\hat{c}_{1}+\hbar\omega_{c}\hat{c}_{2}^{\dagger}\hat{c}_{2}+\frac{1}{2I_{a}}\hat{p}_{a}^{2}+\frac{1}{2}I_{a}\Omega_{a}^{2}\hat{\theta}_{a}^{2}+\frac{1}{2I_{b}}\hat{p}_{b}^{2}+\frac{1}{2}I_{b}\Omega_{b}^{2}\hat{\theta}_{b}^{2}+\frac{GmML^{2}}{h^{3}}\left(\hat{\theta}_{a}^{2}+\hat{\theta}_{b}^{2}-2\hat{\theta}_{a}\hat{\theta}_{b}\right) \tag{3}\] \[\approx \hbar\omega_{c}\hat{c}_{1}^{\dagger}\hat{c}_{1}+\hbar\omega_{c}\hat{c}_{2}^{\dagger}\hat{c}_{2}+\frac{1}{2I_{a}}\hat{p}_{a}^{2}+\frac{I_{a}}{2}\left(\Omega_{a}^{2}+\frac{GM}{h^{3}}+\frac{\hbar\omega_{c}}{m\ell^{2}}\hat{c}_{1}^{\dagger}\hat{c}_{1}\right)\hat{\theta}_{a}^{2}-\frac{\hbar\omega_{c}L}{\ell}\hat{c}_{1}^{\dagger}\hat{c}_{1}\hat{\theta}_{a}\] \[\qquad\qquad\qquad\qquad+\frac{1}{2I_{b}}\hat{p}_{b}^{2}+\frac{I_{b}}{2}\left(\Omega_{b}^{2}+\frac{Gm}{h^{3}}\right)\hat{\theta}_{b}^{2}-2\frac{GmML^{2}}{h^{3}}\hat{\theta}_{a}\hat{\theta}_{b}\] \[= \hbar\omega_{c}\hat{c}_{1}^{\dagger}\hat{c}_{1}+\hbar\omega_{c}\hat{c}_{2}^{\dagger}\hat{c}_{2}+\sum_{n=0,1}\hat{H}_{a,n}|n\rangle_{c1\,c1}\langle n|+\hat{H}_{b}+\hat{H}_{g},\]
where we plugged Eq. (2) in the second line. The last term in the first line denotes the gravitational interaction part, which is derived in Appendix A. In the last line of Eq.(3), \(|n\rangle_{c1}\) denotes an eigenstate of the photon number in cavity 1, \(\hat{c}_{1}^{\dagger}\hat{c}_{1}\), and \(\hat{H}_{a,n}\) is an effective Hamiltonian of rod A depending on the photon number \(n\) inside cavity 1. This Hamiltonian, containing the optomechanical coupling with the cavity photon, is given as
\[\hat{H}_{a,n}=\hbar\omega_{a,n}\left\{\hat{a}_{n}^{\dagger}\hat{ a}_{n}-n\lambda_{n}(\hat{a}_{n}^{\dagger}+\hat{a}_{n})+\frac{1}{2}\right\}, \tag{4}\] \[\hat{a}_{n}=\sqrt{\frac{I_{a}\omega_{a,n}}{2\hbar}}\hat{\theta}_ {a}+\frac{i}{\sqrt{2I_{a}\omega_{a,n}\hbar}}\hat{p}_{a},\quad\lambda_{n}= \left(\frac{\omega_{a,0}}{\omega_{a,n}}\right)^{3/2}\frac{\omega_{c}}{\omega _{a,0}}\sqrt{\frac{\hbar}{2I_{a}\omega_{a,0}}}\frac{L}{\ell}, \tag{5}\]
where \(\lambda_{n}\) denotes an optomechanical coupling constant, and the oscillation frequency of rod A depends on the photon number \(n\) as
\[\omega_{a,n}=\sqrt{\Omega_{a}^{2}+\frac{GM}{h^{3}}}\times\left\{\begin{array}{ll}1&\mbox{ ($n=0$; no photon in cavity 1)}\\ \sqrt{1+\frac{2\hbar\omega_{c}}{I_{a}\omega_{a,0}^{2}}\frac{L^{2}}{\ell^{2}}}&\mbox{ ($n=1$; photon pressure distorts the potential)}\end{array}\right.. \tag{6}\]
The effective Hamiltonian of rod B, \(\hat{H}_{b}\), and the gravitational interaction term \(\hat{H}_{g}\) in Eq.(3) are defined as
\[\hat{H}_{b}=\hbar\omega_{b}\hat{b}^{\dagger}\hat{b},\quad\hat{b}=\sqrt{\frac{I_{b}\omega_{b}}{2\hbar}}\hat{\theta}_{b}+\frac{i}{\sqrt{2I_{b}\omega_{b}\hbar}}\hat{p}_{b},\quad\omega_{b}=\sqrt{\Omega_{b}^{2}+\frac{Gm}{h^{3}}}, \tag{7}\]
and
\[\hat{H}_{g}=-g\hbar\omega_{a,0}(\hat{a}_{0}^{\dagger}+\hat{a}_{0})(\hat{b}^{ \dagger}+\hat{b}),\quad g=\frac{G}{2h^{3}\omega_{a,0}}\sqrt{\frac{mM}{\omega_{a,0}\omega_{b}}}, \tag{8}\]
respectively.
As observed in Eq.(6), the oscillation frequency of rod A is shifted from \(\Omega_{a}\) due to the gravitational interaction between the rods. Moreover, the frequency also differs depending on the photon number \(n\): if the photon hits the mirror of rod A (\(n=1\)), the mechanical potential of rod A is not only displaced by the \(\mathcal{O}[\theta_{a}]\) term in Eq. (2), but also distorted according to the \(\mathcal{O}[\theta_{a}^{2}]\) term. The distortion of the potential is reinterpreted as the shift of the mechanical frequency of rod A from \(\omega_{a,0}\) to \(\omega_{a,1}\). Remark that this frequency shift due to the optomechanical coupling was not considered in Ref.[14], and this is the key novelty of our paper. In comparison to the previous works, the frequency ratio \(\omega_{a,0}/\omega_{a,1}\) will be an important parameter as the new effect from the higher order contribution of \(\theta_{a}\).
## III The evolution of the system
Now we are ready to solve the time evolution of our system. The initial state of the total system is prepared as a non-entangled state
\[\left|\psi(t=0)\right\rangle=\frac{1}{\sqrt{2}}\big{(}\left|0\right\rangle_{c1} \left|1\right\rangle_{c2}+\left|1\right\rangle_{c1}\left|0\right\rangle_{c2} \big{)}\otimes\left|\alpha\right\rangle_{a}\otimes\left|\beta\right\rangle_{b}. \tag{9}\]
Here, \(\left|n\right\rangle_{c1}\left|n^{\prime}\right\rangle_{c2}\) denotes the photon state with \(n\) photons in cavity \(1\) and \(n^{\prime}\) photons in cavity \(2\). \(\left|\alpha\right\rangle_{a}\) is a coherent state of the mechanical mode of rod A for \(\hat{a}_{0}\) (not for \(\hat{a}_{1}\)), and \(\left|\beta\right\rangle_{b}\) is a coherent state of that of rod B. Since the gravitational coupling is very small, \(g\ll 1\), we evaluate the evolved state by perturbation theory with respect to \(g\). Up to the first order of \(g\), we obtain
\[\left|\psi(t)\right\rangle =e^{-i\hat{H}t/\hbar}\left|\psi(0)\right\rangle\] \[=\frac{e^{-i\omega_{c}t}}{\sqrt{2}}\sum_{n=0,1}\left|n\right\rangle _{c1}\left|1-n\right\rangle_{c2}e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b}\right)t/ \hbar}\left[1-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\hat{H}_{g,n}^{I}(t^{ \prime})\right]\left|\alpha\right\rangle_{a}\left|\beta\right\rangle_{b}+ \mathcal{O}(g^{2}),\] \[=\frac{e^{-i\omega_{c}t}}{\sqrt{2}}\sum_{n=0,1}\left|n\right\rangle _{c1}\left|1-n\right\rangle_{c2}\Big{[}1+2ig\left(\hat{\mathcal{I}}_{n}(t)+n \hat{\mathcal{J}}(t)\right)\Big{]}e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b}\right) t/\hbar}\left|\alpha\right\rangle_{a}\left|\beta\right\rangle_{b}+\mathcal{O}(g^{2}), \tag{10}\]
where \(\hat{H}_{g,n}^{I}(t)=e^{i\left(\hat{H}_{a,n}+\hat{H}_{b}\right)t/\hbar}\hat{H }_{g}e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b}\right)t/\hbar}\) is the gravitational interaction in the interaction picture. From the second line to the third line, we used \(e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b}\right)t/\hbar}\hat{H}_{g,n}^{I}(t^{ \prime})=\hat{H}_{g,n}^{I}(t^{\prime}-t)e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b} \right)t/\hbar}\) and performed the \(t^{\prime}\) integration, which yielded new Hermitian operators
\[\hat{\mathcal{I}}_{n}(t) :=\sqrt{\frac{\omega_{a,0}^{3}}{\omega_{a,n}}}\left\{\frac{\sin[ \omega_{n,+}t/2]}{\omega_{n,+}}\left(e^{-i\omega_{n,+}t/2}\hat{a}_{n}^{\dagger }\hat{b}^{\dagger}+e^{i\omega_{n,+}t/2}\hat{a}_{n}\hat{b}\right)+\frac{\sin[ \omega_{n,-}t/2]}{\omega_{n,-}}\left(e^{-i\omega_{n,-}t/2}\hat{a}_{n}^{\dagger }\hat{b}+e^{i\omega_{n,-}t/2}\hat{a}_{n}\hat{b}^{\dagger}\right)\right\}, \tag{11}\] \[\hat{\mathcal{J}}(t) :=\lambda_{0}\frac{\omega_{a,0}^{3}}{\omega_{a,1}^{2}\omega_{b} }\left(F^{*}(t)\hat{b}^{\dagger}+F(t)\hat{b}\right),\quad F(t):=i\,\frac{ \omega_{a,1}^{2}+e^{i\omega_{b}t}\left\{-\omega_{a,1}^{2}+i\omega_{a,1}\omega _{b}\sin[\omega_{a,1}t]+(1-\cos[\omega_{a,1}t])\omega_{b}^{2}\right\}}{\omega_ {1,+}\,\omega_{1,-}}, \tag{12}\]
where \(\omega_{n,\pm}:=\omega_{a,n}\pm\omega_{b}\). \(\hat{\mathcal{I}}_{n}\) denotes the direct gravitational interaction between rod A and rod B, while \(\hat{\mathcal{J}}\) represents the effective coupling between rod B and the photon in cavity \(1\). In addition, these operators contain an inverse of \(\omega_{n,-}\), which indicates a resonance effect occurring in the limit of \(\omega_{a,n}\rightarrow\omega_{b}\). We will see how the resonance appears in the photon interference visibility in the section VI.
In passing, we note that the free evolution of the initial coherent state of rod A leads to a squeezed coherent state. When the photon is not in cavity \(1\) (\(n=0\)), the initial state \(|\alpha\rangle_{a}\) is evolved by the free Hamiltonian of \(\hat{a}_{0}\) and \(\hat{a}_{0}^{\dagger}\) in Eq. (4) into another coherent state \(|\alpha e^{-i\omega_{a,0}t}\rangle_{a}\). However, when the photon is in cavity \(1\), not only is the optomechanical coupling involved, but the Hamiltonian is also composed of \(\hat{a}_{1}\) and \(\hat{a}_{1}^{\dagger}\), which are associated with the different frequency \(\omega_{a,1}\). In Appendix B, we show that the time-evolved state of rod A becomes a squeezed coherent state. However, our main result can be understood without being familiar with these lengthy calculations and intricate states.
## IV The calculation of the visibility
Based on the time-evolved state in Eq. (10), we calculate the interference visibility of the photon in the cavities, which is defined with the absolute value of the interference term, as
\[\mathcal{V}_{c}(t) :=2\left|\mathrm{Tr}\left[{}_{c1}\langle 0|_{c2}\langle 1|\psi(t)\rangle\langle\psi(t)|1\rangle_{c1}|0\rangle_{c2}\right]\right|,\] \[=\left|{}_{a}\langle\alpha|_{b}\langle\beta|e^{i\left(\hat{H}_{a,0}+\hat{H}_{b}\right)t/\hbar}\left\{1-2ig\left(\hat{\mathcal{I}}_{0}^{\dagger}(t)-\hat{\mathcal{I}}_{1}(t)-\hat{\mathcal{J}}(t)\right)\right\}e^{-i\left(\hat{H}_{a,1}+\hat{H}_{b}\right)t/\hbar}|\alpha\rangle_{a}|\beta\rangle_{b}+\mathcal{O}(g^{2})\right|,\] \[=\mathcal{V}_{c}^{(0)}(t)\left(1+2g\,\,\mathrm{Im}\left[{}_{0}\langle\hat{\mathcal{I}}_{0}^{\dagger}(t)\rangle_{1}-{}_{0}\langle\hat{\mathcal{I}}_{1}(t)\rangle_{1}\right]\right)+\mathcal{O}(g^{2}), \tag{13}\]
where \(\mathcal{V}_{c}^{(0)}\) is the result without the gravitational coupling, and \({}_{0}\langle\cdots\rangle_{1}\) is not the expectation value but an off-diagonal element of the photon state.
\[\mathcal{V}_{c}^{(0)}(t):=\left|{}_{a}\langle\alpha|_{b}\langle\beta|e^{i\left(\hat{H}_{a,0}+\hat{H}_{b}\right)t/\hbar}e^{-i\left(\hat{H}_{a,1}+\hat{H}_{b}\right)t/\hbar}|\alpha\rangle_{a}|\beta\rangle_{b}\right|,\quad{}_{0}\langle\hat{\mathcal{O}}(t)\rangle_{1}:=\frac{{}_{a}\langle\alpha|_{b}\langle\beta|e^{i\left(\hat{H}_{a,0}+\hat{H}_{b}\right)t/\hbar}\,\hat{\mathcal{O}}(t)\,e^{-i\left(\hat{H}_{a,1}+\hat{H}_{b}\right)t/\hbar}|\alpha\rangle_{a}|\beta\rangle_{b}}{{}_{a}\langle\alpha|_{b}\langle\beta|e^{i\left(\hat{H}_{a,0}+\hat{H}_{b}\right)t/\hbar}e^{-i\left(\hat{H}_{a,1}+\hat{H}_{b}\right)t/\hbar}|\alpha\rangle_{a}|\beta\rangle_{b}}\,. \tag{14}\]
Their full expressions can be found in Appendix C.
It is interesting to note that the contribution from \(\hat{\mathcal{J}}(t)\) does not appear at \(\mathcal{O}(g)\) in Eq. (13), since it is a Hermitian operator and appears as \(\mathrm{Im}[\langle\hat{\mathcal{J}}(t)\rangle]\), where \(\langle\hat{\mathcal{J}}(t)\rangle={}_{b}\langle\beta|e^{i\hat{H}_{b}t/\hbar}\,\hat{\mathcal{J}}(t)\,e^{-i\hat{H}_{b}t/\hbar}|\beta\rangle_{b}\). In contrast, the contribution from \(\hat{\mathcal{I}}_{n}(t)\) survives, because the difference in the frequency of rod A, \(\omega_{a,0}\neq\omega_{a,1}\), originating in the second-order contribution of \(\theta_{a}\) distinguishes \(\hat{\mathcal{I}}_{0}(t)\) and \(\hat{\mathcal{I}}_{1}(t)\) and prevents their cancellation. Hence, we gain the \(\mathcal{O}(g)\) contribution to the visibility. In the previous works [14], however, this frequency difference was not appreciated. In that case, \(\hat{\mathcal{I}}_{1}(t)\) is replaced by \(\hat{\mathcal{I}}_{0}(t)\) and they cancel in Eq. (13). Then, the leading contribution from gravity to the visibility would be the second order of \(g\),
\[\mathcal{V}_{c}(t)\approx\mathcal{V}_{c}^{(0)}(t)\left(1+4g^{2}\left|\langle \mathcal{J}(t)\rangle\right|^{2}\right),\qquad(\omega_{a,0}=\omega_{a,1};\; \text{without the higher order correction of }\theta_{a}). \tag{15}\]
Therefore, it is possible to make a remarkable signal amplification of the gravitational quantum effect by considering the higher-order contribution of \(\theta_{a}\) in our setup.
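Since the photon number in each cavity is conserved by Eq. (3), the visibility of Eq. (13) can equivalently be written as \(\mathcal{V}_{c}(t)=|\langle\Phi_{0}(t)|\Phi_{1}(t)\rangle|\), where \(|\Phi_{n}(t)\rangle\) is the rod state evolved in the branch with \(n\) photons in cavity 1. The following minimal sketch evaluates this overlap non-perturbatively on a truncated Fock space; the parameter values are deliberately exaggerated (much larger \(g\) and frequency splitting than in Sec. V) purely so that the effect is visible at double precision, and the truncation \(N\) is a numerical convenience, not a physical parameter.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

N = 25                                        # Fock-space truncation per mode (sketch only)
a = np.diag(np.sqrt(np.arange(1, N)), 1)
I = np.eye(N)
A, B = np.kron(a, I), np.kron(I, a)           # rod A and rod B lowering operators
xA, xB, pA = A + A.T, B + B.T, A - A.T        # quadrature combinations (a is real here)

def H_branch(n, r_n, w_b, lam0, g):
    """Branch Hamiltonian in units of hbar*omega_{a,0}, written with the n = 0 mode
    operators of rod A; r_n = omega_{a,n}/omega_{a,0}, w_b = omega_b/omega_{a,0}."""
    return (0.25 * (r_n**2 * xA @ xA - pA @ pA)   # rod A oscillator with frequency omega_{a,n}
            - n * lam0 * xA                       # photon-pressure displacement
            + w_b * (B.T @ B)                     # rod B oscillator
            - g * xA @ xB)                        # gravitational coupling, cf. Eq. (8)

def coherent(alpha):
    ns = np.arange(N)
    c = np.exp(-abs(alpha)**2 / 2 + ns * np.log(alpha + 0j) - 0.5 * gammaln(ns + 1))
    return c / np.linalg.norm(c)

# Exaggerated illustrative parameters (the realistic values of Sec. V are far below
# what a direct double-precision computation can resolve).
lam0, g, w_b, r1, alpha, beta = 0.8, 1e-3, 0.9, 1.02, 1.0, 1.0
psi0 = np.kron(coherent(alpha), coherent(beta))

def visibility(t):
    phi0 = expm(-1j * H_branch(0, 1.0, w_b, lam0, g) * t) @ psi0
    phi1 = expm(-1j * H_branch(1, r1,  w_b, lam0, g) * t) @ psi0
    return abs(np.vdot(phi0, phi1))               # V_c(t) = |<Phi_0(t)|Phi_1(t)>|

print(visibility(3.0))
```

Setting \(g=0\) in the same routine reproduces the gravity-free visibility \(\mathcal{V}_{c}^{(0)}(t)\), so the gravitational contribution can be isolated by comparing the two runs.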
By assuming \(\beta\) is a real number for simplicity, the explicit form of visibility is given by
\[\mathcal{V}_{c}(t)\approx\mathcal{V}_{c}^{(0)}(t)\left[1+2g\,\omega_{a,0}\beta \left\{\left(\frac{\sin[\omega_{0,+}t/2]}{\omega_{0,+}}+\frac{\sin[\omega_{0, -}t/2]}{\omega_{0,-}}\right)C_{0}+\left(\frac{\sin[\omega_{1,+}t/2]}{\omega_{ 1,+}}+\frac{\sin[\omega_{1,-}t/2]}{\omega_{1,-}}\right)C_{1}\right\}\right], \tag{16}\]
where we introduced \(\omega_{n,-}:=\omega_{a,n}-\omega_{b}\). The coefficient of each term is given by
\[C_{0} =\sqrt{\frac{\omega_{a,0}}{\omega_{a,1}}}\cos\left[\frac{\omega_{ a,0}t}{2}\right]\mathrm{Im}\left[{}_{0}\langle\hat{a}_{1}\rangle_{1}+{}_{0} \langle\hat{a}_{1}^{\dagger}\rangle_{1}\right]+\sqrt{\frac{\omega_{a,1}}{ \omega_{a,0}}}\sin\left[\frac{\omega_{a,0}t}{2}\right]\mathrm{Re}\left[{}_{0} \langle\hat{a}_{1}\rangle_{1}-{}_{0}\langle\hat{a}_{1}^{\dagger}\rangle_{1} \right],\] \[C_{1} =-\sqrt{\frac{\omega_{a,1}}{\omega_{a,0}}}\left(\cos\left[\frac{ \omega_{a,1}t}{2}\right]\mathrm{Im}\left[{}_{0}\langle\hat{a}_{1}\rangle_{1}+ {}_{0}\langle\hat{a}_{1}^{\dagger}\rangle_{1}\right]+\sin\left[\frac{\omega_ {a,1}t}{2}\right]\mathrm{Re}\left[{}_{0}\langle\hat{a}_{1}\rangle_{1}-{}_{0} \langle\hat{a}_{1}^{\dagger}\rangle_{1}\right]\right), \tag{17}\]
where \({}_{0}\langle\cdots\rangle_{1}\) was defined in Eq. (14). The full expressions for \(C_{n}\) without the assumption of a real \(\beta\) can be found in Appendix C. Note that the dependence on the initial coherent state \(\alpha\) of rod A and on the optomechanical coupling constant \(\lambda_{n}\) is encoded in \(C_{n}\). In the limit of \(\omega_{a,1}\rightarrow\omega_{a,0}\), \(C_{1}\) becomes \(-C_{0}\), their coefficients become identical and the linear term of \(g\) in Eq. (16) vanishes.
In Eq.(16), we can see that the first-order gravity-induced contribution to the visibility is proportional to \(\beta\). These terms physically represent some superposed states in which the oscillation of rod B gravitationally affects rod A in distinct ways depending on the frequency \(\omega_{a,n}\). The factors \(\sin[\omega_{n,\pm}t/2]/\omega_{n,\pm}\) indicate that how strongly rod B changes the motion of rod A depends on the frequency matching between \(\omega_{b}\) and \(\omega_{a,n}\). In particular, if they are very close, \(\omega_{b}\approx\omega_{a,n}\), a resonance phenomenon takes place and the term with \(\omega_{n,-}:=\omega_{a,n}-\omega_{b}\) is significantly amplified, as we will see in Sec. VI.
## V The \(\mathcal{O}(g)\) contribution to the visibility
In this section, we will present some numerical results demonstrating the visibility (14) amplified by the new \(\mathcal{O}(g)\) contribution. Note that we avoid the resonance in this section to separately study the two different amplification effects, and we will explore it in the next section. We set the mirror masses of both rods \(m=M=10^{-13}\,\mathrm{[kg]}\), the vertical interval between the two rods \(h=2\times 10^{-6}\,\mathrm{[m]}\), the original frequency of rod A \(\Omega_{a}=3\times 10^{3}\,\mathrm{[Hz]}\), the original frequency of rod B \(\Omega_{b}=0.84\times\Omega_{a}\,\mathrm{[Hz]}\), the initial coherent parameter of the rods \(\alpha=\beta=1\), the photon wave frequency \(\omega_{c}=450\times 10^{12}\,\mathrm{[Hz]}\) and the original cavity length \(\ell=0.01\,\mathrm{[m]}\). Using these parameters, the dimensionless parameters contained in the visibility are computed as
\[\lambda_{0}=4.5\left(\frac{m}{10^{-13}\,\mathrm{[kg]}}\right)^{-1/ 2}\left(\frac{\Omega_{a}}{3\times 10^{3}\,\mathrm{[Hz]}}\right)^{-3/2}\left(\frac{ \omega_{c}}{450\times 10^{12}\,\mathrm{[Hz]}}\right)\left(\frac{\ell}{0.01\,\mathrm{[m]} }\right)^{-1},\] \[\frac{\omega_{b}}{\omega_{a,0}}=1.9\left(\frac{\Omega_{b}}{3\times 10^{3} \,\mathrm{[Hz]}}\right)\left(\frac{\Omega_{a}}{3\times 10^{3}\,\mathrm{[Hz]}} \right)^{-1}, \tag{18}\]
where we set \(\omega_{b}\) not very close to \(\omega_{a,1}\) to avoid the resonance. The following two parameters are especially important.
\[1-\frac{\omega_{a,0}}{\omega_{a,1}}=2.8\times 10^{-10}\left(\frac{ \lambda_{0}}{4.5}\right)^{2}\left(\frac{\Omega_{a}}{3\times 10^{3}\left[\mathrm{Hz} \right]}\right)\left(\frac{\omega_{c}}{450\times 10^{12}\left[\mathrm{Hz} \right]}\right)^{-1},\] \[g=5.1\times 10^{-14}\left(\frac{m}{10^{-13}\left[\mathrm{kg} \right]}\right)^{1/2}\left(\frac{M}{10^{-13}\left[\mathrm{kg}\right]}\right)^{ 1/2}\left(\frac{\Omega_{a}}{3\times 10^{3}\left[\mathrm{Hz}\right]}\right)^{-3/2} \left(\frac{h}{2\times 10^{-6}\left[\mathrm{m}\right]}\right)^{-3}. \tag{19}\]
We see that \(\omega_{a,0}/\omega_{a,1}-1\), which originates from the \(\mathcal{O}[\theta_{a}^{2}]\) contribution in Eq. (2), is extremely small, although the gravitational coupling parameter \(g\) is even smaller.
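As a consistency check, the dimensionless combinations quoted in Eq. (19) can be reproduced directly from the numbers listed above; the sketch below does so, using the relation \(\omega_{a,1}/\omega_{a,0}=\sqrt{1+4\lambda_{0}^{2}\,\omega_{a,0}/\omega_{c}}\) that follows from Eqs. (5) and (6), and taking \(\lambda_{0}=4.5\) from Eq. (18).

```python
import numpy as np

G_N = 6.674e-11                      # Newton's constant [m^3 kg^-1 s^-2]
m = M = 1e-13                        # mirror masses [kg]
h = 2e-6                             # vertical separation [m]
Omega_a, Omega_b = 3e3, 0.84 * 3e3   # bare rod frequencies [Hz]
omega_c = 450e12                     # cavity photon frequency [Hz]
lam0 = 4.5                           # optomechanical coupling, taken from Eq. (18)

omega_a0 = np.sqrt(Omega_a**2 + G_N * M / h**3)
omega_b  = np.sqrt(Omega_b**2 + G_N * m / h**3)

g = G_N / (2 * h**3 * omega_a0) * np.sqrt(m * M / (omega_a0 * omega_b))
print(f"g = {g:.2e}")                                   # ~5.1e-14, cf. Eq. (19)

ratio = np.sqrt(1 + 4 * lam0**2 * omega_a0 / omega_c)   # omega_{a,1}/omega_{a,0}
print(f"1 - omega_a0/omega_a1 = {1 - 1/ratio:.2e}")     # ~2.7e-10, cf. Eq. (19)
```

The small difference from the quoted \(2.8\times 10^{-10}\) is within the rounding of the input parameters.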
Let us first study the case without the gravitational coupling, \(g=0\), namely the \(0\)-th order results. Fig. 2 shows the time dependence of the visibility \(\mathcal{V}_{c}^{(0)}\). The left panel shows the behavior at early times and the right panel shows a much longer time period. The red lines represent the result of our calculation that takes the \(\mathcal{O}[\theta_{a}^{2}]\) contribution into account, leading to \(\omega_{a,0}<\omega_{a,1}\). The blue dashed lines ignore the correction and adopt \(\omega_{a,0}=\omega_{a,1}\) in the same way as the previous works [14]. As seen in the left panel, the visibility decoheres and recoheres due to the optomechanical coupling between the photon and the rod A systems. No visible difference between the two cases is observed at early times. However, we see a clear difference in the photon visibility in the right panel of Fig. 2, which comes from the frequency difference in \(\omega_{a,n}\) even without the gravitational coupling. This strong dephasing at around \(\omega_{a,0}t/(2\pi)\approx 2\times 10^{9}\) is caused by the fact that the two states of rod A without and with the photon, namely \(e^{-i\hat{H}_{a,0}t/\hbar}|\alpha\rangle_{a}\) and \(e^{-i\hat{H}_{a,1}t/\hbar}|\alpha\rangle_{a}\), oscillate with the different time scales \(1/\omega_{a,0}\) and \(1/\omega_{a,1}\), respectively. These two states become nearly orthogonal whenever their phase difference accumulates to \((2N+1)\pi\),
\[(\omega_{a,1}-\omega_{a,0})t=(2N+1)\pi\quad\Longrightarrow\quad\frac{\omega _{a,0}t}{2\pi}=\frac{N+1/2}{\omega_{a,1}/\omega_{a,0}-1}\approx 1.8(2N+1) \times 10^{9}, \tag{20}\]
where \(N=0,1,2,...\) is integer. This explains why the recoherence of the red line is repeatedly suppressed in the right panel of Fig. 2.
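The first few dephasing times predicted by Eq. (20) follow from simple arithmetic; a minimal check, taking \(\omega_{a,1}/\omega_{a,0}-1\approx 2.8\times 10^{-10}\) from Eq. (19), reads as follows.

```python
ratio_minus_one = 2.8e-10                    # omega_{a,1}/omega_{a,0} - 1, from Eq. (19)
for N in range(3):
    print((N + 0.5) / ratio_minus_one)       # omega_{a,0} t/(2 pi) ~ 1.8e9, 5.4e9, 8.9e9
```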
In Figs. 3 and 4, we present the gravitational contribution to the visibility as the relative correction to the no-gravity cases seen above, \(\mathcal{V}_{c}(t)/\mathcal{V}_{c}^{(0)}(t)-1\). The parameters are again the same as in Eqs. (18) and (19). The left panel of Fig. 3 depicts the result for the \(\omega_{a,0}=\omega_{a,1}\) case as in Ref.[14]. We see a periodic motion of the visibility correction from gravity. Its amplitude is roughly estimated from Eq. (15) as \(4g^{2}\left|\langle\hat{\mathcal{J}}(t)\rangle\right|^{2}\approx\mathcal{O}[4g^{2}\lambda_{0}^{2}]\approx 9.4\times 10^{-26}\). The right panel of Fig. 3 shows the result in the \(\omega_{a,0}<\omega_{a,1}\) case respecting the second-order contribution of \(\theta_{a}\). We can see that the visibility repeats decoherence and recoherence with a linearly growing amplitude. We can derive the growth rate of the oscillation from Eq. (16). If we replace the periodic functions contained in \(C_{0},\ C_{1}\) by \(1\), we obtain the order estimate \(C_{0}\approx-C_{1}\approx\mathcal{O}\left[2(\alpha+\lambda_{0})\right]\), which indicates that the initial coherent state of rod A is displaced by \(\lambda_{0}\) due to the photon pressure. By substituting these estimates into Eq. (16) and considering a leading
Figure 2: The time dependence of the visibility without the gravitational contribution, \(\mathcal{V}_{c}^{(0)}(t)\) given in Eq. (14). The red line denotes our result that takes into account the higher order contribution \(\mathcal{O}(\theta_{a}^{2})\) and appreciates the frequency difference \(\omega_{a,0}<\omega_{a,1}\), while the blue dashed line denotes the previous result that neglects the higher order correction. The parameters are set as in Eqs. (18) and (19), except for \(\omega_{a,1}=\omega_{a,0}\) for the blue dashed line. The left panel shows a log plot for an early time, and the right panel shows a linear plot for a much longer time scale. As seen in the right panel, the frequency difference causes a strong dephasing at the corresponding time scale, \(\omega_{a,0}t/(2\pi)\approx 1.8(2N+1)\times 10^{9}\), only in our result.
term of \(1-\omega_{a,0}/\omega_{a,1}\), we obtain
\[\mathcal{V}_{c}(t)/\mathcal{V}_{c}(t)^{(0)}-1\approx\mathcal{O}\left[4g(\alpha+ \lambda_{0})\left(1-\frac{\omega_{a,0}}{\omega_{a,1}}\right)\right]\times\omega_ {a,0}t\approx 1.3\times 10^{-21}\;\frac{\omega_{a,0}t}{2\pi}\,. \tag{21}\]
This estimate holds on short time scales satisfying \(t\ll(\omega_{a,1}-\omega_{a,0})^{-1}\), as in the right panel of Fig. 3. It is larger than the \(\omega_{a,0}=\omega_{a,1}\) case in the left panel by a factor of about \(\mathcal{O}\left[(1-\omega_{a,0}/\omega_{a,1})(\alpha+\lambda_{0})/(g\lambda_{0}^{2})\right]\omega_{a,0}t\approx 1.4\times 10^{4}\;\omega_{a,0}t/(2\pi)\). In Fig. 4, we present the gravitational contribution to the visibility for a longer time period in our case of \(\omega_{a,0}<\omega_{a,1}\). Again, we observe the periodic dephasing at \(\omega_{a,0}t/(2\pi)\approx 1.8(2N+1)\times 10^{9}\) as explained in Eq. (20). The amplitude reaches an order of \(10^{-13}\) at that time. In contrast, in the \(\omega_{a,0}=\omega_{a,1}\) case, the amplitude does not exceed \(\sim 10^{-23}\) even on the longer time scale.
The significant amplification of the longer time period in Fig. 4 arises because the visibility is given by the first order of \(g\) in our case, whereas it appears from the second order of \(g\) if we disregard the second-order contribution of \(\theta_{a}\). After a sufficient amount of time has passed, the terms inside the bracket \(\{\cdots\}\) in Eq. (16) become \(\mathcal{O}\left[2(\alpha+\lambda_{0})\right]\), and the gravitational shift of the visibility extends to \(\mathcal{O}\left[4g(\alpha+\lambda_{0})\right]\), which is on the order of \(7.5\times 10^{-13}\) and consistent with Fig. 4. It should be noted that we chose the value of \(\omega_{b}\) for which the resonance is ineffective, and thus, this amplification of the visibility results only from the reduction of the order of the gravitational coupling \(g\). In the next section, we will discuss how to further enhance the gravitational signal in visibility using the resonance effect.
## VI The resonance effect
Since the setup contains two oscillators, we expect that a resonant behavior affects the visibility if their frequencies are close enough. In this section, we discuss the case where \(\omega_{a,1}\) is close to \(\omega_{b}\), focus on the resonance term in the visibility in Eq. (16), and discuss how much the resonance effect amplifies the visibility.
We will consider the resonance for \(\omega_{b}\approx\omega_{a,1}\). This physically means that the oscillating rod A resonates with rod B only when the photon enters cavity 1. Since the visibility captures the state difference between the photon within cavity 1 and cavity 2, the resonance effect is expected to affect the visibility significantly. However, remember that \(\omega_{a,1}\) and \(\omega_{a,0}\) are very close, as seen in Eq. (19). Therefore, when we set \(\omega_{b}\) close to \(\omega_{a,1}\), \(\omega_{b}\) is inevitably close to \(\omega_{a,0}\) as well. If \(\omega_{a,1}\) is much closer to \(\omega_{b}\) than \(\omega_{a,0}\) is, the system has an exclusive resonance only between \(\omega_{b}\) and \(\omega_{a,1}\). Then, the strength of the resonance effect is controlled by their frequency difference. We introduce such a frequency matching parameter as
\[\epsilon:=\frac{\omega_{1,-}}{\omega_{a,1}}=1-\frac{\omega_{b}}{\omega_{a,1}}\,. \tag{22}\]
In contrast, if \(\omega_{a,1}\) is much closer to \(\omega_{a,0}\) than \(\omega_{b}\), that is \(\omega_{b}\) is close to both of \(\omega_{a,1}\) and \(\omega_{a,0}\), the resonance takes place in both superposed states simultaneously. Then the resonant contribution from the gravitational coupling to
Figure 3: The time dependence of the gravitational contribution to the visibility, \(\mathcal{V}_{c}/\mathcal{V}_{c}^{(0)}-1\). The parameters are set as in Eqs. (18) and (19). The left panel shows the result in Eq. (15) when we neglect the higher order contribution of \(\mathcal{O}[\theta_{a}^{2}]\), or namely \(\omega_{a,0}=\omega_{a,1}\). We see a periodic recoherence whose amplitude is order estimated as \(\mathcal{O}[4g^{2}\lambda_{0}^{2}]\). The right panel displays the result in Eq. (16), which takes into account the \(\mathcal{O}[\theta_{a}^{2}]\) contribution and \(\omega_{a,0}<\omega_{a,1}\). Its amplitude \(\mathcal{O}\left[4g(\alpha+\lambda_{0})(1-\omega_{a,0}/\omega_{a,1})\right] \times\omega_{a,0}t\) in this plot is larger than one in the left panel only by a factor of \(\sim 10^{4}\). However, we will see a much greater growth of the visibility change at a sufficiently longer time scale in Fig. 4.
the visibility is suppressed, because this effect does not distinguish the two superposed states labeled by \(n=0\) and \(n=1\). To determine which of the above two cases happens, we compare \(\epsilon\) to \(1-\omega_{a,0}/\omega_{a,1}\). For \(\epsilon\ll 1-\omega_{a,0}/\omega_{a,1}\), the exclusive resonance occurs, while the simultaneous resonance takes place for \(\epsilon\gg 1-\omega_{a,0}/\omega_{a,1}\). We will confirm this physical argument by analytic and numerical investigations below.
Assuming \(\alpha=0\) and \(\beta\in\mathbb{R}\) to simplify the expression, Eq. (16) reduces to
\[\mathcal{V}_{c}(t)\approx\mathcal{V}_{c}^{(0)}(t)\left\{1-2g\lambda_{0}\beta\omega_{a,0}\left(\frac{\sin[\omega_{1,-}t/2]}{\omega_{1,-}}-\frac{\sin[\omega_{0,-}t/2]}{\omega_{0,-}}\right)\left(\sin\left[\frac{\omega_{1,+}t}{2}\right]+\sin\left[\frac{\omega_{1,-}t}{2}\right]\right)\right\} \tag{23}\]
The second term denotes the gravitational contribution to the visibility in \(\mathcal{O}[g]\) and can exhibit the resonance. If we make a measurement at some time around \(t\approx 1/\omega_{n,-}\), the resonance effect would be significant. Particularly, at around the time \(t\approx\pi/\omega_{1,-}\), we obtain
\[\frac{\mathcal{V}_{c}(t)}{\mathcal{V}_{c}^{(0)}(t)}\approx 1-\left(1+\sin\left[\frac{\omega_{1,+}t}{2}\right]\right)\times\left\{\begin{array}{ll}2g\lambda_{0}\beta/\epsilon&(\epsilon\ll 1-\frac{\omega_{a,0}}{\omega_{a,1}}:\text{ Exclusive resonance})\\ 2g\lambda_{0}\beta\left(1-\frac{\omega_{a,0}}{\omega_{a,1}}\right)/\epsilon^{2}&(\epsilon\gg 1-\frac{\omega_{a,0}}{\omega_{a,1}}:\text{ Simultaneous resonance})\end{array}\right., \tag{24}\]
where we used \(\omega_{0,-}/\omega_{a,0}=\epsilon-(1-\omega_{a,0}/\omega_{a,1})+\mathcal{O}\left[(1-\omega_{a,0}/\omega_{a,1})^{2}\right]\) to obtain the expression in the lower case. The upper case indicates the exclusive resonance and the lower case corresponds to the simultaneous resonance. Compared to the upper case, the lower case is suppressed by a factor of \((1-\omega_{a,0}/\omega_{a,1})/\epsilon\ll 1\).
Fig. 5 shows the resonance behavior of the visibility for the varying frequency matching parameter \(\epsilon\). A gray line denotes the absolute value of the relative modification of the visibility due to gravity, \(|\mathcal{V}_{c}/\mathcal{V}_{c}^{(0)}-1|\), at the observation time \(t=\pi/\omega_{1,-}\). Red and blue lines represent the absolute value of the last factor in Eq. (24) for \(\epsilon\ll 1-\omega_{a,0}/\omega_{a,1}\) and \(\epsilon\gg 1-\omega_{a,0}/\omega_{a,1}\), respectively, namely \(|2g\lambda_{0}\beta/\epsilon|\) and \(|2g\lambda_{0}\beta(1-\omega_{a,0}/\omega_{a,1})/\epsilon^{2}|\). Note that we set \(\alpha=0\) to justify the assumption of Eq. (23). The other parameters are chosen as in Eqs. (18) and (19). The red and blue lines agree well with the numerical calculation in the corresponding parameter regions. As we expected, the resonance enhancement is characterized by \(1/\epsilon\) on the left side and by \(1/\epsilon^{2}\) on the right side. We also observe that the transition takes place when the \(\omega_{a,0}\) resonance becomes comparable with the \(\omega_{a,1}\) resonance at \(\epsilon\approx(1-\omega_{a,0}/\omega_{a,1})=2.8\times 10^{-10}\).
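The crossover between the two regimes can be checked by evaluating the exact resonant factor of Eq. (23) at \(t=\pi/\omega_{1,-}\) for a range of \(\epsilon\) and comparing it with the asymptotic scalings of Eq. (24); a short sketch, with frequencies measured in units of \(\omega_{a,1}\) and \(\delta:=1-\omega_{a,0}/\omega_{a,1}\), is given below.

```python
import numpy as np

delta = 2.8e-10                                  # 1 - omega_{a,0}/omega_{a,1}, cf. Eq. (19)
for eps in np.logspace(-13, -7, 7):
    w1m, w0m = eps, eps - delta                  # omega_{1,-} and omega_{0,-} (units of omega_{a,1})
    t = np.pi / w1m                              # measurement time of Eq. (24)
    envelope = (1 - delta) * abs(np.sin(w1m * t / 2) / w1m - np.sin(w0m * t / 2) / w0m)
    asymptote = 1 / eps if eps < delta else delta / eps**2
    print(f"eps = {eps:.0e}:  exact factor {envelope:.2e},  asymptote {asymptote:.2e}")
# Multiplying by 2*g*lambda_0*beta*(1 + sin[omega_{1,+} t/2]) gives the visibility shift of Eq. (24).
```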
In Fig. 6, we present the gravitational contribution to the visibility with parameters yielding the resonance effect. We take \(\omega_{b}/\omega_{a,0}\approx 1+2.7\times 10^{-10}\), which corresponds to \(\epsilon=10^{-11}\); this indicates the exclusive resonance of \(\omega_{a,1}\) and \(\omega_{b}\), which we find in the left region of Fig. 5. Otherwise, we adopt the parameters in Eqs. (18) and (19). The left and the right panels show the \(\omega_{a,0}=\omega_{a,1}\) case and the \(\omega_{a,0}<\omega_{a,1}\) case, respectively. Comparing with Figs. 3 and 4, we see the significant enhancement of the amplitude in both panels arising from the resonance effect. Note that the resonance also occurs even if we ignore the second order of \(\theta_{a}\), as seen in the left panel of Fig. 6. This is because the visibility correction in Eq. (15) is given by \(\hat{\mathcal{J}}\), which also contains a term inversely proportional to \(\omega_{1,-}=\omega_{0,-}\) (see Eq. (12)). This leads to a periodic enhancement in the visibility change of \(\mathcal{O}\left[4g^{2}\lambda_{0}^{2}/(\omega_{0,-}/\omega_{a,0})^{2}\right]\approx 1.3\times 10^{-6}\) with a time scale \(\omega_{a,0}t\approx(\omega_{0,-}/\omega_{a,0})^{-1}\approx 3.8\times 10^{9}\). In the right panel of Fig. 6, we see an even larger amplitude of the
Figure 4: The gravitational contribution to the visibility given in Eq. (16) is shown for a longer time scale. We consider the higher-order optomechanical contribution \(\mathcal{O}[\theta_{a}^{2}]\), which gives \(\omega_{a,0}<\omega_{a,1}\). At the dephasing time derived in Eq. (20), we observe a large amplification of the gravitational signal about \(\mathcal{O}[4g(\alpha+\lambda_{0})]\), denoted by the first order of the gravitational coupling \(g\). In contrast, the \(\omega_{a,0}=\omega_{a,1}\) result shown in the left panel of Fig. 3 was given by the second-order of \(g\), namely \(\mathcal{O}[4g^{2}\lambda_{0}^{2}]\). Hence, the signal is enhanced due to the reduction of the gravitational coupling order from \(\mathcal{O}[g^{2}]\) into \(\mathcal{O}[g]\) by taking the higher-order contribution \(\mathcal{O}[\theta_{a}^{2}]\) into account.
visibility change, which reaches the percent level. The resonance effect amplifies the result of Fig. 4, whose amplitude was \(\mathcal{O}[4g(\alpha+\lambda_{0})]\), by the factor of \(\mathcal{O}[1/\epsilon]\) and achieves the amplitude of \(\mathcal{O}\left[4g(\alpha+\lambda_{0})/\epsilon\right]\approx 7.5\times 10^{-2}\) periodically with a time scale \(\omega_{a,0}t\approx 1/\epsilon=10^{11}\).
To see the resonant amplification of \(1/\epsilon\) as in the right panel of Fig. 6, the system is required to maintain its quantum coherence for about a year, \(t\approx 1/(\omega_{a,0}\epsilon)\approx 3.3\times 10^{7}[s]\), with our parameter choice. However, this is technically difficult to achieve at present due to the environmental decoherence of the quantum system. Also, it is challenging to tune the two frequencies to be sufficiently close with the high accuracy of \(\epsilon=10^{-11}\). These difficulties indicate that there is a lower bound on \(\epsilon\) in a realistic situation. Let us suppose that \(\epsilon\) is fixed at some value in the right region of Fig. 5, regarded as the possible lower bound in the setup, and control the parameter \(1-\omega_{a,0}/\omega_{a,1}\) to obtain the best resonance enhancement. As we raise \(1-\omega_{a,0}/\omega_{a,1}\), the blue plot in Fig. 5 shifts upward, which means that we gain more enhancement at fixed \(\epsilon\). Hence, if the experimental setup can achieve \(\epsilon\left(>1-\omega_{a,0}/\omega_{a,1}\right)\), we observe the signal enhancement of \(2g\lambda_{0}\beta(1-\omega_{a,0}/\omega_{a,1})/\epsilon^{2}\) due to the resonance effect, and this amplification is improved by setting a larger \(1-\omega_{a,0}/\omega_{a,1}\).
To summarize the results of Sec. V and Sec. VI, we found two ways to enhance the gravitational contribution to the visibility. First, we take the second-order term in the optomechanical coupling into account, which leads to \(\omega_{a,0}<\omega_{a,1}\). Then, the visibility is given by the first order of the gravitational coupling \(g\) at \(t\approx(\omega_{a,1}-\omega_{a,0})^{-1}\), while it appears from its second order in the previous works [14]. This first effect can amplify the signal by a factor of \(1/(g\lambda_{0})\). Second, by adjusting the frequencies of the two oscillators to be close enough, \(\epsilon:=1-\omega_{b}/\omega_{a,1}\ll 1\), we gain a resonance effect depending on the parameter region. For \(\epsilon\ll 1-\omega_{a,0}/\omega_{a,1}\), rod A resonates with rod B only when the photon enters cavity 1, and the signal gains a \(1/\epsilon\) amplification due to this exclusive resonance. In contrast, for \(\epsilon\gg 1-\omega_{a,0}/\omega_{a,1}\), each oscillating mode of rod A resonates with rod B, and the signal is amplified by about \((1-\omega_{a,0}/\omega_{a,1})/\epsilon^{2}\) due to the simultaneous resonance. Finally, the signal on the right panel in Fig. 6 is amplified by a factor of \(1/(g\lambda_{0}\epsilon)\sim 10^{24}\) compared to the original result on the left panel in Fig. 3.
## VII Gravity-induced entanglement
The quantum entanglement [44] created by gravity between systems is one of the major targets to probe the quantum feature of gravity [4; 5]. In this section, we adopt the entanglement negativity as a measure of quantum entanglement; the negativity of a bipartite state is defined as the absolute value of the sum of the negative eigenvalues of the partially transposed density matrix [45; 46; 47]. This is closely related to the maximum number of distillable Bell pairs in the system. In particular, the negativity of a state vanishes when the state is separable, and takes the value \(1/2\) when the state is a Bell state. We evaluate the negativity between rod B and the other systems, which should be induced by the quantum gravitational interaction between the two rods.
To obtain the negativity, we calculate the partially transposed total density matrix \(\hat{\rho}^{\rm T_{B}}(t)\) and expand it with respect to the small parameter \(g\). Then, we compute its eigenvalues up to the first order of \(g\). The negativity between
rod B and the others is given by the absolute value of the sum of the negative eigenvalues of \(\hat{\rho}^{\rm T_{B}}(t)\). As a result, we obtain the following expressions,
\[\mathcal{N}_{B:A+c}=2g\sqrt{\sum_{n=0,1}{}_{a}\langle\alpha|e^{i \tilde{H}_{a,n}t/\hbar}\hat{\mathcal{K}}_{n}^{\dagger}(t)\hat{\mathcal{K}}_{n}( t)e^{-i\hat{H}_{a,n}t/\hbar}|\alpha\rangle_{a}}\,, \tag{25}\] \[\hat{\mathcal{K}}_{n}(t)=\sqrt{\frac{\omega_{a,0}^{3}}{\omega_{a, n}}}\left(\frac{\sin[\omega_{n,+}t/2]}{\omega_{n,+}}e^{i\omega_{n,+}t/2}\hat{a}_{n}+ \frac{\sin[\omega_{n,-}t/2]}{\omega_{n,-}}e^{-i\omega_{n,-}t/2}\hat{a}_{n}^{ \dagger}+n\lambda_{0}\left(\frac{\omega_{a,0}}{\omega_{a,1}}\right)^{3/2} \frac{F(t)}{\omega_{b}}\right)\,. \tag{26}\]
Even in the limit of \(\omega_{a,1}\rightarrow\omega_{a,0}\), we find a non-zero value of the negativity (25) in the first order of \(g\), although the gravitational contribution to the visibility appears only from its second order. This implies that the entanglement generation is reflected in the gravitational correction to the visibility only in a very suppressed way when we ignore the higher order optomechanical contribution \(\mathcal{O}[\theta_{a}^{2}]\). Meanwhile, the operator \(\hat{\mathcal{K}}_{n}\) is closely related to \(\hat{\mathcal{I}}_{n}\) and \(\hat{\mathcal{J}}\), which are used in the calculation of the visibility and given in Eqs. (11) and (12), as \(\hat{\mathcal{I}}_{n}+n\hat{\mathcal{J}}=\hat{\mathcal{K}}_{n}\hat{b}+\hat{\mathcal{K}}_{n}^{\dagger}\hat{b}^{\dagger}\).
To explore how the negativity and the visibility are related, we simplify Eq. (25) under several assumptions. We focus on the situation where \(\omega_{a,1}\) is much closer to \(\omega_{b}\) than \(\omega_{a,0}\), and the resonance due to \(\omega_{a,1}\approx\omega_{b}\) exclusively takes place. Its condition is given by \(1\gg 1-\omega_{a,0}/\omega_{a,1}\gg\epsilon\). In addition, we assume \(\alpha=0\) for simplicity. We also make use of the relation \(\lambda_{0}^{2}\gg 1-\omega_{a,0}/\omega_{a,1}\), which means that the second-order contribution of \(\theta_{a}\) is sub-dominant compared to its first order contribution. Then the negativity reduces to the following form.
\[\mathcal{N}_{B:A+c}\approx 2g\,\omega_{a,0}\lambda_{0}\left|\frac{\sin[ \omega_{1,-}t/2]}{\omega_{1,-}}\right|\,. \tag{27}\]
Here, we see that the resonance effect amplifies the negativity, if we wait until \(t\approx 1/\omega_{1,-}\), in the same way as the visibility. By comparing this simplified form of negativity to the visibility in Eq. (23) under the assumption \(1\gg 1-\omega_{a,0}/\omega_{a,1}\gg\epsilon>0\), we acquire a relationship between visibility and negativity as
\[\mathcal{V}_{c}(t)\approx\mathcal{V}_{c}^{(0)}(t)\left[1-\beta\, \mathcal{N}_{B:A+c}(t)\times\left\{\left|\sin\left[\frac{\omega_{1,-}t}{2} \right]\right|+\mathrm{sgn}\left[\sin\left[\frac{\omega_{1,-}t}{2}\right] \right]\sin\left[\frac{\omega_{1,+}t}{2}\right]\right\}\right] \tag{28}\]
The second term on the right-hand side is proportional to the negativity, and it clearly indicates that the visibility of the photon system alters due to the gravity-induced entanglement between rod B and other systems. Moreover, the last term depending on \(\omega_{1,+}\) is a highly oscillating mode, and the visibility behavior in a long time scale is almost determined by \(|\sin[\omega_{1,-}t/2]/\omega_{1,-}|\) under the assumptions we imposed. Hence, when the resonance effect of the visibility exists, the production of the gravity-induced entanglement is also amplified due to the resonance.
Figure 6: The gravitational contribution to the visibility \(\mathcal{V}_{c}/\mathcal{V}_{c}^{(0)}-1\) enhanced by the resonance effect for the long time scale. We set parameters as \(\epsilon:=1-\omega_{b}/\omega_{a,1}=10^{-11}\) to induce the resonance and \(\alpha=0\) for simplicity. Otherwise, we choose the same parameters as given in Fig. 3 and 4. The left panel shows the case when we ignored the higher order contribution \(\mathcal{O}[\theta_{a}^{2}]\), i.e. \(\omega_{a,0}=\omega_{a,1}\), which is evaluated using Eq. (15). This corresponds to the resonant version of the left panel in Fig. 3, and we see a resonance enhancement of \(\mathcal{O}[(\omega_{0,-}/\omega_{a,0})^{-2}]\sim 10^{20}\) compared to the result given in the previous section. The right panel shows the result when we take \(\mathcal{O}[\theta_{a}^{2}]\) into account, i.e. \(\omega_{a,0}<\omega_{a,1}\), which is given in Eq. (16). This result is the resonant version of Fig. 4, and is about \(\mathcal{O}[\epsilon^{-1}]\sim 10^{11}\) times larger than the result in Fig. 4.
Comparing Fig. 7 with the right panel of Fig. 6, the time scale on which the negativity grows is approximately the same as that on which \(\mathcal{V}_{c}/\mathcal{V}_{c}^{(0)}-1\) takes a large negative value, that is, on which the visibility \(\mathcal{V}_{c}\) degrades due to gravity. This means that gravity-induced entanglement can lead to the decoherence of the photon.
In Fig. 7, we present the time dependence of the negativity between rod B and the others in the resonant case. A red line denotes the \(\omega_{a,0}<\omega_{a,1}\) case, while a blue line denotes the \(\omega_{a,0}=\omega_{a,1}\) case. We take the resonance parameters which are the same as in Fig. 6; \(\lambda_{0}=4.5,\ \omega_{b}/\omega_{a,0}\approx 1+2.7\times 10^{-10}\) (i.e. \(\epsilon=10^{-11}\)), \(1-\omega_{a,0}/\omega_{a,1}=2.8\times 10^{-10},\ g=5.1\times 10^{-14},\ \alpha=0,\ \beta=1\). For the \(\omega_{a,0}<\omega_{a,1}\) case, we see the amplitude is enhanced by \(\mathcal{O}[2g\lambda_{0}/\epsilon]\approx 3.1\times 10^{-2}\) periodically with a time scale \(\omega_{a,0}t\approx 1/\epsilon=10^{11}\). Also, for the \(\omega_{a,0}=\omega_{a,1}\) case, the resonance enhancement is about \(\mathcal{O}[2g\lambda_{0}(\omega_{a,0}/\omega_{0,-})]\approx 1.2\times 10^{-3}\) with a time period \(\omega_{a,0}t\approx\omega_{a,0}/\omega_{0,-}\approx 3.8\times 10^{9}\). It should be noted that the negativity is given by the first order of the gravitational coupling \(g\) even for the \(\omega_{a,0}=\omega_{a,1}\) case, as shown in Eq. (25), while the visibility appears from its second order. This implies that the entanglement generation is not fully captured in the visibility when we ignore the higher order optomechanical contribution \(\mathcal{O}[\theta_{a}^{2}]\). Comparing the two cases, we find that the amplitude of the \(\omega_{a,0}<\omega_{a,1}\) case is about 10 times larger than that of the \(\omega_{a,0}=\omega_{a,1}\) case, which arises from the exclusive resonance effect as in Fig. 6. Also, by comparing Figs. 6 and 7, we find that the visibility decoheres as the negativity increases, as we expected from Eq. (28). Physically, this result indicates that the optomechanical system decoheres due to the entanglement generation between rod B and the photon systems.
## VIII Discussion and Conclusion
Today, numerous experimental approaches are proposed to discover the quantum aspects of gravity. However, nobody has observed a quantum gravitational signal yet. Recently, inspired by the experimental progress in optomechanical systems, the optomechanical Cavendish experiment was proposed as a realistic way to probe the quantum nature of gravity [14]. Based on the previous research [14; 17], we considered an experimental setup with an optical cavity system and two mechanical rods A and B. In the setup, a cavity photon is coupled to rod A, and the two rods A and B interact gravitationally. We propose to read the quantum gravity effect from the interference visibility of the photon. In contrast to the previous research [14; 17], it should be remarked that we treat up to the second order of the oscillation angle of rod A, \(\theta_{a}\), which corresponds to a higher order of the optomechanical coupling between the photon and rod A. At first order in the optomechanical coupling, the rod A state evolves into a coherent state due to the photon pressure when the photon hits the oscillator of rod A. Furthermore, in our present analysis, the effective frequency of rod A is altered at second order in the optomechanical interaction, depending on the photon number; it is \(\omega_{a,1}\) if the single photon hits rod A, and \(\omega_{a,0}\) if not.
As a result, we found two effects that amplify the gravitational signal in the visibility of the quantum optomechanical system. First, we showed that the higher-order contribution of \(\theta_{a}\) makes the visibility more sensitive to the quantum gravity effect. The gravitational modification of the visibility is then given at first order in the gravitational coupling \(g\), while it appears only at second order in \(g\) in the previous works [14; 17]. Another way to enhance the quantum gravity signal is to make use of the resonance. Since the setup contains two oscillators A and B, the resonance occurs
Figure 7: Time dependence of the quantum entanglement generated between rod B and the others (i.e. rod A and the photon) in the resonant scenario. The vertical axis denotes a measure of entanglement called negativity, given in Eq. (25), whose value is zero for a separable state. A red line shows the result when we consider the higher-order contribution \(\mathcal{O}[\theta_{a}^{2}]\), which leads to \(\omega_{a,0}<\omega_{a,1}\). A blue line shows the case when we ignore \(\mathcal{O}[\theta_{a}^{2}]\), or equivalently \(\omega_{a,0}=\omega_{a,1}\). The parameters are the same as in Fig. 6. The negativity behaves along with the visibility in Fig. 6, as expected from their relation Eq. (28). The resonance amplification can be seen in both figures.
when the two frequencies of these oscillators are close enough. We also found a relation between the visibility and the negativity, which reveals that the resonance effect occurs in the visibility and in the negativity at the same time.
By combining the two effects found in this work, we expect to improve the quantum gravity signal significantly in an optomechanical experiment, which may lead to the implementation of the quantum Cavendish experiment in the near future. However, there are still some difficulties that prevent us from fully exploiting this approach. In our analysis, the characteristic times of the two enhancements are given by \(t\approx(\omega_{a,1}-\omega_{a,0})^{-1}\) and \(t\approx(\omega_{b}-\omega_{a,1})^{-1}\), respectively. The frequency difference between \(\omega_{a,0}\) and \(\omega_{a,1}\) is typically tiny, and a large resonance enhancement is realized only for a small matching parameter \(\epsilon:=1-\omega_{b}/\omega_{a,1}\). Hence, to utilize the two enhancements, we need to coherently sustain our system for a long time, which may be a challenging issue. Despite that, our investigation gives remarkable suggestions for enhancing the quantum gravity signal in a conventional experimental setup. In particular, the resonance effect can be very useful not only in our setup but also in many systems containing several oscillators.
###### Acknowledgements.
This work is supported by the Nagoya University Interdisciplinary Frontier Fellowship (Y.K.), JSPS KAKENHI grants 20H05854, 23K03424 (T.F.) and 23K13103 (A.M.).
## Appendix A Gravitational interaction Hamiltonian
We will show how to obtain the gravitational interaction Hamiltonian in Eq. (3). We first assume \(1\gg h/L\gg\theta_{b}-\theta_{a}\). This assumption indicates that the vertical separation of two rods is much smaller than the length of each rod to focus on gravity mediating only between mirrors located near each other. Also, the oscillation of rods is negligible compared to the vertical separation of rods. Considering a quantized form of Newtonian gravity between mirrors of rod A and B with the above assumption, we get
\[\frac{-2GmM}{\sqrt{h^{2}+\left(2L\sin[(\hat{\theta}_{b}-\hat{\theta}_{a})/2] \right)^{2}}}\approx\frac{GmML^{2}}{h^{3}}\left(\hat{\theta}_{a}^{2}+\hat{ \theta}_{b}^{2}-2\hat{\theta}_{a}\hat{\theta}_{b}\right), \tag{10}\]
where we neglected a constant term. The first and second terms in the last line serve to shift the original oscillation frequency of each rod. The last term, which couples the angular positions of the two rods, induces gravity-induced entanglement between them. We inserted this expression into the first line of Eq. (3).
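As a quick symbolic cross-check of this expansion (not part of the original derivation), the following sympy sketch expands the Newtonian potential to second order in the oscillation angles and compares it with the quadratic form above; the symbol names are illustrative.

```python
import sympy as sp

G, m, M, h, L, t_a, t_b, eps = sp.symbols('G m M h L theta_a theta_b epsilon', positive=True)

# full Newtonian potential between the two nearby mirrors
V = -2*G*m*M / sp.sqrt(h**2 + (2*L*sp.sin((t_b - t_a)/2))**2)

# expand to second order in the (small) oscillation angles
V_expanded = V.subs({t_a: eps*t_a, t_b: eps*t_b}).series(eps, 0, 3).removeO()
V_expanded = sp.expand(V_expanded.subs(eps, 1))

# quadratic piece quoted above; the constant -2GmM/h is the neglected term
V_quadratic = G*m*M*L**2/h**3 * (t_a**2 + t_b**2 - 2*t_a*t_b)

print(sp.simplify(V_expanded - (-2*G*m*M/h + V_quadratic)))   # -> 0
```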
## Appendix B Time evolved state
Here, we derive a time evolution of the total state given in Eq. (10) and show its explicit form. Using the Hamiltonian in Eq. (3) and the initial state in Eq. (9), the time evolved state is given by
\[\ket{\psi(t)}=\frac{e^{-i\omega_{c}t}}{\sqrt{2}}\sum_{n=0,1}\ket{n,1-n}_{c}e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b}+\hat{H}_{g}\right)t/\hbar}\ket{\alpha}_{a}\ket{\beta}_{b} \tag{11}\]
In the following, we focus on the state of rods A and B, written as \(e^{-i\left(\hat{H}_{a,n}+\hat{H}_{b}+\hat{H}_{g}\right)t/\hbar}\ket{\alpha}_{a}\ket{\beta}_{b}\). First, we move to the interaction picture and consider up to the first order of the gravitational coupling constant \(g\). We denote the free evolution Hamiltonian without gravity and the gravitational interaction Hamiltonian as follows.
\[\hat{H}_{n}^{(0)}=\hat{H}_{a,n}+\hat{H}_{b},\quad\hat{H}_{g,n}^{I}(t):=e^{i\hat{H}_{n}^{(0)}t/\hbar}\hat{H}_{g}e^{-i\hat{H}_{n}^{(0)}t/\hbar} \tag{12}\]
Using these Hamiltonians, the time-evolved state of rods A and B is rewritten as
\[e^{-i\left(\hat{H}_{n}^{(0)}+\hat{H}_{g}\right)t/\hbar}\ket{\alpha}_{a}\ket{\beta}_{b} =e^{-i\hat{H}_{n}^{(0)}t/\hbar}\ \mathcal{T}\left[\exp\left[-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\hat{H}_{g,n}^{I}(t^{\prime})\right]\right]\ket{\alpha}_{a}\ket{\beta}_{b}\] \[\approx e^{-i\hat{H}_{n}^{(0)}t/\hbar}\left(1-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\hat{H}_{g,n}^{I}(t^{\prime})\right)\ket{\alpha}_{a}\ket{\beta}_{b}+\mathcal{O}[g^{2}]. \tag{13}\]
In the second line, we keep only the first order in \(g\). By using the following relation, satisfied by the interaction-picture Hamiltonian,
\[e^{-i\hat{H}_{n}^{(0)}t/\hbar}H_{g,n}^{I}(t^{\prime})=H_{g,n}^{I}(t^{\prime}-t)e^ {-i\hat{H}_{n}^{(0)}t/\hbar}, \tag{10}\]
we obtain the time-evolved state as
\[e^{-i\left(\hat{H}_{n}^{(0)}+\hat{H}_{g}\right)t/\hbar}\left|\alpha\right\rangle_{a}\left|\beta\right\rangle_{b}\approx\left(1-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\hat{H}_{g,n}^{I}(t^{\prime}-t)\right)e^{-i\hat{H}_{a,n}t/\hbar}\left|\alpha\right\rangle_{a}e^{-i\hat{H}_{b}t/\hbar}\left|\beta\right\rangle_{b}. \tag{11}\]
Next, we investigate the explicit form of the free evolution state of rod A, \(e^{-i\hat{H}_{a,n}t/\hbar}\left|\alpha\right\rangle_{a}\), contained in Eq. (11). Beforehand, we should note that the initial coherent state \(|\alpha\rangle_{a}\) is an eigenstate of \(\hat{a}_{0}\), but not of \(\hat{a}_{1}\). As we see in the following, \(|\alpha\rangle_{a}\) is regarded as a squeezed coherent state in terms of \(\hat{a}_{1}\). The relationship between \(\hat{a}_{0}\) and \(\hat{a}_{1}\) is given by
\[\hat{a}_{1}=\hat{S}[\zeta_{1}]\hat{a}_{0}\hat{S}^{\dagger}[\zeta_{1}]=\cosh[\zeta_{1}]\hat{a}_{0}+\sinh[\zeta_{1}]\hat{a}_{0}^{\dagger}, \tag{12}\] \[\zeta_{n}:=-\frac{1}{2}\log\left[\frac{\omega_{a,0}}{\omega_{a,n}}\right],\qquad\hat{S}[\xi]:=\exp\left[\frac{1}{2}\left(\xi^{*}\hat{a}_{0}^{2}-\xi\hat{a}_{0}^{\dagger 2}\right)\right]=\exp\left[\frac{1}{2}\left(\xi^{*}\hat{a}_{1}^{2}-\xi\hat{a}_{1}^{\dagger 2}\right)\right]. \tag{13}\]
Here, \(\hat{S}\) is a squeezing operator and \(\zeta_{n}\) is a squeezing parameter. This leads to the following relation connecting the two vacuum states of \(\hat{a}_{0}\) and \(\hat{a}_{1}\).
\[|0\rangle_{a,0}=\hat{S}[-\zeta_{1}]|0\rangle_{a,1},\qquad\text{ where}\quad\hat{a}_{0}|0\rangle_{a,0}=\hat{a}_{1}|0\rangle_{a,1}=0 \tag{14}\]
Furthermore, the above equation is extended to a relation connecting a coherent state of \(\hat{a}_{0}\) to a squeezed coherent state of \(\hat{a}_{1}\).
\[|\alpha\rangle_{a}=|\alpha\rangle_{a,0}=\hat{D}_{n}\left[\alpha_{n}\right]\,\hat{S}\left[-\zeta_{n}\right]|0\rangle_{a,n}=|\alpha,-\zeta_{1}\rangle_{a,1}, \tag{15}\] \[\hat{D}_{n}[\eta]:=\exp\left[\eta\hat{a}_{n}^{\dagger}-\eta^{*}\hat{a}_{n}\right],\qquad\alpha_{n}:=\cosh\left[\zeta_{n}\right]\alpha+\sinh\left[\zeta_{n}\right]\alpha^{*},\qquad|\eta,\xi\rangle_{a,n}=\hat{D}_{n}\left[\eta\right]\,\hat{S}\left[\xi\right]|0\rangle_{a,n} \tag{16}\]
Here, \(\hat{D}_{n}\) and \(\alpha_{n}\) are the displacement operator and the coherent parameter defined in terms of \(\hat{a}_{n}\), respectively, and \(|\eta,\xi\rangle_{a,n}\) is the squeezed coherent state with respect to \(\hat{a}_{n}\). This relation indicates that the initial coherent state of \(\hat{a}_{0}\) is equivalent to a squeezed coherent state of \(\hat{a}_{1}\). Since the Hamiltonian of rod A contains both \(\hat{a}_{0}\) and \(\hat{a}_{1}\), we need to solve the time evolution of a squeezed coherent state in general. The squeezing effect in our calculation arises because we consider the higher-order optomechanical contribution \(\mathcal{O}[\theta_{a}^{2}]\): two different frequencies \(\omega_{a,0},\ \omega_{a,1}\) are introduced, and two different annihilation operators \(\hat{a}_{0},\ \hat{a}_{1}\) appear in the Hamiltonian.
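The Bogoliubov relation in Eq. (12) can be checked numerically in a truncated Fock space. The following numpy sketch (not from the paper; the truncation and the squeezing value are illustrative) builds \(\hat{S}[\zeta]\) with the convention of Eq. (13) and compares \(\hat{S}[\zeta]\hat{a}\hat{S}^{\dagger}[\zeta]\) with \(\cosh[\zeta]\hat{a}+\sinh[\zeta]\hat{a}^{\dagger}\) on the low-lying block.

```python
import numpy as np
from scipy.linalg import expm

N = 80                                        # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.conj().T

zeta = 0.3                                    # real squeezing parameter, like zeta_n in Eq. (13)
S = expm(0.5*(np.conj(zeta)*a@a - zeta*ad@ad))    # S[zeta] = exp[(zeta* a^2 - zeta a^dag^2)/2]

lhs = S @ a @ S.conj().T                      # S[zeta] a S^dagger[zeta]
rhs = np.cosh(zeta)*a + np.sinh(zeta)*ad      # Bogoliubov form, Eq. (12)

k = 20                                        # compare away from the truncation edge
print(np.max(np.abs(lhs[:k, :k] - rhs[:k, :k])))   # small, limited only by truncation
```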
The free time evolution operator of rod A is rewritten as follows.
\[e^{-i\hat{H}_{a,n}t/\hbar}=e^{i\phi_{n}^{\prime}}\hat{D}_{n}\left[n\lambda_{n} \right]\exp\left[-i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}\right]\hat{D }_{n}^{\dagger}\left[n\lambda_{n}\right],\quad\phi_{n}^{\prime}:=\omega_{a,n} \left(n\lambda_{n}^{2}-\frac{1}{2}\right)t \tag{17}\]
This expression clearly shows that the original harmonic oscillator potential \(e^{-i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}}\) is shifted horizontally by the coherent parameter \(n\lambda_{n}\). Combining Eq. (15) and (17), the free evolution state of rod A is given as
\[e^{-i\hat{H}_{a,n}t/\hbar}|\alpha\rangle_{a,0} =e^{i\phi_{n}^{\prime}}\hat{D}_{n}\left[n\lambda_{n}\right]\exp \left[-i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}\right]\hat{D}_{n}^{ \dagger}\left[n\lambda_{n}\right]\hat{D}_{n}\left[\alpha_{n}\right]\,\hat{S} \left[-\zeta_{n}\right]|0\rangle_{a,n}\] \[=e^{i\phi_{n}}\left|\Phi_{a,n},e^{-2i\omega_{a,n}t}\zeta_{n} \right\rangle_{a,n}, \tag{18}\]
where
\[\phi_{n}:=\phi_{n}^{\prime}+n\lambda_{n}\left\{\text{Im}\left[\alpha_{a,n} \left(1-e^{-i\omega_{a,n}t}\right)\right]-\lambda_{n}\sin[\omega_{a,n}t] \right\},\qquad\Phi_{a,n}:=e^{-i\omega_{a,n}t}\alpha_{n}+n\lambda_{n}\left(1-e^{ -i\omega_{a,n}t}\right). \tag{19}\]
From the first line to the second line in Eq. (18), we make use of the following relation.
\[e^{-i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}}\hat{D}_{n}\left[\eta\right] =\hat{D}_{n}\left[e^{-i\omega_{a,n}t}\eta\right]e^{-i\omega_{a,n}t\hat{a}_{n}^{ \dagger}\hat{a}_{n}},\qquad e^{-i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}} \hat{S}\left[\xi\right]=\hat{S}\left[e^{-2i\omega_{a,n}t}\xi\right]\,e^{-i \omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}} \tag{20}\]
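Both identities in Eq. (20) can likewise be verified numerically; the sketch below (not from the paper) uses the displacement and squeezing conventions of Eqs. (13) and (16) in a truncated Fock space, again comparing only the low-lying block.

```python
import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
num = ad @ a                                   # number operator

D = lambda eta: expm(eta*ad - np.conj(eta)*a)             # displacement, Eq. (16)
S = lambda xi:  expm(0.5*(np.conj(xi)*a@a - xi*ad@ad))    # squeezing, Eq. (13)

w, t, eta, xi = 1.3, 0.7, 0.4 + 0.2j, 0.25     # illustrative values
U = expm(-1j*w*t*num)                          # free evolution e^{-i w t a^dag a}

k = 20                                         # low-lying block only
print(np.max(np.abs((U @ D(eta) - D(np.exp(-1j*w*t)*eta) @ U)[:k, :k])))   # ~ 0
print(np.max(np.abs((U @ S(xi)  - S(np.exp(-2j*w*t)*xi)  @ U)[:k, :k])))   # ~ 0
```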
With a similar calculation, we obtain the free time evolution of rod B as follows.
\[e^{-i\hat{H}_{b}t/\hbar}|\beta\rangle_{b}=|\Phi_{b}\rangle,\quad\Phi_{b}:=e^{-i \omega_{b}t}\beta \tag{21}\]
Next, we show the explicit form of the gravitational interaction part of the time evolution operator, \(\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\hat{H}^{I}_{g,n}(t^{\prime}-t)\). First, we rewrite the gravitational interaction Hamiltonian in terms of \(\hat{a}_{n}\).
\[\hat{H}_{g}:=-g\hbar\omega_{a,0}(\hat{a}_{0}^{\dagger}+\hat{a}_{0})(\hat{b}^{ \dagger}+\hat{b})=-g\hbar\sqrt{\omega_{a,n}\omega_{a,0}}(\hat{a}_{n}^{\dagger} +\hat{a}_{n})(\hat{b}^{\dagger}+\hat{b}) \tag{101}\]
Using the above expression and Eq. (100), we get
\[\hat{H}^{I}_{g,n}(t) =-g\hbar\sqrt{\omega_{a,n}\omega_{a,0}}\ \hat{D}_{n}\left[n\lambda_{n}\right]e^{i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}}\hat{D}_{n}^{\dagger}\left[n\lambda_{n}\right](\hat{a}_{n}^{\dagger}+\hat{a}_{n})\hat{D}_{n}\left[n\lambda_{n}\right]e^{-i\omega_{a,n}t\hat{a}_{n}^{\dagger}\hat{a}_{n}}\hat{D}_{n}^{\dagger}\left[n\lambda_{n}\right]\,e^{i\omega_{b}t\hat{b}^{\dagger}\hat{b}}\left(\hat{b}^{\dagger}+\hat{b}\right)e^{-i\omega_{b}t\hat{b}^{\dagger}\hat{b}}\] \[=-g\hbar\sqrt{\omega_{a,n}\omega_{a,0}}\left[\hat{a}_{n}^{\dagger}e^{i\omega_{a,n}t}+\hat{a}_{n}e^{-i\omega_{a,n}t}+2n\lambda_{n}\left(1-\cos[\omega_{a,n}t]\right)\right]\left(\hat{b}^{\dagger}e^{i\omega_{b}t}+\hat{b}e^{-i\omega_{b}t}\right)\]
In the expressions below we will also use the function
\[A[\xi]:=\frac{1+(\xi/|\xi|)\tanh[|\xi|]}{1-(\xi/|\xi|)\tanh[|\xi|]}. \tag{100}\]
Next, we focus on the gravitational contribution to the visibility. As a preliminary, the matrix element of \(\hat{a}\) between general squeezed coherent states is given as follows.
\[\langle\eta^{\prime},\xi^{\prime}|\hat{a}|\eta,\xi\rangle=\mathcal{ E}\left[\eta^{\prime},\xi^{\prime}|\;\eta,\xi\right]\langle\eta^{\prime},\xi^{ \prime}|\eta,\xi\rangle, \tag{101}\] \[\mathcal{E}\left[\eta^{\prime},\xi^{\prime}|\;\eta,\xi\right]:= \frac{(1+A^{*}[\xi^{\prime}])(A[\xi]\mathrm{Re}[\eta]+i\,\mathrm{Im}[\eta])+(1 -A[\xi])(A^{*}[\xi^{\prime}]\mathrm{Re}[\eta^{\prime}]-\mathrm{i}\,\mathrm{Im}[ \eta^{\prime}])}{A[\xi]+A^{*}[\xi^{\prime}]} \tag{102}\]
Also, the inner product of \(\hat{a}^{\dagger}\) is given by
\[\langle\beta^{\prime},\zeta^{\prime}|\hat{a}^{\dagger}|\beta,\zeta\rangle=\mathcal{E}^{*}\left[\beta,\zeta\,|\;\beta^{\prime},\zeta^{\prime}\right]\langle\beta^{\prime},\zeta^{\prime}|\beta,\zeta\rangle. \tag{103}\]
Then, the inner products of the annihilation and creation operators of rod A appearing in the visibility are given by
\[{}_{0}\langle\hat{a}_{1}\rangle_{1} :=\frac{{}_{a}\langle\alpha|\,{}_{b}\langle\beta|e^{i\left(\hat{H}_{a,0}+\hat{H}_{b}\right)t/\hbar}\,\hat{a}_{1}\,e^{-i\left(\hat{H}_{a,1}+\hat{H}_{b}\right)t/\hbar}|\alpha\rangle_{a}|\beta\rangle_{b}}{{}_{a}\langle\alpha|e^{i\hat{H}_{a,0}t/\hbar}\,e^{-i\hat{H}_{a,1}t/\hbar}|\alpha\rangle_{a}}\] \[=\frac{{}_{a,1}\langle\tilde{\Phi}_{a,0},-\zeta_{1}|\,\hat{a}_{1}\,|\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\rangle_{a,1}}{{}_{a,1}\langle\tilde{\Phi}_{a,0},-\zeta_{1}|\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\rangle_{a,1}}=\mathcal{E}\left[\tilde{\Phi}_{a,0},-\zeta_{1}\,|\;\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\right], \tag{104}\] \[{}_{0}\langle\hat{a}_{1}^{\dagger}\rangle_{1} :=\frac{{}_{a}\langle\alpha|\,{}_{b}\langle\beta|e^{i\left(\hat{H}_{a,0}+\hat{H}_{b}\right)t/\hbar}\,\hat{a}_{1}^{\dagger}\,e^{-i\left(\hat{H}_{a,1}+\hat{H}_{b}\right)t/\hbar}|\alpha\rangle_{a}|\beta\rangle_{b}}{{}_{a}\langle\alpha|e^{i\hat{H}_{a,0}t/\hbar}\,e^{-i\hat{H}_{a,1}t/\hbar}|\alpha\rangle_{a}}\] \[=\frac{{}_{a,1}\langle\tilde{\Phi}_{a,0},-\zeta_{1}|\,\hat{a}_{1}^{\dagger}\,|\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\rangle_{a,1}}{{}_{a,1}\langle\tilde{\Phi}_{a,0},-\zeta_{1}|\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\rangle_{a,1}}=\mathcal{E}^{*}\left[\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\,|\;\tilde{\Phi}_{a,0},-\zeta_{1}\right],\] (105) \[{}_{0}\langle\hat{a}_{0}\rangle_{1} :={}_{0}\langle\cosh[\zeta_{1}]\,\hat{a}_{1}-\sinh[\zeta_{1}]\,\hat{a}_{1}^{\dagger}\rangle_{1}=\cosh[\zeta_{1}]\,{}_{0}\langle\hat{a}_{1}\rangle_{1}-\sinh[\zeta_{1}]\,{}_{0}\langle\hat{a}_{1}^{\dagger}\rangle_{1}\] (106) \[{}_{0}\langle\hat{a}_{0}^{\dagger}\rangle_{1} :={}_{0}\langle-\sinh[\zeta_{1}]\,\hat{a}_{1}+\cosh[\zeta_{1}]\,\hat{a}_{1}^{\dagger}\rangle_{1}=-\sinh[\zeta_{1}]\,{}_{0}\langle\hat{a}_{1}\rangle_{1}+\cosh[\zeta_{1}]\,{}_{0}\langle\hat{a}_{1}^{\dagger}\rangle_{1} \tag{107}\]
Similarly, the inner products of rod B operators are given as follows.
\[{}_{0}\langle\hat{b}\rangle_{1}=\Phi_{b},\qquad{}_{0}\langle\hat{b}^{\dagger} \rangle_{1}=\Phi_{b}^{*} \tag{108}\]
Based on these equations, we obtain the inner product of \(\hat{\mathcal{I}}_{n}\) as
\[{}_{0}\langle\hat{\mathcal{I}}_{n}(t)\rangle_{1}=\sqrt{\frac{ \omega_{a,0}^{3}}{\omega_{a,n}}}\left\{\frac{\sin[\omega_{n,+}t/2]}{\omega_{n, +}}\left(e^{-i\omega_{n,+}t/2}\Phi_{b}^{*}\;{}_{0}\langle\hat{a}_{n}^{\dagger} \rangle_{1}+e^{i\omega_{n,+}t/2}\Phi_{b}\;{}_{0}\langle\hat{a}_{n}\rangle_{1} \right)\right.\] \[\left.+\frac{\sin[\omega_{n,-}t/2]}{\omega_{n,-}}\left(e^{-i \omega_{n,-}t/2}\Phi_{b}\;{}_{0}\langle\hat{a}_{n}^{\dagger}\rangle_{1}+e^{i \omega_{n,-}t/2}\Phi_{b}^{*}\;{}_{0}\langle\hat{a}_{n}\rangle_{1}\right)\right\} \tag{109}\]
where the expressions of \({}_{0}\langle\hat{a}_{n}\rangle_{1},\;{}_{0}\langle\hat{a}_{n}^{\dagger}\rangle_{1}\) are shown in Eq. (104)-(107).
At last, by substituting Eq. (109) into Eq. (13), we obtain the final expression for the visibility.
\[\mathcal{V}_{c}(t) =\mathcal{V}_{c}^{(0)}(t)\left(1+2g\;\mathrm{Im}\left[{}_{0} \langle\hat{\mathcal{I}}_{0}^{\dagger}(t)\rangle_{1}-{}_{0}\langle\hat{ \mathcal{I}}_{1}(t)\rangle_{1}\right]\right)+\mathcal{O}[g^{2}] \tag{110}\] \[\approx\mathcal{V}_{c}^{(0)}(t)\left\{1+2g\,\omega_{a,0}\left( \frac{\sin[\omega_{0,+}t/2]}{\omega_{0,+}}D_{0,+}+\frac{\sin[\omega_{1,+}t/2]}{ \omega_{1,+}}D_{1,+}\right.\right.\] \[\left.\left.+\frac{\sin[\omega_{0,-}t/2]}{\omega_{0,-}}D_{0,-}+ \frac{\sin[\omega_{1,-}t/2]}{\omega_{1,-}}D_{1,-}\right)\right\} \tag{111}\]
Here, the coefficient of each term is given by
\[D_{0,\pm} =\sqrt{\frac{\omega_{a,0}}{\omega_{a,1}}}\mathrm{Re}\left[e^{\mp i \omega_{a,0}t/2}\beta\right]\mathrm{Im}\left[{}_{0}\langle\hat{a_{1}}\rangle_{1 }+{}_{0}\langle\hat{a_{1}}^{\dagger}\rangle_{1}\right]\mp\sqrt{\frac{\omega_{a,1}}{ \omega_{a,0}}}\mathrm{Im}\left[e^{\mp i\omega_{a,0}t/2}\beta\right]\mathrm{Re} \left[{}_{0}\langle\hat{a_{1}}\rangle_{1}-{}_{0}\langle\hat{a_{1}}^{\dagger} \rangle_{1}\right] \tag{112}\] \[D_{1,\pm} =-\sqrt{\frac{\omega_{a,1}}{\omega_{a,0}}}\left(\mathrm{Re}\left[e^ {\pm i\omega_{a,1}t/2}\beta\right]\mathrm{Im}\left[{}_{0}\langle\hat{a_{1}} \rangle_{1}+{}_{0}\langle\hat{a_{1}}^{\dagger}\rangle_{1}\right]\pm\mathrm{Im} \left[e^{\pm i\omega_{a,1}t/2}\beta\right]\mathrm{Re}\left[{}_{0}\langle\hat{a_{ 1}}\rangle_{1}-{}_{0}\langle\hat{a_{1}}^{\dagger}\rangle_{1}\right]\right), \tag{113}\]
and \(\mathcal{V}_{c}^{(0)}(t)\) is given in Eq. (109). We see that this explicit form reduces to Eq. (16) when \(\beta\) is a real number.
## Appendix D Negativity between the rod B and other systems
Here, we show the derivation of the negativity between rod B and other systems in section VII, and display its explicit form.
We need the density matrix of the state. To construct it, we define unit orthogonal bases for each subsystem. Since only two kinds of states, \(|\Phi_{b}\rangle\) and \(\hat{b}^{\dagger}|\Phi_{b}\rangle\), appear for rod B in Eq. (149), the basis for the rod B system is given by two orthogonal states.
\[|b_{0}\rangle:=|\Phi_{b}\rangle,\quad|b_{1}\rangle:=\hat{b}^{\dagger}|\Phi_{b} \rangle-\Phi_{b}^{*}|\Phi_{b}\rangle \tag{150}\]
Then, the time-evolved state is rewritten as
\[|\psi(t)\rangle=\frac{1}{\sqrt{2}}e^{-i\omega_{e}t}\left(|\psi_{0}\rangle|b_{0 }\rangle+|\psi_{1}\rangle|b_{1}\rangle\right), \tag{151}\]
where \(|\psi_{j}\rangle\) is the state of rod A and the photon systems
\[|\psi_{0}\rangle =|0\rangle\left\{1+2ig\left(\Phi_{b}\hat{\mathcal{K}}_{0}+\Phi_{b}^{*}\hat{\mathcal{K}}_{0}^{\dagger}\right)\right\}|\Phi_{a,0}\rangle_{a,0}+|1\rangle\left\{1+2ig\left(\Phi_{b}\hat{\mathcal{K}}_{1}+\Phi_{b}^{*}\hat{\mathcal{K}}_{1}^{\dagger}\right)\right\}|\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\rangle_{a,1}, \tag{152}\] \[|\psi_{1}\rangle =2ig\left(|0\rangle\hat{\mathcal{K}}_{0}^{\dagger}|\Phi_{a,0}\rangle_{a,0}+|1\rangle\hat{\mathcal{K}}_{1}^{\dagger}|\Phi_{a,1},e^{-2i\omega_{a,1}t}\zeta_{1}\rangle_{a,1}\right). \tag{153}\]
\(\hat{\mathcal{K}}_{n}\) is defined in Eq. (26). We also introduce unit orthogonal bases for the complement of the rod B system, spanned by \(|\psi_{0}\rangle,\ |\psi_{1}\rangle\).
\[|\bar{b}_{0}\rangle:=|\psi_{0}\rangle,\quad|\bar{b}_{1}\rangle:=\frac{1}{\sqrt {N_{\bar{b}}}}\left(|\psi_{1}\rangle-\langle\psi_{0}|\psi_{1}\rangle|\psi_{0} \rangle\right) \tag{154}\]
\(\sqrt{N_{\bar{b}}}\) is a normalization factor given by
\[\sqrt{N_{\bar{b}}}=2g\sqrt{\sum_{n=0,1}{}_{a}\langle\alpha|e^{i\bar{H}_{a,n}t /\hbar}\hat{\mathcal{K}}_{n}^{\dagger}(t)\hat{\mathcal{K}}_{n}(t)e^{-i\bar{H}_ {a,n}t/\hbar}|\alpha\rangle_{a}}\,. \tag{155}\]
Using the bases introduced above, we construct the density matrix.
\[\rho(t)=|\psi(t)\rangle\langle\psi(t)|=\sum_{I,J=0}^{3}\left(\rho_{IJ}^{(0)}+ \rho_{IJ}^{(1)}\right)|e_{I}\rangle\langle e_{J}| \tag{156}\]
\(\rho^{(0)}\) and \(\rho^{(1)}\) are the zeroth- and first-order parts of the density matrix in \(g\). \(|e_{J}\rangle\) are the composite bases of the total system
\[|e_{0}\rangle=|b_{0}\rangle|\bar{b}_{0}\rangle,\quad|e_{1}\rangle=|b_{1} \rangle|\bar{b}_{0}\rangle,\quad|e_{2}\rangle=|b_{0}\rangle|\bar{b}_{1} \rangle,\quad|e_{3}\rangle=|b_{1}\rangle|\bar{b}_{1}\rangle, \tag{157}\]
and the matrix components are given by
\[\rho^{(0)}=\begin{pmatrix}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix},\quad\rho^{(1)}=\begin{pmatrix}0&0&\langle\psi_{1}| \psi_{0}\rangle&\sqrt{N_{\bar{b}}}\\ 0&0&0&0\\ \langle\psi_{0}|\psi_{1}\rangle&0&0&0\\ \sqrt{N_{\bar{b}}}&0&0&0\end{pmatrix}\,. \tag{158}\]
Then, we perform a partial transpose on the density matrix and solve for its eigenvalues up to the first order of \(g\). Note that \(\rho^{(0)}\) is triply degenerate, so we need to solve a degenerate eigensystem. Finally, by estimating the total sum of the
negative eigenvalues, we find that the negativity is given by the normalization factor \(\sqrt{N_{\tilde{b}}}\).
\[\mathcal{N}_{\mathrm{B:others}}=\sqrt{N_{\tilde{b}}} \tag{100}\] \[=2g\omega_{a,0}\left[\left(\frac{\sin[\omega_{0,+}t/2]}{\omega_{0, +}}\right)^{2}(|\Phi_{a,0}|^{2}+1)+\left(\frac{\sin[\omega_{0,-}t/2]}{\omega_{ 0,-}}\right)^{2}|\Phi_{a,0}|^{2}\right.\] \[\qquad\left.+4\frac{\sin[\omega_{0,+}t/2]}{\omega_{0,+}}\frac{ \sin[\omega_{0,-}t/2]}{\omega_{0,-}}\cos\left[\frac{\omega_{b}t}{2}\right]\, \mathrm{Re}\left[e^{i\omega_{a}t}\Phi_{a,0}^{2}\right]\right.\] \[\qquad\left.+k^{2}\left\{\left(\frac{\sin[\omega_{1,+}t/2]}{ \omega_{1,+}}\right)^{2}(|\Phi_{a,1}|^{2}+\cosh^{2}|\zeta_{1}|)+\left(\frac{ \sin[\omega_{1,-}t/2]}{\omega_{1,-}}\right)^{2}(|\Phi_{a,1}|^{2}+\sinh^{2}| \zeta_{1}|)\right.\right.\] \[\qquad\left.\left.+4\frac{\sin[\omega_{1,+}t/2]}{\omega_{1,+}} \frac{\sin[\omega_{1,-}t/2]}{\omega_{1,-}}\cos\left[\frac{\omega_{b}t}{2} \right]\,\mathrm{Re}\left[e^{i\omega_{a,1}t}(\Phi_{a,1}^{2}-e^{-2i\omega_{a,1} t}\sinh|2\zeta_{1}|)\right]\right.\right.\] \[\qquad\left.\left.+2\lambda_{1}\mathrm{Re}\left[\frac{F(t)}{ \omega_{b}}\left(\frac{\sin[\omega_{1,+}t/2]}{\omega_{1,+}}e^{-i\omega_{1,+}t /2}\Phi_{a,1}^{*}+\frac{\sin[\omega_{1,-}t/2]}{\omega_{1,-}}e^{i\omega_{1,-}t /2}\Phi_{a,1}\right)\right]+\lambda_{1}^{2}\left|\frac{F(t)}{\omega_{b}}\right| ^{2}\right\}\right]^{1/2}. \tag{111}\]
From the first line to the second line, we calculate the inner product in Eq. (106) using the following formulas
\[\langle\eta,\xi|\hat{a}^{2}|\eta,\xi\rangle=\eta^{2}-e^{i\theta} \sinh 2r,\quad\langle\eta,\xi|\hat{a}^{\dagger 2}|\eta,\xi\rangle=\eta^{*2}-e^{-i \theta}\sinh 2r, \tag{112}\] \[\langle\eta,\xi|\hat{a}\hat{a}^{\dagger}|\eta,\xi\rangle=|\eta|^ {2}+\cosh^{2}r,\quad\langle\eta,\xi|\hat{a}^{\dagger}\hat{a}|\eta,\xi\rangle= |\eta|^{2}+\sinh^{2}r, \tag{113}\]
where \(\xi=re^{i\theta}\).
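As a small numerical illustration of the procedure described above (not part of the paper's derivation), one can build the \(4\times 4\) density matrix with stand-in values for \(\langle\psi_{1}|\psi_{0}\rangle\) and \(\sqrt{N_{\bar{b}}}\), take the partial transpose over rod B, and check that the sum of the negative eigenvalues reproduces \(\sqrt{N_{\bar{b}}}\) at first order in \(g\).

```python
import numpy as np

# stand-in first-order values (illustrative, not from the paper)
amp   = 0.01 + 0.005j      # plays the role of <psi_1|psi_0>
sqrtN = 0.02               # plays the role of sqrt(N_b)

rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0                                  # rho^(0)
rho[0, 2] = np.conj(amp); rho[2, 0] = amp        # rho^(1) entries
rho[0, 3] = sqrtN;        rho[3, 0] = sqrtN

# basis: e0=b0*c0, e1=b1*c0, e2=b0*c1, e3=b1*c1 (c = complement system), so row index I = i_b + 2*i_c
rho4   = rho.reshape(2, 2, 2, 2)                     # axes (i_c, i_b, i_c', i_b')
rho_pt = rho4.transpose(0, 3, 2, 1).reshape(4, 4)    # transpose the rod-B indices i_b <-> i_b'

eigs = np.linalg.eigvalsh(rho_pt)
print(-eigs[eigs < 0].sum(), sqrtN)              # agree up to O(g^2) corrections
```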
|
2301.09886 | Entry and leaving arcs of turnpikes: their exact computation in the
calculus of variations | We settle the question of how to compute the entry and leaving arcs for
turnpikes in autonomous variational problems, in the one-dimensional case using
the phase space of the vector field associated to the Euler equation, and the
initial/final and/or the transversality condition. The results hinge on the
realization that extremals are the contours of a well-known function and that
the transversality condition is (generically) a curve. An approximation
algorithm is presented and an example included for completeness. | L. Bayón, P. Fortuny Ayuso, J. M. Grau, M. M Ruiz | 2023-01-24T09:44:39Z | http://arxiv.org/abs/2301.09886v1 | # Entry and leaving arcs of turnpikes: their exact computation in the calculus of variations
###### Abstract.
We settle the question of how to compute the entry and leaving arcs for turnpikes in autonomous variational problems, in the one-dimensional case using the phase space of the vector field associated to the Euler equation, and the initial/final and/or the transversality condition. The results hinge on the realization that extremals are the contours of a well-known function and that the transversality condition is (generically) a curve. An approximation algorithm is presented and an example included for completeness.
2020 Mathematics Subject Classification: 49J15, 49M05, 49M99
## 1. Introduction
The idea of _turnpike_ in the Calculus of Variations or in Optimal Control describes the (usual) phenomenon which takes place when, in problems with finite but arbitrarily large time, the optimal solutions _spend most of their time_ near a specific point. Moreover, these solutions tend to be composed of three parts: an entry arc, the turnpike arc, and the leaving arc. The first and last ones are _transitory_ arcs which take up little of the solution's time, whereas the middle arc (the turnpike) is a long arc which is essentially stationary, and tends to be exponentially near an equilibrium (see [1]). Roughly speaking, in the long term, approximate solutions to problems having a turnpike are determined essentially by the integrand function of the objective functional, and are --again, essentially-- independent of their endpoints and time interval.
Although the first works on the topic investigated specific problems arising in the context of economics and econometrics [2], [3], today the turnpike property has become of interest in other areas [4, 5]. Recent studies have proposed its use in applications varying from membrane-filtration systems [6] to control of chemical reactors with uncertain models [7] or shape optimization in aircraft design [8].
The property has also been noticed in Optimal Control Problems of almost any type: with/without terminal constraints [1], [9]; with/without discounted cost functionals [10], [11]; discrete-time problems with constraints [5], [12]; and continuous-time problems without constraints [1]... Of course, no work on the turnpike property can omit referencing Zaslavski's exhaustive studies, whose results and complete references are collected in [13, 14].
From a practical point of view, the interest of the turnpike phenomenon arises from the fact that under this condition, the computation of (approximate) optimal trajectories in all areas of optimal control and variational problems becomes trivial for long enough time spans. In this sense, one of the first applications is [15], where a time-invariant linear quadratic optimal control problem is studied. They prove that the optimal trajectory is approximately composed of two solutions of two infinite-horizon optimal control problems. With \(x(0)\) fixed, the solution for the interval \((0,+\infty)\) defines the part of the trajectory for the original problem from \(x=0\) to the turnpike. With \(x(T)\) fixed, the solution for the interval \((-\infty,T)\) defines the part of the trajectory of the original problem from the turnpike to \(t=T\)
The two parts are then pieced together and exhibit a similar transient behavior. Their approach is elementary and points out very clearly that the hyperbolicity phenomenon is the heart of the turnpike property.
Recently, in [1], the authors investigate the relation between the turnpike property and numerical methods (direct and indirect) for a general nonlinear optimal control problem, without any specific assumption, and for very general terminal conditions. In the context of the turnpike theorem, they provide a new method to ensure the successful initialization of numerical methods. Assuming that the Pontryagin maximum principle has been applied, the usual shooting method can be used. However, this is in general very hard to initialize. As a solution, they propose a variant: as the extremal is approximately known along the interval \([\varepsilon,T-\varepsilon]\), for some \(\varepsilon>0\) (i.e. the turnpike), but not at the endpoints \(t=0\) and \(t=T\), the idea is to choose some arbitrary point of \([\varepsilon,T-\varepsilon]\) (for instance \(t=T/2\)), and then integrate backwards over \([0,T/2]\) to get an approximation to \(x(0)\), and forward over \([T/2,T]\) to get the approximation to \(x(T)\). The unknown value of \(x(T/2)\) must be adjusted, for instance, through a Newton method, so that transversality conditions are satisfied.
Even more recently, in the same spirit, in [16] the authors use the turnpike property in the numerical computation of optimal trajectories, splitting the optimization horizon at the turnpike. They proceed as follows: given the turnpike equilibrium \(x^{e}\), the optimization horizon \(T>0\), an initial condition \(x(0)\) and a terminal condition \(x(T)\), they compute an optimal trajectory \(x_{1}(\cdot)\) with finite horizon \(T_{1}<T\) and initial and terminal conditions \(x_{1}(0)=x(0)\), \(x_{1}(T_{1})=x^{e}\); and an optimal trajectory \(x_{2}(\cdot)\) with horizon \(T_{2}<T-T_{1}\) and initial and terminal conditions \(x_{2}(0)=x^{e}\), \(x_{2}(T_{2})=x(T)\). Finally, an approximation of the optimal trajectory is obtained by concatenating the three arcs: \(x_{1}(t)\), \(t\in[0,T_{1}]\); \(x^{e}\), \(t\in[T_{1},T-T_{2}]\) and \(x_{2}(t-T+T_{2})\), \(t\in[T-T_{2},T]\). The resulting error can be estimated if the speed of convergence towards the turnpike is known (as in the case of an exponential turnpike). They also use a second approach via Model Predictive Control (MPC) which may have some advantages.
To illustrate the turnpike and their methods, they consider a well-known harvest example [17], with both bilinear and quadratic objectives. Remarkably enough, the authors do not seem to notice that in the free-endpoint case the leaving arc always ends at the same value of \(x(T)\), regardless of \(x(0)\) and \(T\). Something similar happens in [18]: the author, who studies two examples of optimal investment problems, states literally: "_Without any terminal constraints all predictions end in \(x=2\)_", but does not delve into this happening. We shall see that this is a general property of turnpikes with free-endpoint solutions.
As a matter of fact, one of us had already noticed this in the previous paper [19]. There, a model of renewable resource exploitation in an open-access fishery [20], more detailed and general than [17], is studied. It was noticed that, without constraints on the terminal state (which force the trajectory to leave the turnpike), the solution spontaneously leaves the turnpike in order to reduce the cost of the overall trajectory, and the leaving arc always ends at the same value of \(x(T)\), for all \(T\).
In this note, we intend to settle the question of the entry and leaving arcs of the turnpike in the generic hyperbolic situation for _variational problems_. The key point was suggested in [19] but not led to its natural consequence there. In short, and loosely speaking, our statement can be summarized as follows (for autonomous problems in \(\mathbb{R}\)): assume \(P=(x_{P},\dot{x}_{P})\) is a turnpike for a problem with initial condition \(x_{0}\) and free terminal condition, and let \(T(x,\dot{x})=0\) denote the equation giving the transversality condition. Then:
**Statement:** There is a function \(C(x,\dot{x})\) such that:
* The entry arc of the turnpike starts at \[Q_{e}=\{x=x_{0}\}\cap\{C(x,\dot{x})=C(x_{P},\dot{x}_{P})\}\,.\]
* The leaving arc of the turnpike ends at \[Q_{l}=\{T(x,\dot{x})=0\}\cap\{C(x,\dot{x})=C(x_{P},\dot{x}_{P})\}\,.\]
The function \(C(x,\dot{x})\) is well-known to any practitioner: it is the function whose level sets are the extremals [21]. Certainly, the statement needs to be properly formalized, but its spirit should be clear to anyone familiar with the turnpike property. It is also more general (the problem may have both endpoints fixed, or none).
The main tool in our argument is to study the phase space of the plane vector field equivalent to the Euler equation associated to the variational problem. This vector field has very nice properties (among other things, its trajectories are both the extremals of the problem and the level sets of \(C(x,\dot{x})\)) and a direct application of the classical results on ordinary differential equations is enough to prove the statement.
The consequences of that result are straightforward: in order to determine the entry and leaving arcs, one only needs to know the intersection points between \(C(x,\dot{x})\), the transversality condition and/or the initial and final conditions (if any). Once these points are known, the entry arc can be computed by forward integration, and the leaving arc by backwards integration, as the question has become an initial value problem at this point.
We hope this work provides a useful support for the study of long-term autonomous variational (and possibly control) problems near a steady state.
Our results are all straightforward consequences of the standard results on continuous dependence on parameters of solutions of ordinary differential equations, as well as the local structure of hyperbolic singularities. Despite this fact, we dedicate Section 3 to a thorough description of the geometric setting of the problem, with the aim of helping the reader understand the situation. We hope this is clearer, briefer and simpler than a technical proof which would provide no insight and would be no more informative than what we provide.
After the formal statements in Section 4 and a suggestion for an approximate algorithm, we dedicate Section 5 to a hopefully illustrative example, Section 6 to numerical computations in it. A final section provides some remarks on the \(n\)-dimensional case.
## 2. Statement of the problem
Consider the autonomous variational problem in one dimension:
\[\mathcal{P}\equiv\left\{\begin{array}{l}\min\int_{0}^{T}F(x(t),\dot{x}(t)) \,dt\\ x(0)=x_{0}\end{array}\right. \tag{1}\]
where \(F\) is a \(\mathcal{C}^{2}(\mathbb{R}^{2})\) function, and \(T\) is large enough. It is well known since Samuelson [2] that many of these problems have a _turnpike_: a value \(x_{P}\) such that "most" solutions of (1) pass near it during a long period (i.s. \(x(t)\simeq x_{P}\) and \(\dot{x}(t)\simeq 0\) for a "large" inner subinterval of \([0,T]\)), for \(T\to\infty\). Moreover, as Zaslavsky has proved [22], there are also initial and final curves (the _entry_ and _leaving_ arcs) \(\gamma_{e}\) and \(\gamma_{l}\) such that, as \(T\to\infty\), any solution of that problem is very near \(\gamma_{e}\) at the beginning, then near \(x_{P}\), and finally, it is near \(\gamma_{l}\) in the end. Of course, all the terms between quotation marks can be properly defined [10].
However, despite all the results around turnpikes, and as we have remarked in the Introduction, there is still no programmatic way to find their entry and exit
arcs. The aim of this work is to explicitly show which curves these arcs are and how to compute them in the generic case.
## 3. Extremal curves, level sets and hyperbolic saddles
Given problem (1), Euler's equation
\[\frac{\partial F(x(t),\dot{x}(t))}{\partial x}-\frac{d}{dt}\left(\frac{\partial F (x(t),\dot{x}(t))}{\partial\dot{x}}\right)=0 \tag{2}\]
is best rewritten, after simplifying a common factor \(u\), for our purposes, as a vector field, using \(x\) and \(u\) as subindices to indicate partial differentiation with respect to the first and second variables:
\[\mathcal{E}\equiv\left\{\begin{array}{l}\dot{x}=u\\ \dot{u}=\frac{F_{x}-uF_{xu}}{F_{uu}}\end{array}\right. \tag{3}\]
This vector field might have singularities where \(F_{uu}=0\) (this implies, for instance, that if the problem is strictly convex in \(u\), then there are no such singularities). The extremal curves (solutions to Euler's equation) then coincide with the trajectories of \(\mathcal{E}\). Moreover, as the problem is autonomous, it is well known (see, for instance [21]) that the following function
\[C(x,u)=F(x,u)-uF_{u}(x,u) \tag{4}\]
is constant in the extremals. Thus, _extremals, as \(1\)-dimensional manifolds, are the level sets of \(C(x,u)\)_ in \(\mathbb{R}^{2}\), for the problem \(\mathcal{P}\).
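This conservation can be checked symbolically in a few lines (a sketch of ours, not part of the paper): differentiating \(C\) along the trajectories of \(\mathcal{E}\), for an arbitrary smooth \(F\), gives zero.

```python
import sympy as sp

x, u = sp.symbols('x u')
F = sp.Function('F')(x, u)

C = F - u*sp.diff(F, u)                                          # Eq. (4)
udot = (sp.diff(F, x) - u*sp.diff(F, x, u)) / sp.diff(F, u, 2)   # second component of (3)

dC_dt = sp.diff(C, x)*u + sp.diff(C, u)*udot                     # derivative of C along E
print(sp.simplify(dC_dt))                                        # -> 0
```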
Let us work away from the points where \(F_{uu}=0\), that is, we limit ourselves to an open set \(W\) where \(F_{uu}(x,u)\neq 0\). Consider an equilibrium point \(P\) of \(\mathcal{E}\), that is: a point with \(u=0,F_{x}-uF_{xu}=0\). The linear part of \(\mathcal{E}\) is always of the form
\[L=\begin{pmatrix}0&\star_{1}\\ 1&\star_{2}\end{pmatrix}\]
where the stars are unknown values. Except in degenerate cases, \(P\) is then either a center-focus (when both eigenvalues of \(L\) are complex), a node (both eigenvalues of \(L\) are real and have the same sign) or a hyperbolic saddle (real eigenvalues with different sign). Obviously, the nature of either center-foci or nodes prevents such a point from being a turnpike with entry and leaving arcs: if \(P\) is a center, trajectories turn around it, if it is a focus, then they either converge to it (so that \(P\) is not strictly speaking a turnpike) or move away from it (again, not a turnpike). For the same reasons as for foci, nodes cannot be turnpikes. Hence, turnpikes with entry and leaving arcs correspond, in the non-degenerate case, to hyperbolic saddles, as is well known.
Assume then that \(P=(x_{P},u_{P})\) is a hyperbolic saddle of \(\mathcal{E}\), which by definition will have \(u_{P}=0\) (this is exactly what makes \(P\) a turnpike: near \(P\), the velocity of \(\mathcal{E}\) tends to \(0\) and extremals spend "a long time" near \(P\)). On the other hand, we have \(F_{x}(x_{P},u_{P})-u_{P}F_{xu}(x_{P},u_{P})=0\), which becomes at \(P\) just \(F_{x}(x_{P},0)=0\). As \(P\) is a hyperbolic saddle, there are two invariant manifolds adherent to \(P\): the stable \(X_{s}\) and unstable \(X_{u}\) ones, meeting transversely at \(P\) (see Figure 1: the stable manifold "falls" towards \(P\) and the unstable one "goes away" from it). As these manifolds are unions of extremal curves (they are trajectories of \(\mathcal{E}\)), they correspond also to level curves of \(C(x,u)\) and, as \(P\) belongs to both, if we denote by \(M=X_{s}\cup X_{u}\) their union, we must necessarily have:
\[M\equiv C(x,u)=C(x_{P},0).\]
That is, the invariant set near \(P\) is given by \(C(x,u)=C(x_{P},0)\).
Near \(P\), the set \(M\) can be divided into 4 different trajectories of \(\mathcal{E}\): \(\gamma_{s}^{1},\gamma_{s}^{2}\), which are the two components of \(X_{s}\setminus\{P\}\) and \(\gamma_{u}^{1}\), \(\gamma_{u}^{2}\) for \(X_{u}\setminus\{P\}\). Any connected open set \(V\) containing \(P\) with sufficiently smooth border is divided by these four curves into four open subsets: \(U_{11}\), \(U_{12}\), \(U_{21}\), \(U_{22}\), each \(U_{ij}\) corresponding to the "angle" defined by \(\gamma_{s}^{i}\) and \(\gamma_{u}^{j}\), in that order (See Figure 1).
The solutions of the variational problem \(\mathcal{P}\) are extremals (so, they correspond to trajectories of \(\mathcal{E}\)) which verify the initial condition \(x(0)=x_{0}\) and also satisfy the transversality condition \(F_{u}(x(T),u(T))=0\). The equation given by the transversality condition \(F_{u}(x,u)=0\) defines (usually) a curve in the \((x,u)\)-plane. Figure 2 shows the "general" situation in which we find ourselves. The trajectory \(\gamma_{e}\), part of the stable manifold, and \(\gamma_{l}\), part of the unstable one, are the entry and leaving arcs, respectively.
The intersection points between the transversality condition \(F_{u}(x,u)=0\) and \(M\) are key in our statements. These points are the solutions of the system of equations:
\[\left\{\begin{array}{l}F(x,u)-uF_{u}(x,u)=C(x_{P},0)\\ F_{u}(x,u)=0\end{array}\right.\]
which, after simplifying, becomes (see [19], where this system of equations appeared for the first time):
\[\left\{\begin{array}{l}F(x,u)=C(x_{P},0)\\ F_{u}(x,u)=0\end{array}\right. \tag{5}\]
In the problem \(\mathcal{P}\), the initial value \(x(0)=x_{0}\) is set. Assume \(Q_{e}=(x_{0},u_{e})\) belongs to \(\{x=x_{0}\}\cap\{C(x,u)=C(x_{P},0)\}\) and to the stable manifold of \(P\), and let \(Q=(x_{l},u_{l})\) be the solution of (5) in the unstable manifold (as in Figure 2) which is nearest to \(P\). Under these assumptions, the Turnpike property happens relative to \(P\) (as in Figure 2) and extremals with \(x(0)=x_{0}\) start near \(Q_{e}=(x_{0},u_{e})\), so that \(u(0)\to u_{e}\), and finish near \(Q_{l}\), so that \(x(T)\to x_{l}\) as \(T\to\infty\).
If there existed a solution \(R=(x_{e},u_{e})\) of (5), belonging to the stable manifold and satisfying the transversality condition (this case is not plotted in Figure 2), a dual argument using \(F(x,-\dot{x})\) shows that there are extremals of the variational
Figure 1. Hyperbolic saddle \(P\) and the open sets \(U_{ij}\).
problem with no initial or terminal condition:
\[\mathcal{P}^{\prime}\equiv\min\int_{0}^{T}F(x(t),\dot{x}(t))\,dt \tag{6}\]
with starting points \(x(0)\to x_{e}\) (and endpoints ending at \(Q_{l}\) as above).
This theoretical description, which is just a qualitative expression of the well-known results on the continuous dependence of solutions of ODEs on the parameters, and of the local structure of hyperbolic singularities (see [24], for example), is enough to prove our results, so that instead of proofs, we just refer to this section.
Our statements have two versions: one in which a solution \(\gamma\) of \(\mathcal{P}\) is already known, and one in which all depends just on the solutions of (5).
## 4. Statements of the results
As explained in the introduction, \(F(x,u)\) is of class \(\mathcal{C}^{2}\) in \(\mathbb{R}^{2}\), and all our statements are in an open set \(W\subset\mathbb{R}^{2}\) where \(F_{uu}(x,u)\neq 0\). We shall make frequent reference to the vector field \(\mathcal{E}\) defined in (3).
Our first result assumes the existence of an extremal "sufficiently" near a hyperbolic turnpike:
**Theorem 1**.: _Let \(P=(x_{P},u_{P})\) be a hyperbolic saddle of \(\mathcal{E}\) and \(\gamma\) an extremal of \(\mathcal{P}\) included in an open set \(U\subset W\) containing \(P\) which admits a subdivision \(U_{ij}\) for \(i,j\in\{1,2\}\) as above. We assume the following conditions:_
1. _The curve_ \(F_{u}(x,u)=0\) _meets_ \(\gamma_{u}^{1}\) _and_ \(\gamma\) _transversely at the points_ \(Q=(x_{l},u_{l})\) _and_ \((x(T),u(T))\)_._
2. _That curve_ \(F_{u}(x,u)=0\) _admits an injective parametrization near_ \(Q\)_,_ \(\eta:[-1,1]\to\mathbb{R}^{2}\) _with_ \(\eta(0)=Q\)_,_ \(\eta(1)=(x(T),u(T))\) _such that_ \(\eta\) _is transverse to any extremal meeting it._
3. _The extremals_ \(\gamma_{s}^{1}\) _and_ \(\gamma\) _meet the manifold_ \(x=x_{0}\) _transversely at_ \((x_{0},u_{e})\)_,_ \((x_{0},u_{0})\) _respectively._
Figure 2. Hyperbolic saddle \(P\) (turnpike), extremal (\(\gamma\)), and entry (\(\gamma_{e}\)) and leaving (\(\gamma_{l}\)) arcs. In yellow, the “slow” zone. As long as there are no singularities of \(\mathcal{E}\) in the cyan zone, the turnpike property holds inside it, and as \(T\to\infty\), the corresponding extremal of \(\mathcal{P}\) approaches \(\gamma_{e}\) at the beginning and \(\gamma_{l}\) at the end. The entry arc starts at \(Q_{e}\) and the leaving arc ends at \(Q_{l}\).
4. _The open set_ \(V\) _(delimited by_ \(\gamma_{s}^{1}\)_,_ \(\gamma_{u}^{1}\)_,_ \(\eta\) _and the line_ \(x=x_{0}\)_) contains no more singularities of_ \(\mathcal{E}\)_._
_Then:_
1. _For any_ \(\overline{T}>T\)_, the problem_ \[\mathcal{S}_{\overline{T}}\ \left\{\begin{array}{l}\min\int_{0}^{ \overline{T}}F(x(t),\dot{x}(t))\,dt\\ x(0)=x_{0}\end{array}\right.\] _has an extremal_ \(\gamma\) _which is totally included in_ \(V\)_._
2. _For any_ \(\epsilon>0\) _there is_ \(T_{\epsilon}>T\) _and times_ \(T_{e}>0\)_,_ \(T_{l}<\overline{T}\)_, such that, for_ \(\overline{T}>T_{\epsilon}\)_, the corresponding extremal_ \(\gamma_{\overline{T}}\) _satisfies:_ * _The metric distance between_ \(\gamma_{\overline{T}}:[0,T_{e}]\to\mathbb{R}^{2}\) _and_ \(\gamma_{e}\) _is less than_ \(\epsilon\)_._ * _The metric distance between_ \(\gamma_{\overline{T}}:[T_{e},T_{l}]\to\mathbb{R}^{2}\) _and_ \(P\) _is less than_ \(\epsilon\)_._ * _The metric distance between_ \(\gamma_{\overline{T}}:[T_{l},\overline{T}]\to\mathbb{R}^{2}\) _and_ \(\gamma_{l}\) _is less than_ \(\epsilon\)_._ _Moreover, one can also choose_ \(T_{e}\) _and_ \(T_{l}\) _such that_ \(T_{l}-T_{e}\to\infty\) _as_ \(\epsilon\to 0\) _and_ \(T_{l},T_{e}<K\) _for some_ \(K<\infty\)_._
Proof.: The first conclusion follows from the continuous dependence of solutions of an ordinary differential equation on the parameters (and from all the qualitative descriptions in Section 3). The second one follows also from the local structure of hyperbolic saddle singularities, see for instance [24].
**Definition 1**.: _The curve \(\gamma_{e}\) from \(x=x_{0}\cap\gamma_{e}\) to \(P\) is called the entry arc to the turnpike \(P\). The curve \(\gamma_{l}\) from \(P\) to \(Q\) is called the leaving arc of the turnpike \(P\)._
The previous statement seems to require a lot from the equation. As it happens, most of the hypotheses are just technical and will hold in generic situations. On the other hand, we can also say a lot (locally) if we just know that the transversality condition meets \(\gamma_{l}\) transversely. The following result is again a straightforward consequence of the local structure of hyperbolic saddles and the continuous dependence on the parameters of solutions of ODEs. All the statements are inside \(W\subset F_{uu}(x,u)\neq 0\).
**Theorem 2**.: _Let \(P\) be a hyperbolic saddle and \(\gamma_{l}\subset\gamma_{u}^{1}\cup\gamma_{u}^{2}\) one of the components of the unstable manifold. Assume that the transversality condition \(Tr\equiv F_{u}(x,u)=0\) meets \(\gamma_{l}\) transversely at \(Q_{l}=(x_{l},u_{l})\) and that there are no more intersection points between \(Q_{l}\) and \(P\) belonging to \(\gamma_{l}\). Then there is \(\epsilon>0\) and a parametrization \(\tau:[-\epsilon,\epsilon]\to Tr\) with \(\tau(0)=Q_{l}\) such that the two extremals containing \(\tau(-\epsilon)\) and \(\tau(\epsilon)\) satisfy all the properties of Theorem 1. As a consequence, \(P\) is a turnpike for \(\mathcal{P}\). Moreover, assume \(\gamma_{s}^{1}\) is to the right of \(P\) and \(\gamma_{s}^{2}\) is to its left. Assume, for simplicity, that \(Q_{l}\) is to the left of \(P\) (i.e. \(x_{l}<x_{P}\)). Then:_
1. _The point_ \(P\) _is a turnpike for_ \(\mathcal{P}\) _for_ \(x_{0}\in(x_{P},x_{P}+\epsilon)\)_, for some_ \(\epsilon>0\)_,_ \(\gamma_{l}\) _(from_ \(P\) _to_ \(Q\)_) is the leaving arc and_ \(\gamma_{s}^{1}\) _(from_ \(x=x_{0}\) _to_ \(P\)_) is the entry arc._
2. _At the same time,_ \(P\) _is a turnpike for_ \(\mathcal{P}\) _for_ \(x_{0}\in(x_{P}-\epsilon,x_{P})\) _for some_ \(\epsilon>0\)_,_ \(\gamma_{l}\) _(from_ \(P\) _to_ \(Q\)_) is the leaving arc and_ \(\gamma_{s}^{2}\) _(from_ \(x=x_{0}\) _to_ \(P\)_) is the entry arc._
Notice how there is a switch of entry arcs when the initial condition \(x_{0}\) changes from being "greater than \(x_{P}\)" to "less than \(x_{P}\)". This is easy to see in Figure 2: if \(x_{0}\) is less than \(x_{P}\), the extremals ending near \(Q\) must approach, at their beginning, the top-left separatrix for \(T\to\infty\), instead of \(\gamma_{e}\). Obviously, for the dual problem (final condition set but initial condition free), it is the leaving arc that changes.
Consider the problem (6) with no initial or final condition. Near a hyperbolic singularity \(P\) of \(\mathcal{E}\) we _may_ have a turnpike result if the system of equations (5) has two solutions near \(P\). Again, everything is restricted to some open set \(W\subset F_{uu}(x,u)\neq 0\).
**Theorem 3**.: _With the notations above, assume \(P\) is a hyperbolic singularity of \(\mathcal{E}\). If \(Q_{e}\in\gamma_{s}^{1}\) and \(Q_{l}\in\gamma_{u}^{1}\) are two solutions of (5) and there are no more solutions of (5) between \(Q_{e}\) and \(P\) and \(P\) and \(Q_{l}\), then \(P\) is a turnpike for the problem (6). That is, for \(T\to\infty\), there are extremals \(\gamma=(x(t),u(t))\) of (6) satisfying:_
1. _The origin tends to_ \(Q_{e}\)_:_ \((x(0),u(0))\to Q_{e}\)_,_
2. _The end tends to_ \(Q_{l}\)_:_ \((x(T),u(T))\to Q_{l}\)_,_
3. _The curve_ \(\gamma\) _approaches the part of_ \(\gamma_{s}^{1}\) _between_ \(Q_{e}\) _and_ \(P\) _at the beginning (the entry arc),_
4. _The curve_ \(\gamma\) _approaches the part of_ \(\gamma_{u}^{1}\) _between_ \(P\) _and_ \(Q_{l}\) _at the end (the leaving arc)._
Proof.: As previously, the proof is a straightforward application of the description in Section 3, the structure of hyperbolic singularities and the continuous dependence on parameters of solutions of ODEs.
Finally, consider the problem with fixed endpoints:
\[\overline{\mathcal{P}}\equiv\left\{\begin{array}{l}\min\int_{0}^{T}F(x(t), \dot{x}(t))\,dt\\ x(0)=x_{0},\ x(T)=x_{T}\end{array}\right. \tag{7}\]
In this case the statement holds regardless of the transversality condition.
**Theorem 4**.: _Assume \(P\) is a hyperbolic singularity of \(\mathcal{E}\). Assume \(x=x_{0}\) meets \(\gamma_{1}^{s}\) at \(Q_{e}\) and \(x=x_{T}\) meets \(\gamma_{1}^{u}\) at \(Q_{l}\). Then \(P\) is a turnpike for the problem (7), and there is an open neighborhood \(V\) of \(P\) such that, for \(T\to\infty\) any extremal \(\gamma\) of (7) included in \(V\) satisfies statements (1)-(4) of Theorem 3._
Notice that \(Q_{e}\) and \(Q_{l}\) in previous results can be easily computed using the fact that \(\gamma_{i}^{s}\) and \(\gamma_{i}^{u}\) are level sets of \(C(x,u)\). Thus, if they exist, then
\[Q_{e}\in\{C(x,u)=C(P)\wedge x=x_{0}\}\,,\,\,\,Q_{l}\in\{C(x,u)=C(P)\wedge x=x_ {T}\}\]
However, those solution sets might have more than one point, and one needs to verify which (if any) can be a candidate.
### Suggestion for an approximate algorithm
From the discussions above, the following method is suggested to use the turnpike and the entry and leaving arcs as approximate extremals for \(T\gg 0\). Specifically, for Problem (1) with \(x(0)\) fixed and \(x(T)\) free:
1. State a tolerance \(\epsilon>0\).
2. Find the possible turnpike \(P=(x_{P},0)\). This requires studying the phase space of (3), its singularities and the separatrices \(\gamma_{s}^{i}\) and \(\gamma_{u}^{i}\), for \(i=1,2\). For this one can just use the level set \(C(x,u)=C(x_{P},0)\).
3. Find the adequate \(Q_{e}\) and \(Q_{l}\). As explained above, \(Q_{e}\) belongs to \(x=x_{0}\) and \(C(x,u)=C(x_{P},0)\), whereas \(Q_{l}\) is found using system (5).
4. From \(Q_{e}\) compute the trajectory \(\gamma_{e}(t)\) of \(\mathcal{E}\) with \(\gamma_{e}(0)=Q_{e}\) and ending at \(|\gamma_{e}(T_{e})-x_{P}|<\epsilon\). This is an IVP integrated until some condition is met.
5. From \(Q_{l}\) compute the trajectory \(\gamma_{l}(t)\) of \(\mathcal{E}\) with \(\gamma_{l}(T_{l})=Q_{l}\) and \(|\gamma_{l}(0)-x_{P}|<\epsilon\). This is a backwards IVP integrated until some condition is met.
After those computations, if \(T>T_{e}+T_{l}\), then any extremal \(\gamma_{T}\) of (1) can be approximated by the turnpike as:
\[\gamma_{T}(t)\simeq\left\{\begin{array}{l}\gamma_{e}(t)\mbox{ if }t\in[0,T_{e})\\ x_{P}\mbox{ if }t\in[T_{e},T-T_{l}]\\ \gamma_{l}(t)\mbox{ if }t\in(T-T_{l},T]\end{array}\right. \tag{8}\]
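The following Python sketch outlines one possible implementation of steps (1)–(5) with scipy. It is illustrative rather than definitive: the helper names, tolerances and the assumption that the user supplies the field \(\mathcal{E}\), the functions \(C\), \(F\), \(F_{u}\) and the saddle \(x_{P}\) as callables are ours, not part of the paper.

```python
# Illustrative sketch of steps (1)-(5); names and tolerances are assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq, fsolve

def entry_point(C, x0, xP, u_bracket):
    """Step (3): Q_e lies on {x = x0} intersected with {C(x,u) = C(xP,0)}."""
    g = lambda u: C(x0, u) - C(xP, 0.0)
    return (x0, brentq(g, *u_bracket))      # u_bracket must change sign

def leaving_point(F, Fu, xP, guess):
    """Step (3): Q_l solves system (5): F(x,u) = C(xP,0) = F(xP,0), F_u(x,u) = 0."""
    target = F(xP, 0.0)
    return tuple(fsolve(lambda z: [F(z[0], z[1]) - target, Fu(z[0], z[1])], guess))

def arc(E, start, xP, eps=1e-3, tmax=200.0, backwards=False):
    """Steps (4)-(5): integrate E from `start` until eps-close to the saddle (xP, 0)."""
    close = lambda t, y: abs(y[0] - xP) + abs(y[1]) - eps
    close.terminal, close.direction = True, -1
    rhs = (lambda t, y: -np.asarray(E(t, y))) if backwards else E
    sol = solve_ivp(rhs, (0.0, tmax), list(start), events=close, max_step=0.05, rtol=1e-9)
    return sol.t[-1], sol.y                 # (T_e or T_l, sampled arc)
```

Here \(\gamma_{e}\) would be `arc(E, Q_e, xP)` and \(\gamma_{l}\) would be `arc(E, Q_l, xP, backwards=True)` read in reverse; in practice the entry arc may be better obtained by integrating backwards in time from a point near \(P\) on the stable manifold, since forward integration along a stable separatrix is numerically delicate. The approximate extremal is then pieced together as in (8).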
## 5. An example: shallow lakes
In this section we showcase the well-known shallow lakes model without discount (see for instance [23] for the details), with a modified cost function to prevent, in our example, the issues with the logarithm. The problem to be solved is, initially, the Optimal Control problem with control variable \(v\):
\[\mathcal{P}\equiv\left\{\begin{array}{l}\max\int_{0}^{T}\left(v^{2}-cx^{2} \right)\,dt\\ \dot{x}=v-bx+r\frac{x^{2}}{x^{2}+1}\\ x(0)=x_{0}\end{array}\right. \tag{9}\]
with \(c,b,r\) positive constants. This is, in fact, a variational problem, as \(v\) can be expressed as a function of \(x,\dot{x}\) and there are no restrictions. Thus, we shall in fact study the variational problem
\[\mathcal{P}\equiv\left\{\begin{array}{l}\max\int_{0}^{T}F(x,\dot{x})\,dt\\ x(0)=x_{0}\end{array}\right. \tag{10}\]
with
\[F(x,\dot{x})=b^{2}x^{2}-\frac{2brx^{3}}{x^{2}+1}+2bx\dot{x}-cx^{2}+\frac{r^{2} x^{4}}{\left(x^{2}+1\right)^{2}}-\frac{2rx^{2}\dot{x}}{x^{2}+1}+\dot{x}^{2}\]
and we shall set the value of the constants to \(r=1,c=0.1\), and \(b=0.7\). The Euler equation for this problem is, once divided by \(\dot{x}\):
\[\frac{1}{(1+x^{2})^{3}}\bigg{(}x^{6}(-2\ddot{x}-1.4)+x^{4}(-6\ddot{x}-5.6)+x^{2}(-6\ddot{x}-4.2)-\\ 2\ddot{x}+0.78x^{7}+2.34x^{5}+6.34x^{3}+0.78x\bigg{)}=0 \tag{11}\]
And the vector field associated to this second order equation is, in the \((x,u)\) plane corresponding to \((x,\dot{x})\):
\[\mathcal{E}\equiv\left\{\begin{array}{l}\dot{x}=u\\ \dot{u}=\frac{0.39x\left(x^{6}-1.79487x^{5}+3.x^{4}-7.17949x^{3}+8.12821x^{2} -5.38462x+1\right)}{\left(x^{2}+1\right)^{3}}\end{array}\right. \tag{12}\]
The denominator in \(\dot{u}\) is never \(0\), so that \(\mathcal{E}\) is well-defined in all \(\mathbb{R}^{2}\). The vector field \(\mathcal{E}\) has three singular points: two hyperbolic saddles \(P_{1}=(0,0)\) and \(P_{2}\simeq(1.5062,0)\) and a center/focus, \(O=(0.2747,0)\). Figure 3 shows the structure of \(\mathcal{E}\) near its singularities (in red).
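These computations can be reproduced symbolically; the following sympy sketch (ours, not the paper's) recovers the vector field (12) from \(F\) and locates the three singular points.

```python
import sympy as sp

x, u = sp.symbols('x u')
b, r, c = sp.Rational(7, 10), sp.Integer(1), sp.Rational(1, 10)

F = (u + b*x - r*x**2/(x**2 + 1))**2 - c*x**2     # integrand of (10), with u standing for xdot

udot = sp.simplify((sp.diff(F, x) - u*sp.diff(F, x, u)) / sp.diff(F, u, 2))
print(udot)                                       # second component of E, cf. Eq. (12)

# singular points: u = 0 and the numerator of udot|_{u=0} equal to zero
num = sp.numer(sp.together(udot.subs(u, 0)))
print(sp.Poly(num, x).nroots())   # real roots 0, 0.2747..., 1.5062... (plus complex pairs)
```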
The function whose level sets are the extremals (the trajectories of \(\mathcal{E}\)) is
\[C(x,u)=x^{2}\left(\frac{x^{2}}{\left(x^{2}+1\right)^{2}}-\frac{1.4x}{x^{2}+1} +0.39\right)-u^{2}\]
so that we need to focus our attention on the level sets
\[L_{1}\equiv C(x,u)=C(P_{1})=0\]
\[L_{2}\equiv C(x,u)=C(P_{2})=-0.097\]
Finally, the transversality condition in this case is given by
\[Tr\equiv 2u+1.4x-\frac{2x^{2}}{1+x^{2}}=0.\]
In Figure 4 we have plotted the sets \(L_{1}\) (cyan), \(L_{2}\) (yellow) and \(Tr\) (black). Notice how \(Tr\cap L_{1}\) is just the hyperbolic point \(P_{1}\) whereas \(Tr\cap L_{2}\) has two points, one above \(u=0\) and the other one below (both in green).
Surprisingly enough, the transversality condition (in black in Figure 4) only meets the curve \(M_{1}\equiv C(x,u)=C(P_{1})\) (in cyan) at \((0,0)\) so that our results only apply to \(P_{1}\) in the fixed-endpoints versions (because \(Tr\) never meets \(M_{1}\) transversely).
Consider the hyperbolic saddle \(P_{2}\). We are going to showcase the four turnpike possibilities for it under problem (10).
The transversality condition meets the (yellow) curve \(M_{2}\equiv C(x,u)=C(P_{2})\) at the (green) points \(Q_{1}\simeq(-0.9852,1.1822)\) and \(Q_{2}\simeq(0.9852,-0.1971)\). Clearly, the top-left and bottom-right parts of \(M_{2}\) are the stable manifolds, call them \(\gamma_{s}^{1}\) and \(\gamma_{s}^{2}\), respectively, whereas \(\gamma_{u}^{1}\) is the bottom-left part.
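These intersection points can be recomputed numerically from system (5); the short check below (ours, not part of the paper's text) confirms the values quoted above.

```python
import numpy as np
from scipy.optimize import fsolve

b, r, c = 0.7, 1.0, 0.1
F  = lambda x, u: (u + b*x - r*x**2/(x**2 + 1))**2 - c*x**2
Fu = lambda x, u: 2*(u + b*x - r*x**2/(x**2 + 1))

C_P2 = F(1.5062, 0.0)                              # = C(P_2) ~ -0.097, since u = 0 at P_2
system = lambda z: [F(z[0], z[1]) - C_P2, Fu(z[0], z[1])]

Q1 = fsolve(system, [-1.0,  1.0])
Q2 = fsolve(system, [ 1.0, -0.2])
print(Q1, Q2)     # ~ (-0.9852, 1.1822) and (0.9852, -0.1971)
```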
### Initial condition fixed. Change of entry arc and of turnpike
Recall that we have called: \(\gamma_{s}^{1}\) the top-left branch of \(M_{2}\) (in yellow in Figure 4) and \(\gamma_{s}^{2}\) the bottom-right branch (these are the stable trajectories and will give rise to the entry arcs). Also, \(\gamma_{u}^{1}\) is the bottom-left branch, and \(\gamma_{u}^{2}\) the top-right one, which will give rise to the leaving arcs.
In this subsection we are going to study the problem (9) (i.e. with initial condition but no end condition).
If \(x_{0}>1.5062\), the extremals meet the transversality condition near \(Q_{2}\simeq(0.9852,-0.1971)\) for \(T\to\infty\), whatever the value of \(x_{0}\). The entry arc to the
Figure 3. Stream lines of \(\mathcal{E}\) in the example. The red dots are its singularities, at \(u=0\), \(x\in\{0,0.2747,1.5062\}\).
turnpike \(P_{2}\) in this case is \(\gamma_{s}^{2}\) from \(x=x_{0}\) to \(P_{2}\): this happens for any \(x_{0}>1.5062\) because the transversality condition does not meet \(M_{2}\) for \(x>1.5062\).
However, the moment \(x_{0}\) is to the left of \(P_{2}\), that is \(x_{0}<1.5062\), the entry arc to the turnpike \(P_{2}\) changes from \(\gamma_{s}^{2}\) to \(\gamma_{s}^{1}\) (which is above \(u=0\)). As \(x_{0}\to-0.9852\) (the \(x\)-coordinate of \(Q_{1}\)), the extremals approach \(\gamma_{s}^{1}\). The problem with \(x_{0}=-0.9852\) has no solution because \(Q_{1}\) is the only point where an extremal starting at \(x_{0}\) meets the transversality condition (this is easily seen in Figure 4).
Finally, for \(x_{0}<-0.9852\), the candidate extremals for problem \(\mathcal{P}_{f}\) for \(T\to\infty\) approach \(P_{1}=(0,0)\), the intersection point of \(Tr\) and \(M_{1}\) (the black and cyan lines in Figure 4). Thus, there is an entry arc, from \(\{x=x_{0}\}\cap Tr\) to \(P_{1}\), but the turnpike is never left in this case.
### Initial and final conditions fixed
When \(x(0)=x_{0}\) and \(x(T)=x_{T}\) are both fixed, the transversality condition plays no role and one needs only study the relation between these conditions and the hyperbolic singularities \(P_{1}\) and \(P_{2}\). For the sake of simplicity, we are only going to show some cases. Let \(x_{P_{1}}=0\) denote the \(x\)-coordinate of \(P_{1}\) and \(x_{P_{2}}\simeq 1.5062\) that of \(P_{2}\). Of course, in order to have a turnpike behavior, there must be at least one singularity between \(x_{0}\) and \(x_{T}\).
* When \(x_{0}>x_{P_{2}}>x_{T}\), then \(P_{2}\) is a turnpike; the entry arc is \(\gamma_{s}^{2}\) from \(\{x=x_{0}\}\) to \(P_{2}\), and the leaving arc is \(\gamma_{u}^{1}\) from \(P_{2}\) to \(\{x=x_{T}\}\).
* On the other hand, if \(x_{0}<x_{P_{2}}<x_{T}\) then the situation reverses at \(P_{2}\) (we are "above \(u=0\)"), and the arcs are now: \(\gamma_{s}^{1}\) the entry one from \(x=x_{0}\) to \(P_{2}\), and \(\gamma_{u}^{2}\) from \(P_{2}\) to \(x=x_{T}\).
* If \(x_{0},x_{T}\in(x_{P_{1}},x_{P_{2}})\) (that is, both endpoints are between \(P_{1}\) and \(P_{2}\)) it is easy to realize that \(P_{1}\) is still a turnpike and the entry and leaving paths
Figure 4. Hyperbolic structure of the example. The leftmost singularity is hyperbolic but its level set (blue) meets the transversality condition only at the singularity. The level set of the rightmost singularity (yellow) meets the transversality condition twice (at the green dots).
correspond to \(\gamma_{s}^{1}\) and \(\gamma_{u}^{2}\), respectively (again starting at \(x=x_{0}\) and ending at \(x=x_{T}\)).
* When, say, \(x_{0}<x_{P_{1}}\) and \(x_{T}<x_{P_{2}}\), there are two candidate extremal curves for \(T\to\infty\): one having a turnpike at \(P_{1}\), and the other one at \(P_{2}\); it is necessary here to discern the optimality by other methods (which we shall not do, as this is beyond our scope). Obviously, each turnpike has its respective entry and leaving arcs (in this case, the \(P_{1}\) arcs are the cyan unbounded curves to its left).
### The free endpoint problem
Finally, the free endpoint problem requires the extremals to meet the transversality condition at \(x(0)\) and \(x(T)\). In our case, only \(M_{2}\) meets \(Tr\) twice away from a singularity, whereas \(M_{1}\cap Tr=\{P_{1}\}\). As far as extremals go, the "constant curve" \((x(t),u(t))=P_{1}\) for all \(t\in[0,T]\) is always a candidate trajectory (as it is an extremal which satisfies the transversality conditions). Its cost is obviously \(F(P_{1})\times T\).
There is, however, a second possibility giving rise to a true turnpike: the solutions starting near \(Q_{1}\) below \(\gamma_{s}^{1}\), approaching \(P_{2}\) and ending near \(Q_{2}\) above \(\gamma_{u}^{1}\). In this case, the entry arc is \(\gamma_{s}^{1}\) from \(Q_{1}\) to \(P_{2}\) and the leaving arc is \(\gamma_{u}^{1}\) from \(P_{2}\) to \(Q_{2}\).
## 6. Simulations
In this section we plot the simulations corresponding to some of the cases in Section 5. We have used a budget computer (Intel Core i5 with 16 GB RAM) and Mathematica, with no excessive runtime (the simulations can be run in several hours; the longest time is taken by the very precise computation of the entry and leaving arcs and, unfortunately, by the plotting commands, since the numerical solutions are interpolating functions and their evaluation is quite slow). We restrict ourselves, for the sake of brevity, to the initial problem (1) with \(x(0)=x_{0}=0.5\) and \(T\gg 0\).
The above requires computing the turnpike entry arc starting at the point \(P_{e}=(0.5,u_{e})\), which is the solution of the first equation in (5) with \(x=0.5\), that is, \(u_{e}\) is the solution of:
\[C(0.5,u)=C(P_{2})=C(1.5062,0), \tag{13}\]
giving \(u_{e}\simeq 0.30751221580\). However, _one needs to compute \(u_{e}\) to very high precision in order to really obtain a fine approximation to the turnpike_. In our computations, we used 30 digits of precision when computing the solution of (13) (so that \(P_{2}\) was also computed with that precision).
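A possible way to reproduce this high-precision step outside Mathematica (our sketch; the mpmath library assumed) is to refine \(x_{P_{2}}\) first and then solve \(C(0.5,u)=C(P_{2})\) at the desired working precision:

```python
from mpmath import mp, mpf, findroot

mp.dps = 30   # 30 digits of working precision, as in the text

def C(x, u):
    return x**2 * (x**2 / (x**2 + 1)**2 - mpf('1.4') * x / (x**2 + 1) + mpf('0.39')) - u**2

# polynomial vanishing at the singular points of (12) (up to an overall factor)
def num(x):
    return (mpf('0.78') * x**7 - mpf('1.4') * x**6 + mpf('2.34') * x**5
            - mpf('5.6') * x**4 + mpf('6.34') * x**3 - mpf('4.2') * x**2 + mpf('0.78') * x)

x_P2 = findroot(num, mpf('1.5'))                                    # refined turnpike abscissa
u_e = findroot(lambda u: C(mpf('0.5'), u) - C(x_P2, 0), mpf('0.3'))
print(x_P2, u_e)   # ~1.5062..., ~0.3075...
```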
We also need to compute the leaving arc of that turnpike, which requires knowing the point \(P_{l}=Q_{2}\), solution of (5):
\[\left\{\begin{array}{l}C(x_{l},u_{l})=C(P_{2})\\ Tr(x_{l},u_{l})=0\end{array}\right. \tag{14}\]
which gives, as indicated above, \(P_{l}\simeq(0.9852,-0.1971)\) (with the same caveat regarding the precision).
Figure 5 contains the plot of the extremal \(x(t)\) (in blue) and its derivative \(u(t)=\dot{x}(t)\) (orange), corresponding to (1) with \(x_{0}=0.5\) and \(T=63\). Overlain (in dashed lines) we have plotted the entry arc from \(t=0\) to \(t=24\), and the leaving arc, from \(t=41\) to \(t=63\). There is no noticeable difference.
Figure 6 shows, on the left, the difference between \(x(t)\) and the entry arc for the same \(T=63\), and on the right, the difference between \(u(t)=\dot{x}(t)\) and the corresponding value on the turnpike, for \(t=T-22\) to \(T\) (where \(22\) is chosen so that the \(x\)-value of the leaving arc is within \(10^{-5}\) of the true turnpike \(P_{2}\)). Notice that the time is inverted in the latter plot because we have
computed the leaving arc "backwards". The errors are, as can be seen, negligible for all practical purposes.
Finally, Figure 7 contains the plots of the different solutions \(x(t)\) for times \(T\) between \(51\) and \(56\) and for time \(T=63\). The structure of the entry arc is essentially the same for all of them, and they are, obviously, indistinguishable, whereas the leaving arc is also essentially the same but starts at different times. Figure 8 shows the difference between the corresponding entry and leaving arcs and the ones of the turnpike (where the cutting point is set as above).
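The need for such precision can be illustrated directly: integrating the field \(\mathcal{E}\) forward from \(P_{e}=(0.5,u_{e})\) shadows the turnpike for a long stretch of time before the unstable direction eventually takes over. A minimal sketch (ours; SciPy assumed, horizon and tolerances chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import solve_ivp

def field(t, z):
    x, u = z
    p = x**6 - 1.79487*x**5 + 3*x**4 - 7.17949*x**3 + 8.12821*x**2 - 5.38462*x + 1
    return [u, 0.39 * x * p / (x**2 + 1)**3]

u_e = 0.30751221580                      # entry-arc slope at x0 = 0.5 quoted above
sol = solve_ivp(field, (0.0, 40.0), [0.5, u_e], rtol=1e-10, atol=1e-12, dense_output=True)

# distance to the turnpike P2 ~ (1.5062, 0) along the trajectory
t = np.linspace(0.0, 40.0, 2000)
x, u = sol.sol(t)
print(np.hypot(x - 1.5062, u).min())     # stays small over a long portion of the horizon
```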
Figure 5. Turnpike entry and leaving arcs compared to solution for \(T=63\).
Figure 6. Absolute differences between entry (left) and leaving (right) arcs and the corresponding part of the solution for \(T=63\). On the right, the time is reversed (from \(Q_{l}\) to \(P\)).
Figure 7. Solutions for times between \(51\) and \(56\), and for \(T=63\).
## 7. Final remarks
Our aim in this paper is just to show, in the one-dimensional case, which is the easiest to visualize, how to compute the entry and leaving arcs of the turnpike of an autonomous variational problem, in order to settle this question. Of course, the generalization to variational problems in which the functional \(F(x_{1},\dot{x}_{1},\ldots,x_{k},\dot{x}_{k})\) has "separated variables", that is, problems with
\[\frac{\partial^{2}F}{\partial u\partial v}=0\]
whenever \(u,v\) correspond to variables with different indices (i.e. \(u\in\{x_{i},\dot{x}_{i}\}\) and \(v\in\{x_{j},\dot{x}_{j}\}\) with \(i\neq j\)) is straightforward, as the associated vector fields are defined by independent equations.
The most general autonomous case is, for the time being, inaccessible to us, but we hope the technique presented in this work may be useful in elucidating its solution.
|
2302.01752 | Long Distance Nonlocality Test with Entanglement Swapping and
Displacement-Based Measurements | We analyze an all-optical setup which enables Bell-inequality violation over
long distances by exploiting probabilistic entanglement swapping. The setup
involves only two-mode squeezers, displacements, beamsplitters, and on/off
detectors. We analyze a scenario with dichotomic inputs and outputs, and check
the robustness of the Bell inequality violation for up to 6 parties, with
respect to phase-, amplitude-, and dark-count noise, as well as loss. | Anders J. E. Bjerrum, Jonatan B. Brask, Jonas S. Neergaard-Nielsen, Ulrik L. Andersen | 2023-02-02T12:04:18Z | http://arxiv.org/abs/2302.01752v1 | # Long Distance Nonlocality Test with Entanglement Swapping and Displacement-Based Measurements
###### Abstract
We analyze an all-optical setup which enables Bell-inequality violation over long distances by exploiting probabilistic entanglement swapping. The setup involves only two-mode squeezers, displacements, beamsplitters, and on/off detectors. We analyze a scenario with dichotomic inputs and outputs, and check the robustness of the Bell inequality violation for up to 6 parties, with respect to phase-, amplitude-, and dark-count noise, as well as loss.
## I Introduction
These... may be termed conditions of possible experience. When satisfied they indicate that the data _may_ have, when not satisfied they indicate that the data _cannot_ have resulted from an actual observation.
George Boole [1862]
As pointed out already by Boole in his work on probability theory, logical relations between observable events imply inequalities for the probabilities of their occurrence [1; 2]. Bell later demonstrated that the inequalities implied by a local realist description of nature can be violated within quantum mechanics [3], implying that quantum mechanics cannot be recast as a local realist theory. Subsequent experimental investigations by Clauser, Aspect and their collaborators [4; 5; 6] confirmed the nonlocal predictions of quantum mechanics, and nonlocality gradually became accepted as an aspect of nature. These early experiments were however not loophole-free, and while loophole-free violations have since been realised [7; 8; 9], it still remains experimentally challenging.
Loopholes constitute ways in which nature, or an eavesdropper, can arrange experimental outcomes, such that an experiment appears nonlocal, while in reality it is not. The detection loophole is relevant when inconclusive measurements are discarded from the experimental data [10]. Such inconclusive measurements typically occur due to losses during transmission of the particles, or non-unit efficiency of the detectors. It has been demonstrated that discarding inconclusive measurement rounds renders it possible to violate a Bell inequality using classical optics [11]. The locality loophole is present if measurements are performed such that a sub-luminal signal can transfer information between measurement stations during a measurement sequence. Such a sequence includes the act of choosing a measurement basis, and performing the measurement in this basis. The locality loophole can be closed by separating the measurement stations and keeping the duration of the measurement sequence short. However, this separation tends to induce losses and noise in the state shared by the participants of the experiment, and these losses tend to make the shared quantum state local, i.e. it cannot be used to demonstrate a Bell inequality violation.
In spite of these difficulties, the utilization of nonlocality is now moving from fundamental science towards practical applications, where the provable nonlocality of a quantum state is used in device-independent protocols to certify the security of a cryptographic key [12; 13]. Crucial to the realization of device-independent quantum key distribution is the ability to close relevant loopholes, and to demonstrate the violation of Bell inequalities across distances relevant for telecommunication.
In this work we propose an experiment capable of violating a Bell inequality when the parties are separated by channels of low transmission. Our experiment is designed to be capable of closing the detection and locality loophole, and invokes only standard quantum optics tools, such as two-mode squeezers, displacements, and click detectors (on/off detectors). A sketch of the setup with N parties is shown in Fig. 1. The proposed experiment is inspired by the setup in [14], in which displacement-based measurements are used to demonstrate a Bell inequality violation. Two-mode squeezers (T) generate weakly squeezed two-mode squeezed vacuum states with half of each state sent a short distance to an on/off detector, and the other half sent to an interferometer B. The left-going modes in Fig. 1 are labelled \(p_{n}\) and the right-going modes are labelled \(s_{n}\), we group them into two sets \(P=\{p_{1},p_{2},\ldots,p_{N}\}\) and \(S=\{s_{1},s_{2},\ldots,s_{N}\}\). We use the same label for a mode and the corresponding detector. The interferometer B mixes the modes \(S\), so that a photon arriving at one of the input ports of B, has an equal probability of triggering each of the detectors in \(S\). We then require that only detector \(s_{N}\) clicks, and that the remaining detectors in \(S\) do not click, similar to an event-ready scheme [15]. Following this post-selection, the measurement outcomes for the detectors in \(P\) are approximately the same as if the parties shared the single-photon state \(\frac{1}{\sqrt{N}}(|1,0,\ldots,0\rangle+|0,1,\ldots,0\rangle+|0,0,\ldots,1\rangle)\), in the limit
of low squeezing. The nonlocality of this single-photon state was already analysed in [16; 17; 18], and we expect to see similar results for the approximate single-photon state analysed in this work.
Each detector in \(P\) is considered as a party, with the possible measurement outcomes, click or no click, corresponding to whether any light arrives at the detector or not. Prior to each detector, either of two different displacements (D in Fig. 1) is applied to the field. These displacements make up the two different measurement settings. We write the displacement applied on mode \(p\in P\) as \(X_{p}^{(n_{p})}=\left(x_{p}^{(n_{p})}\;\;y_{p}^{(n_{p})}\right)^{T}\), with \(n_{p}\in\{0,1\}\) labelling which of two possible displacements is implemented (measurement setting). We assume that all parties are choosing between the same two displacements, when the phases of the N two-mode squeezers are the same. This assumption is invoked to simplify our analysis, and we found no advantage when deviating from it. The displacement operator for mode \(p\) is defined as,
\[D_{p}\left(X_{p}^{(n_{p})}\right)=\exp\left[i(\hat{q}_{p}\;y_{p}^{(n_{p})}-\hat {p}_{p}\;x_{p}^{(n_{p})})\right], \tag{1}\]
where \(\hat{q}_{p}\) and \(\hat{p}_{p}\) are the quadrature operators for mode \(p\). We follow the convention \([\hat{q}_{k},\hat{p}_{l}]=2i\delta_{kl}\). From the quadrature operators we obtain the annihilation operator, \(\hat{a}_{p}=\frac{1}{2}\left(\hat{q}_{p}+i\hat{p}_{p}\right)\). The coherent state generated by the displacement \(X_{p}^{(n_{p})}\), i.e. the state, \(\left|X_{p}^{(n_{p})}\right\rangle=D_{p}\left(X_{p}^{(n_{p})}\right)\left|0\right\rangle\), is centred on the coordinates \(\left(q_{p}\;\;p_{p}\right)=\left(2x_{p}^{(n_{p})}\;\;2y_{p}^{(n_{p})}\right)\) in phase space. We associate a click at a detector with the value 1, and no click with the value -1. The observable associated with detector \(p\) is then given by,
\[M_{p} = \left(I_{p}-\left|0\right\rangle_{p}\!\left\langle 0\right| \right)-\left|0\right\rangle_{p}\!\left\langle 0\right| \tag{2}\] \[= I_{p}-2\left|0\right\rangle_{p}\!\left\langle 0\right|, \tag{3}\]
where \(I_{p}\) is the identity operator associated with mode \(p\). We may transfer the displacement applied prior to detector \(p\) onto the observable to obtain,
\[M_{p}^{(n_{p})}=I_{p}-2\left|-X_{p}^{(n_{p})}\right\rangle_{p}\!\left\langle- X_{p}^{(n_{p})}\right|. \tag{4}\]
We attempt to violate the W\({}^{3}\)ZB (Werner-Wolf-Weinfurter-Zukowski-Brukner) inequality [19; 20; 21],
\[2^{-N}\sum_{b}\left|\sum_{n}(-1)^{\langle b,n\rangle}\langle M^{(n)}\rangle\right|\leq 1. \tag{5}\]
\(b\) and \(n\) are binary lists of length \(N\), and the sums run over all possible binary lists. \(\langle b,n\rangle\) is the dot product between \(b\) and \(n\). The entries of \(n\) label the measurement settings of the involved parties. \(\langle M^{(n)}\rangle\) is the correlator given by the product \(\langle M^{(n)}\rangle=\langle\prod_{p}M_{p}^{(n_{p})}\rangle\). We will refer to the left side of Eq. 5 as the Bell value of the W\({}^{3}\)ZB inequality. The maximal violation of the W\({}^{3}\)ZB inequality increases with the number of parties [22]. We therefore expect that when some loss and noise does not scale with the number of parties, then a violation of a W\({}^{3}\)ZB inequality with more parties is more robust against this loss and noise, as compared to a W\({}^{3}\)ZB inequality with fewer parties.
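For concreteness, the left-hand side of Eq. 5 is a simple function of the \(2^{N}\) correlators and can be evaluated as in the following sketch (ours, not from the paper; NumPy assumed). The two-party example uses the familiar optimal CHSH-type correlators and returns \(\sqrt{2}>1\):

```python
import numpy as np
from itertools import product

def bell_value(E):
    """Left-hand side of the W3ZB inequality; E[n_1, ..., n_N] = <M^(n)>."""
    N = E.ndim
    total = 0.0
    for b in product((0, 1), repeat=N):
        s = sum((-1) ** int(np.dot(b, n)) * E[n] for n in product((0, 1), repeat=N))
        total += abs(s)
    return total / 2**N

# two parties with correlators <M^(n1, n2)> = (-1)^(n1*n2)/sqrt(2)
E2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(bell_value(E2))   # ~1.414, above the local bound of 1
```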
To close both the locality and detection loophole with two parties, \(p_{1}\) and \(p_{2}\), we require that the events of the experiment are positioned as shown in the space-time diagram in Fig. 2. The events T\({}_{p_{1}}\) and T\({}_{p_{2}}\) correspond to the generation of two-mode squeezed vacuum. These events occur along a temporal (vertical) line, since the light emitted from the source has a finite duration \(t_{p}\). For this reason there exists at each position \(x\) a duration of time where we expect the light to arrive with very high probability; this is marked with a darker shaded area. The measurements by \(p_{1}\) and \(p_{2}\) are labelled M\({}_{p_{1}}\) and M\({}_{p_{2}}\) respectively. M\({}_{s}\) corresponds to the event
Figure 1: _Sketch of the analysed setup with N parties. The left-going modes are labelled \(p_{n}\) and the right-going modes are labelled \(s_{n}\). A detector associated with a mode is given the same label as that mode. The measurement performed by the detectors in \(S\) effectively swaps the N entangled states from the two-mode squeezers into an N-mode entangled state. ch abbreviates channel._
where \(s_{1}\) and \(s_{2}\) measure. The choices of measurement settings are labelled C\({}_{p_{1}}\) and C\({}_{p_{2}}\). The measurements M\({}_{p_{1}}\) and M\({}_{p_{2}}\) collapse the temporal width of the pulses, as illustrated in the figure by an \(\times\). The swap M\({}_{s}\) occurs with very high probability along the vertical black line inside the central black and yellow dashed diamond. The backwards light cone for a swapping event will then typically be bounded by the dashed backwards light cone.
To close the detection loophole, \(p_{1}\) and \(p_{2}\) must choose their measurement settings at a time and place such that information about their choices cannot influence the swapping measurement M\({}_{s}\) via a sub-luminal signal. If the experiment is executed in this way, then we anticipate that an eavesdropper cannot tamper with the swap to falsify nonlocal correlations [23]. Most swapping events will obey this requirement if C\({}_{p_{1}}\) and C\({}_{p_{2}}\) are outside the dotted backward time cone shown in Fig. 2. The critical distance \(d_{c}\), which is the characteristic distance the event C\({}_{p_{2}}\) must be separated from the two-mode squeezer T\({}_{p_{2}}\), can be found by geometric arguments as \(d_{c}=(1/2)ct_{p}\), and is associated with a waiting time \(t_{c}=(1/2)t_{p}\). Ideally \(p_{2}\) could make her choice of measurement setting at a distance \(d_{c}\) from T\({}_{p_{2}}\), at a time \(t_{c}\) after the light started to be emitted from the squeezer. Then her choice would most likely not be able to influence the swap M\({}_{s}\), while at the same time ensuring that the light pulse has not passed by her yet.
The experimental constraints discussed above generalize to the scenario where N parties attempt to obtain a Bell inequality violation, while closing the detection and locality loophole. That is, the parties should ensure that the events C\({}_{p_{n}}\) are outside the backward timecone for the swapping event M\({}_{s}\). However, one should also ensure that the parties are sufficiently distant from each other, so that information on the choice of measurement setting and outcome cannot travel between parties during a measurement sequence.
## II Model
We now give an outline of how we model the optical field, and how we include experimental imperfections in our analysis. A full description can be found in appendix A1. The fields generated by the two-mode squeezers are distributed in time and space according to some mode functions [24]. The amplitudes of these modes are quantum uncertain with Gaussian statistics described by a covariance matrix \(\sigma\) with elements \(\sigma_{kl}=1/2\langle\{Q_{k},Q_{l}\}\rangle-\langle Q_{k}\rangle\langle Q_{l}\rangle\), where \(\{.,.\}\) denotes the anti-commutator and \(Q=Q_{P}\oplus Q_{S}\), where \(Q_{P}=\bigoplus_{p\in P}\left(\hat{q}_{p}\ \hat{p}_{p}\right)\) with \(Q_{S}=\bigoplus_{s\in S}\left(\hat{q}_{s}\ \hat{p}_{s}\right)\)[25]. The corresponding density matrix, also describing the statistics of the field, is denoted \(\rho\). We denote the squeezing parameter of the N squeezers as \(r\) and introduce the symbols, \(a=\sinh(2r)\) and \(v=\cosh(2r)\). The covariance
Figure 2: _Space-time diagram of a loophole-free experiment with two parties, showing the space-time ordering of important events (marked by \(\times\)). The events T\({}_{p_{1}}\) and T\({}_{p_{2}}\) correspond to the generation of two-mode squeezed vacuum. C\({}_{p_{1}}\) and C\({}_{p_{2}}\) are the events where \(p_{1}\) and \(p_{2}\) decide their measurement settings. M\({}_{p_{1}}\) and M\({}_{p_{2}}\) correspond to events where \(p_{1}\) and \(p_{2}\) measure. M\({}_{s}\) corresponds to the event where \(s_{1}\) and \(s_{2}\) measure. At the bottom we sketch the experimental setup (compare with Fig. 1), where S corresponds to the swap following the interferometer B._
matrix of the 2N modes can be written as,
\[\sigma=\begin{pmatrix}v\mathbf{I}&\mathbf{R}_{\phi}\\ \mathbf{R}_{\phi}&v\mathbf{I}\end{pmatrix}, \tag{6}\]
where \(\mathbf{I}\) is the identity matrix of dimension 2N and \(\mathbf{R}_{\phi}\) is the block diagonal matrix,
\[\mathbf{R}_{\phi}=\bigoplus_{p}\begin{pmatrix}a\cos(\phi_{p})&-a\sin(\phi_{p}) \\ -a\sin(\phi_{p})&-a\cos(\phi_{p})\end{pmatrix}, \tag{7}\]
where \(\phi_{p}\) is the phase angle of the squeezer for party \(p\). The expectation value of the field amplitude is assumed zero. The Wigner characteristic function corresponding to \(\rho\) is given by \(\chi_{\rho}(\Lambda)=\exp[-(1/2)\Lambda^{T}\Omega\sigma\Omega^{T}\Lambda]\) where \(\Lambda\) is a vector of conjugate quadratures (the Fourier transform dual to the quadratures) for the modes \(P\) and \(S\), i.e. \(\Lambda=\Lambda_{P}\oplus\Lambda_{S}\), where \(\Lambda_{P}=\bigoplus_{p\in P}\Lambda_{p}\) and \(\Lambda_{S}=\bigoplus_{s\in S}\Lambda_{s}\). The conjugate quadratures for mode \(k\) is a vector \(\Lambda_{k}=\left(\lambda_{kx}\;\;\lambda_{ky}\right)^{T}\). We have also introduced the symplectic form \(\Omega=\bigoplus_{k=1}^{2N}\omega\), where \(\omega\) is the anti-symmetric matrix,
\[\omega=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}. \tag{8}\]
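As a concrete illustration (our sketch, not part of the paper; NumPy assumed), the covariance matrix of Eqs. (6)-(7) can be assembled for given \(r\) and squeezer phases and checked against the uncertainty relation \(\sigma+i\Omega\geq 0\):

```python
import numpy as np

def covariance(N, r, phis):
    """Covariance matrix of N two-mode squeezed vacua, Eqs. (6)-(7); ordering Q_P then Q_S."""
    a, v = np.sinh(2 * r), np.cosh(2 * r)
    R = np.zeros((2 * N, 2 * N))
    for k, phi in enumerate(phis):
        R[2*k:2*k + 2, 2*k:2*k + 2] = a * np.array([[np.cos(phi), -np.sin(phi)],
                                                    [-np.sin(phi), -np.cos(phi)]])
    I = np.eye(2 * N)
    return np.block([[v * I, R], [R, v * I]])

N, r = 3, 0.2
sigma = covariance(N, r, [0.0] * N)

# uncertainty relation: sigma + i*Omega must be positive semidefinite
omega = np.array([[0, 1], [-1, 0]])
Omega = np.kron(np.eye(2 * N), omega)
print(np.linalg.eigvalsh(sigma + 1j * Omega).min() >= -1e-9)   # True for a physical state
```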
The modes \(S\) are then mixed on the interferometer B, and we assume that the corresponding mode functions are identical and have a high overlap at the beamsplitters making up the interferometer. Let \(\hat{a}_{s}\) be the amplitude operator for a mode \(s\in S\); the interferometer B is assumed to generate the Bogoliubov transformation,
\[\begin{pmatrix}\hat{a}_{s_{1}}\\ \hat{a}_{s_{2}}\\ \hat{a}_{s_{3}}\\ \vdots\\ \hat{a}_{s_{N}}\end{pmatrix}\rightarrow\frac{1}{\sqrt{N}}\begin{pmatrix}1&e^{i\frac{2\pi}{N}}&e^{i2\frac{2\pi}{N}}&\cdots&e^{i(N-1)\frac{2\pi}{N}}\\ 1&e^{i2\frac{2\pi}{N}}&e^{i4\frac{2\pi}{N}}&\cdots&e^{i2(N-1)\frac{2\pi}{N}}\\ 1&e^{i3\frac{2\pi}{N}}&e^{i6\frac{2\pi}{N}}&\cdots&e^{i3(N-1)\frac{2\pi}{N}}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&1&1&\cdots&1\end{pmatrix}\begin{pmatrix}\hat{a}_{s_{1}}\\ \hat{a}_{s_{2}}\\ \hat{a}_{s_{3}}\\ \vdots\\ \hat{a}_{s_{N}}\end{pmatrix} \tag{9}\]
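The interferometer matrix in Eq. (9) is a discrete-Fourier-transform network (the overall \(1/\sqrt{N}\) makes it unitary); a quick numerical check (ours; NumPy assumed) of its unitarity and of the equal splitting \(|U_{jk}|^{2}=1/N\):

```python
import numpy as np

def interferometer(N):
    """Interferometer unitary of Eq. (9): U[j, k] = exp(i*2*pi*j*k/N)/sqrt(N), j = 1..N, k = 0..N-1."""
    j, k = np.meshgrid(np.arange(1, N + 1), np.arange(N), indexing='ij')
    return np.exp(1j * 2 * np.pi * j * k / N) / np.sqrt(N)

U = interferometer(4)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # unitary
print(np.allclose(np.abs(U)**2, 0.25))          # equal splitting: |U_jk|^2 = 1/N
```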
We condition the state on obtaining a click at detector \(s_{N}\) and no clicks at the remaining detectors, thereby heralding the conditional state \(\rho_{c}\) of modes \(P\). The projector corresponding to this event is \(\tilde{\Pi}_{c}=\left(\prod_{s\in\bar{S}}|0\rangle_{s}\langle 0|\right)\left(I_{s_{N}}-|0\rangle_{s_{N}}\langle 0|\right)\), where \(\bar{S}\) is the set \(\bar{S}=S\backslash\{s_{N}\}\). The conditional state is obtained as \(\rho_{c}=\mathrm{Tr}_{S}[\rho\tilde{\Pi}_{c}]/P(C)\), where \(P(C)\) is the normalization, i.e. the probability of obtaining the measurement outcomes heralding a successful swap. \(\tilde{\Pi}_{c}\) has the characteristic function,
\[\chi_{c}(\Lambda_{S})=\mathrm{Tr}\left[\tilde{\Pi}_{c}D_{S}( \Lambda_{S})\right]\] \[=E(\Lambda_{\bar{S}})\cdot\left(\pi\delta^{(2)}(\Lambda_{s_{N}})- E(\Lambda_{s_{N}})\right), \tag{10}\]
where
\[E(\Lambda_{j})=\exp\left[-\frac{1}{2}\Lambda_{j}^{T}\Lambda_{j}\right], \tag{11}\]
and \(\delta^{(2)}(\Lambda_{j})\) is a delta function. We obtain the characteristic function of the conditional state through integration,
\[\chi_{\rho_{c}}(\Lambda_{P})=\frac{1}{\pi^{N}P(C)}\int_{\mathbb{R}^{2N}}\chi_{ \rho}(\Lambda)\chi_{c}(-\Lambda_{S})d^{2N}\Lambda_{S}. \tag{12}\]
We then compute the Bell value of the W\({}^{3}\)ZB inequality by evaluating the expectation values \(\langle M^{(n)}\rangle=\langle\prod_{p\in P}M_{p}^{(n_{p})}\rangle\), for each setting \(n\). This is done via the integral [26],
\[\left\langle\prod_{p\in P}M_{p}^{(n_{p})}\right\rangle=\mathrm{Tr }\left\{\rho_{c}\prod_{p\in P}M_{p}^{(n_{p})}\right\}\] \[=\frac{1}{\pi^{N}}\int_{\mathbb{R}^{2N}}\chi_{\rho_{c}}(-\Lambda_{ P})\chi_{M}\left(\Lambda_{P},X_{P}\right)d^{2N}\Lambda_{P}, \tag{13}\]
where \(\chi_{M}\left(\Lambda_{P},X_{P}\right)\) is the characteristic function associated with the observable \(\prod_{p\in P}M_{p}^{(n_{p})}\). \(X_{P}\) is a vector of the displacements applied prior to the detectors, \(X_{P}=\bigoplus_{p\in P}X_{p}^{(n_{p})}\). A closed form expression for \(\langle\prod_{p\in P}M_{p}^{(n_{p})}\rangle\) can be found in appendix A1.
### Noise model
We now outline how we describe noise relevant to the experiment. We will include dark-counts in the detectors, loss in the channels, phase noise in the channels and measurements, and finally, amplitude noise in the measurements. Amplitude and phase noise during measurement are expected to arise if imperfect displacements are applied.
We include dark-counts in our measurement model by adding a noise term to the observable. Given that \(p_{d}\) is the probability of getting a dark-count during the measurement interval, then we measure the observable,
\[M_{p}^{(n_{p})} =(1-p_{d})\left[I_{p}-2\left|-X_{p}^{(n_{p})}\right\rangle_{p}\left\langle-X_{p}^{(n_{p})}\right|\right]+p_{d}I_{p}\] \[=I_{p}-2(1-p_{d})\left|-X_{p}^{(n_{p})}\right\rangle_{p}\left\langle-X_{p}^{(n_{p})}\right|. \tag{14}\]
If the detectors in \(S\) are triggered by a dark-count with probability \(p_{d}\), then the swap results in the transformation (see appendix A1),
\[\rho\rightarrow\rho_{c}=\frac{1}{P(C)}\,\mathrm{Tr}_{S}\left[\rho\tilde{\Pi}_{c }\right]. \tag{15}\]
We have introduced the operator \(\widetilde{\Pi}_{c}\),
\[\widetilde{\Pi}_{c}=(1-p_{d})^{N-1}\left(\prod_{s\in\bar{S}}|0\rangle_{s}\langle 0|\right)\cdot\left(I_{s_{N}}-(1-p_{d})\,|0\rangle_{s_{N}}\langle 0|\right). \tag{16}\]
Given that channel \(\mathrm{ch}_{p_{n}}\) has transmission \(\eta_{p_{n}}\) and channel \(\mathrm{ch}_{s_{k}}\) has transmission \(\eta_{s_{k}}\), we model loss by a Gaussian map acting on the covariance matrix \(\sigma\) as [26],
\[\sigma\to G_{\eta}^{1/2}\sigma G_{\eta}^{1/2}+\left(I-G_{\eta}\right), \tag{17}\]
with the diagonal matrix \(G_{\eta}=G_{\eta p}\oplus G_{\eta_{S}}\), where \(G_{\eta p}=\mathrm{Diag}\left[\bigoplus_{p\in P}\left(\eta_{p}~{}~{}\eta_{p} \right)\right]\) and \(G_{\eta_{S}}=\mathrm{Diag}\left[\bigoplus_{s\in S}\left(\eta_{s}~{}~{}\eta_{s }\right)\right]\). We will assume that \(\eta_{p_{n}}\) equals \(\eta_{P}\), i.e. the channels \(\mathrm{ch}_{p}=\{\mathrm{ch}_{p_{1}},\mathrm{ch}_{p_{2}},\ldots,\mathrm{ch}_{ p_{N}}\}\) have the same transmission. Likewise we assume that \(\eta_{s_{k}}\) equals \(\eta_{S}\). \(\eta_{d}\) is the efficiency of a detector, and \(1-\eta_{d}\) is the loss of the detector. Given that \(\eta_{d}\) is the same for all detectors in \(S\), detector loss can then be commuted through B and absorbed into the transmission of the channels \(\mathrm{ch}_{\mathsf{s}}=\{\mathrm{ch}_{s_{1}},\mathrm{ch}_{s_{2}},\ldots, \mathrm{ch}_{s_{N}}\}\). Likewise, the detector loss in \(P\) can be shifted to be prior to the displacements, if we attenuate the magnitude of the displacements by the factor \(\sqrt{\eta_{d}}\).
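A minimal sketch (ours; NumPy assumed) of the loss map in Eq. (17), with one transmission per mode in the ordering \(P\) then \(S\):

```python
import numpy as np

def apply_loss(sigma, etas):
    """Gaussian loss channel of Eq. (17); etas holds one transmission per mode."""
    G = np.diag(np.repeat(etas, 2))              # same eta for both quadratures of a mode
    G_half = np.sqrt(G)
    return G_half @ sigma @ G_half + (np.eye(len(G)) - G)

# example: N = 2 parties, modes ordered p1, p2, s1, s2; placeholder covariance matrix
N, eta_P, eta_S = 2, 0.9, 0.2
sigma = np.eye(4 * N)                            # replace by sigma from Eqs. (6)-(7)
sigma_lossy = apply_loss(sigma, [eta_P] * N + [eta_S] * N)
print(np.diag(sigma_lossy))                      # vacuum is a fixed point of the channel
```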
A phase perturbation of the state \(\rho\), e.g. caused by environmental disturbance, can be modelled as a stochastic rotation in phase space,
\[\rho=\int d^{N}\mathbf{\theta}P(\mathbf{\theta})R(\mathbf{\theta})\rho_{0}R(-\mathbf{\theta}), \tag{18}\]
where \(\rho_{0}\) is the unperturbed state, and \(\mathbf{\theta}\) is a vector of stochastic rotation angles \(\theta_{p}\) for \(p\in P\), each being a perturbation on the phase of the corresponding mode. Note that phase noise acting on channels \(\mathrm{ch}_{\mathsf{S}}\) is shifted to act on channels \(\mathrm{ch}_{p}\) instead. \(R(\mathbf{\theta})\) is the rotation operator \(R(\mathbf{\theta})=\prod_{p\in P}R_{p}\left(\theta_{p}\right)\). \(R_{p_{n}}\left(\theta_{p_{n}}\right)\) is applied just prior to the displacement operation on mode \(p_{n}\), and includes phase noise resulting from propagation in the channels \(\mathrm{ch}_{p_{n}}\) and \(\mathrm{ch}_{s_{n}}\), and also the phase noise in the subsequent displacement operation. We make the assumption that the angles \(\mathbf{\theta}\) are uncorrelated, and model the probability density \(P(\mathbf{\theta})\) as a product of normal distributions for each angle \(\theta_{p}\). The variance of \(\theta_{p}\) is labelled as \(V_{\theta}\), and is the same for all modes. The correlated phase noise resulting from the interferometer B cannot be entirely captured by this simple model, but we expect that our model is sufficiently close to reality to indicate the sensitivity of the experiment toward phase noise. We furthermore assume that the angles \(\theta_{p}\) are small, allowing us to approximate the rotation of a coherent state by a small linear translation in phase space.
Amplitude noise arises from an imperfect displacement and is modelled similarly to phase noise, with the rotation operator in Eq. 18 replaced by a displacement operator. The stochastic displacement on mode \(p\) is given relative to the displacement \(X_{p}^{(n_{p})}\) applied on mode \(p\), i.e. for mode \(p\) we obtain the stochastic displacement \(r_{p}X_{p}^{(n_{p})}\), where \(r_{p}\) is referred to as the relative amplitude. We assume that the relative amplitudes \(r_{p}\) are normal, independent and identically distributed, with variance \(V_{A}\). A more detailed description of the noise model can be found in appendix A1.
## III Results and discussion
We compute Bell values under varying experimental conditions. In order to obtain realistic values we must include in the model reasonable experimental errors. We choose the noise parameters shown in Table 1. Unless otherwise stated, these are the values used for the noise parameters throughout our analysis. E.g. if we vary \(\eta_{P}\), as is done in Fig. 6, then the remaining noise parameters are set at the values listed in Table 1.
We maximize the violation of the W\({}^{3}\)ZB inequality in the squeezing parameter \(r\). The Bell value as a function of \(r\), for the optimal choice of measurement settings, is shown in Fig. 3. We clearly observe that there exists an optimal squeezing value for which the Bell value is maximized, and that the optimal squeezing depends on the number of parties. We also observe that the maximal Bell value increases for more parties, until 6 parties, at which point the maximal Bell value decreases for more parties.
While the correlations between all parties lead to a violation of the W\({}^{3}\)ZB inequality at the optimal squeezing, we find that, for up to 4 parties, the marginal outcome probabilities describing any subgroup of parties are inside the Bell polytope, with the used measurement settings. This was evidenced by a linear program (see appendix A2), and indicates that in these cases nonlocality results from correlations between _all_ parties. An exception can occur for 5 parties if \(\eta_{P}\) is above 97%, and for 6 parties if \(\eta_{P}\) is above 91%, with the used measurement settings. In these cases a Bell inequality can be broken with a subgroup of 4 and 5 parties respectively.
We find the optimal displacements (measurement settings), at the optimal squeezing, for which the violation is maximized. The optimal displacement for party \(p_{1}\) and another party \(p_{n}\), are shown in Fig. 4. The phase angles of the two-mode squeezers belonging to \(p_{1}\) and \(p_{n}\) respectively, are labelled as \(\phi_{p_{1}}\) and \(\phi_{p_{n}}\). \(m_{0}^{(p_{1})}\) and \(m_{1}^{(p_{1})}\) are the displacements used by party \(p_{1}\), whereas \(m_{0}^{(p_{n})}\) and \(m_{1}^{(p_{n})}\) are the displacements used by party \(p_{n}\). \(m_{0}^{(p_{1})}\) and \(m_{0}^{(p_{n})}\) have the same magnitude, but the displacements are directed along different quadrature axes at an angle \(\phi_{p_{1}}-\phi_{p_{n}}\), likewise for \(m_{1}^{(p_{1})}\) and \(m_{1}^{(p_{n})}\).
\begin{table}
\begin{tabular}{c|c|c|c|c} \(\eta_{P}\) & \(\eta_{S}\) & \(\sigma_{A}\) & \(\sigma_{\theta}\) & \(P_{d}\) \\ \hline
0.9 & 0.2 & 3/100 & 100 mrad & 1/10000 \\ \end{tabular}
\end{table}
Table 1: _Standard settings for noise parameters. \(\eta_{P}\) is the transmission of the channels \(\text{ch}_{\mathsf{s}}\) (\(\text{ch}_{\mathsf{s}_{1}}\), \(\text{ch}_{\mathsf{s}_{2}}\) etc.). \(\eta_{S}\) is the transmission of channels \(\text{ch}_{\mathsf{s}}\). \(\sigma_{A}\) is the standard deviation of the relative amplitude distribution \(\left(\sigma_{A}^{2}=V_{A}\right)\). \(\sigma_{\theta}\) is the standard deviation of the phase angle distribution \(\left(\sigma_{\theta}^{2}=V_{\theta}\right)\). \(P_{d}\) is the probability of getting a dark-count during the measurement interval (which is assumed to be \(t_{p}\) in our analysis)._
So the displacements used by a given party \(p_{n}\) will be determined by the phase angle of their squeezer \(\phi_{p_{n}}\). The magnitudes of \(m_{0}^{(p_{n})}\) and \(m_{1}^{(p_{n})}\) depend on the number of parties and are listed in Table 2. The overall orientation of the quadrature axes is arbitrary, i.e. we can freely rotate Fig. 4, as long as the angle between displacements remain unchanged. In this sense, the displacements used by party \(p_{1}\) serve as a reference from which we can define the displacements to be used by the remaining parties.
We note that the optimum in squeezing, seen in Fig. 3, is the result of a competition between the dark-count rate and the multi-photon generation rate. A dark-count would render the measurements by the parties uncorrelated, thereby lowering the Bell value. This indicates that it is preferable to have high squeezing, so that photons from the optical field outnumber the dark-counts. However, the click detectors in \(S\) cannot distinguish between 1 or more photons. Multi-photon emission from the two-mode squeezers therefore create mixedness in the conditional state generated by the swap, and this mixedness weakens the correlations between the measurement outcomes obtained by the parties. This mixedness can be avoided by lowering the degree of squeezing, so that on average less than one photon reaches the detectors in \(S\). As a result, there is some amount of squeezing where the combined detrimental effect of dark-counts and multi-photon generation is minimized. As we increase the number of parties, the presence of dark-counts becomes more detrimental due to the increased number of detectors, and the lower probability of a successful swap \(P(C)\). This is the cause of the decrease in maximal Bell value for 7 and 8 parties, as compared to the case with 6 parties.
We investigate the sensitivity of the experiment against a dark-count at a detector in \(S\), the result can be seen in Fig. 5 (left) for different number of parties. A dark-count at detector \(s_{N}\) would mistakenly herald nonlocal correlations between the detectors in \(P\), when no such correlations actually exists. This erroneous heralding significantly lowers the calculated Bell value. The Bell value is found to rapidly decrease around \(P_{d}\approx 0.02\%\). At this point, the probability of getting a dark-count is no longer insignificant compared with the probability of generating the conditional state, which is in the range 0.2% to 0.5%, depending on the number of parties (see Fig. 3). For the case of 2 parties, the decrease in Bell value proceeds a bit slower, however the lower initial Bell value (1.09) results in the curve reaching the classical limit of 1 at smaller dark-count probabilities.
\begin{table}
\begin{tabular}{c|c|c} No. of Parties & \(m_{0}\) & \(m_{1}\) \\ \hline
2 & 0.59 & -0.18 \\ \hline
3 & 0.47 & -0.20 \\ \hline
4 & 0.41 & -0.19 \\ \hline
5 & 0.37 & -0.18 \\ \hline
6 & 0.33 & -0.17 \\ \end{tabular}
\end{table}
Table 2: _Magnitudes of the optimal displacements shown in Fig. 4 for the optimal value of \(r\). If the detector transmission is \(\eta_{d}\), then the magnitudes should be multiplied by \(1/\sqrt{\eta_{d}}\)._
Figure 3: _Bell value against the squeezing parameter \(r\) for different number of parties. The annotation and legend gives the number of parties. The Bell value is computed for the optimal measurement settings at the given value of \(r\). We observe a maximum in the Bell value at a particular squeezing. Next to the legend we list the probability \(P(C)\) that an experiment succeeds with that number of parties, at the corresponding optimal value of \(r\)._
We also analyse the robustness of the Bell inequality violation against dark-counts at the detectors in \(P\). A plot of the Bell value against the probability of a dark-count in \(P\) is shown in Fig. 5 (right), and clearly illustrates that the violation is highly robust against such a dark-count.
The impact of loss on the Bell value of the W\({}^{3}\)ZB inequality is shown in Fig. 6 and Fig. 7. In Fig. 6 we vary the transmission \(\eta_{P}\), and show how the Bell value changes. The transmission at which the Bell value drops below one decreases as we increase the number of parties. This indicates that a demonstration of nonlocality might be easier to realize when using more parties. In Fig. 7 we show the dependence of the Bell value on the transmission \(\eta_{S}\). We observe that the Bell value is only weakly dependent on this transmission until a critical point around a transmission of 10 %. The probability \(P(C)\) of successfully generating the conditional state, heralded by detector \(s_{N}\) clicking and the remaining detectors in \(S\) staying silent, is seen to drop linearly for decreasing transmission. If we assume a fiber loss of 0.3 dB/km, we find that a transmission of 10 %
Figure 5: **Left**: Bell value of the W\({}^{3}\)ZB inequality against the probability of a dark-count in \(S\) during the measurement interval. The annotation indicates the number of parties. **Right**: Bell value of the W\({}^{3}\)ZB inequality against the probability of a dark-count in \(P\) during the measurement interval.
Figure 6: _We plot how the Bell value of the W\({}^{3}\)ZB inequality depends on the transmission of the channels ch\({}_{p}\), connecting the two-mode squeezers to the detectors in \(P\). The annotation indicates the number of parties._
Figure 7: _We plot how the Bell value of the W\({}^{3}\)ZB inequality depends on the transmission of the channels ch\({}_{S}\), connecting the two-mode squeezers to the swapping detectors \(S\). The annotation indicates the number of parties. The solid curves correspond to Bell values and match the left y-axis. The dashed curves are the corresponding probabilities of generating the conditional state, these drop as we lower the transmission \(\eta_{S}\)._
corresponds to approximately 30 km. The maximal achievable separation between two parties will then be around 60 km.
We then check the sensitivity of the experiment against phase and amplitude noise. The result is shown in Fig. 8. In Fig. 8 (left) we plot the Bell value against the standard deviation of the relative amplitude distribution, \(\sigma_{A}\). In Fig. 8 (right) we plot the Bell value against the standard deviation of the phase distribution, \(\sigma_{\theta}\). We observe that the Bell value is not very sensitive to amplitude and phase noise. This implies that the optimal displacements, shown in Table 2 and Fig. 4, are not so strict, and that slight deviations from these displacements are acceptable.
## IV Conclusion
We have proposed an experiment for demonstrating nonlocality with multiple parties separated by a set of lossy channels. The experiment utilizes only standard quantum optical elements, including on/off detectors, beamsplitters, two-mode squeezers and displacements. We have given a detailed account of how loss impacts the experiment, and identified the critical values of the channel transmissions required for a Bell inequality violation with dichotomic inputs and outputs. We found that the experiment is very robust against loss in the channels connecting the parties (\(\mathrm{ch_{S}}\)), allowing for transmissions as low as 10%. On the other hand, our calculations indicate that the nonlocality of the experiment is strongly impacted by loss in the channels connecting the two-mode squeezer of each party to the detector associated with that party (channels \(\mathrm{ch_{p}}\)). However, we found that the experiment could be made more robust against loss in channels \(\mathrm{ch_{p}}\), if the number of parties is increased. With 4 parties we found that the \(\mathrm{W^{3}ZB}\) inequality could be violated for transmissions of channels \(\mathrm{ch_{p}}\) as low as 82%. For an experiment with 4 or fewer parties, we found that the marginal outcome probabilities for all possible subgroups were inside the Bell polytope, with the used measurement settings.
Due to the heralded nature of the experiment, it is very sensitive toward dark-counts at the heralding detector. Our calculations indicate that the probability of a dark-count during a measurement must not be much higher than 1 in 10000, or the experiment fails. We then examined the influence of amplitude and phase noise, and found that the experiment is quite robust against these noise sources. The phase noise could be as high as several hundred milliradians, and the relative amplitude noise could be in excess of 25%.
## V Acknowledgment
We acknowledge the support of the Danish National Research Foundation through the Center for Macroscopic Quantum States (bigQ, DNRF0142) and research grant (40864) from VILLUM FONDEN.
## Appendix A1
The state \(\rho\) is generated by N two-mode squeezers, and occupy the modes \(S\) and \(P\). The characteristic function of \(\rho\) is given by \(\chi_{\rho}(\Lambda)=\exp[-(1/2)\Lambda^{T}\Omega\sigma\Omega^{T}\Lambda]\) where \(\Lambda\) is a vector of conjugate quadratures for the modes in \(S\) and \(P\). We introduce the following decomposition of the covariance matrix of \(\rho\),
\[\sigma=\begin{pmatrix}\sigma_{P}&K_{\bar{S}}&K_{s_{N}}\\ K_{\bar{S}}^{T}&\sigma_{\bar{S}}&C\\ K_{s_{N}}^{T}&C^{T}&\sigma_{s_{N}}\end{pmatrix}. \tag{19}\]
We also introduce the matrices,
\[K_{S}=\begin{pmatrix}K_{\bar{S}}\:K_{s_{N}}\end{pmatrix},\;\;\sigma_{S}= \begin{pmatrix}\sigma_{\bar{S}}&C\\ C^{T}&\sigma_{s_{N}}\end{pmatrix}. \tag{20}\]
The subscripts refer to the modes described by the relevant submatrix, i.e. \(\sigma_{\bar{S}}\) describes the marginal distribution of the modes \(\bar{S}=S\backslash\{s_{N}\}\).
The modes in \(S\) are mixed in the interferometer B, described by the Bogoliubov transformation in Eq. 9. We then condition the state on obtaining a click at detector \(s_{N}\) and no click at the remaining detectors in \(S\) (this is referred to as a swap). If the detectors in \(S\) are triggered by a dark-count with probability \(p_{d}\), then the swap might herald success under three different conditions,
1. No dark-counts occur. Light reaches detector \(s_{N}\) and no light reaches the remaining detectors in \(S\). This event is associated with the projector \(\hat{\Pi}_{1}=\left(\prod_{s\in\bar{S}}\ket{0}_{s}\!\bra{0}\right)\left(I_{s _{N}}-\ket{0}_{s_{N}}\!\bra{0}\right)\).
2. A dark-count occurs at detector \(s_{N}\). Light reaches detector \(s_{N}\) and no light reaches the remaining detectors in \(S\). This event is associated with the projector \(\hat{\Pi}_{1}=\left(\prod_{s\in\bar{S}}\ket{0}_{s}\!\bra{0}\right)\left(I_{s _{N}}-\ket{0}_{s_{N}}\!\bra{0}\right)\).
3. A dark-count occurs at detector \(s_{N}\). No light reaches any detectors in \(S\). This event is associated with the projector \(\hat{\Pi}_{2}=\prod_{s\in S}\ket{0}_{s}\!\bra{0}\).
Let \(P(\hat{\Pi}_{n}|C)\) be understood as the probability that the event \(\hat{\Pi}_{n}\) occurs, given that detectors \(S\) herald a successful swap \(C\). \(P(\hat{\Pi}_{n})=\mathrm{Tr}\left[\hat{\Pi}_{n}\rho\right]\) is the prior probability that the event \(\hat{\Pi}_{n}\) occurs. The swap then transforms the state
\(\rho\) into the conditional state \(\rho_{c}\) as,
\[\rho \rightarrow\rho_{c}\] \[=\operatorname{Tr}_{S}\left[P(\hat{\Pi}_{1}|C)\frac{\hat{\Pi}_{1} \rho\hat{\Pi}_{1}}{P(\hat{\Pi}_{1})}+P(\hat{\Pi}_{2}|C)\frac{\hat{\Pi}_{2}\rho \hat{\Pi}_{2}}{P(\hat{\Pi}_{2})}\right]\] \[=\operatorname{Tr}_{S}\left[\rho\left(P(\hat{\Pi}_{1}|C)\frac{ \hat{\Pi}_{1}}{P(\hat{\Pi}_{1})}+P(\hat{\Pi}_{2}|C)\frac{\hat{\Pi}_{2}}{P( \hat{\Pi}_{2})}\right)\right]. \tag{21}\]
By Bayes' theorem we have,
\[\frac{P(\hat{\Pi}_{n}|C)}{P(\hat{\Pi}_{n})}=\frac{P(C|\hat{\Pi}_{n})}{P(C)}, \tag{22}\]
which gives another expression for \(\rho_{c}\),
\[\rho_{c} =\operatorname{Tr}_{S}\left[\rho\left(\frac{P(C|\hat{\Pi}_{1})}{ P(C)}\hat{\Pi}_{1}+\frac{P(C|\hat{\Pi}_{2})}{P(C)}\hat{\Pi}_{2}\right)\right]\] \[=\frac{1}{P(C)}\operatorname{Tr}_{S}\left[\rho\widetilde{\Pi}_{ c}\right] \tag{23}\]
Where we have introduced the operator \(\widetilde{\Pi}_{c}\),
\[\widetilde{\Pi}_{c}=P(C|\hat{\Pi}_{1})\hat{\Pi}_{1}+P(C|\hat{\Pi}_{2})\hat{\Pi }_{2} \tag{24}\]
The probability of the swap being heralded as successful, given that the event \(\hat{\Pi}_{1}\) occurs, is given by \(P(C|\hat{\Pi}_{1})=(1-p_{d})^{N}+(1-p_{d})^{N-1}p_{d}\), i.e. the swap will succeed as long as no dark-count triggers any detector other than \(s_{N}\). If no light reaches any detectors in \(S\), then the swap can only be heralded as successful if a dark-count triggers detector \(s_{N}\), so \(P(C|\hat{\Pi}_{2})=(1-p_{d})^{N-1}p_{d}\). Then we have,
\[\widetilde{\Pi}_{c} =(1-p_{d})^{N}\hat{\Pi}_{1}+(1-p_{d})^{N-1}p_{d}(\hat{\Pi}_{1}+ \hat{\Pi}_{2})\] \[=(1-p_{d})^{N-1}\left(\prod_{s\in\bar{S}}|0\rangle_{s}\langle 0 |\right)\left[I_{s_{N}}-(1-p_{d})\left|0\rangle_{s_{N}}\langle 0|\right] \tag{25}\]
Different number of photons could _in principle_ be distinguishable by the detector, even if the experimenter cannot distinguish the detector states sufficiently well to obtain this information. We define a projector onto Fock states, \(\hat{\Pi}^{(n)}=\left(\prod_{s\in\bar{S}}|0\rangle_{s}\langle 0|\right)|n \rangle_{s_{N}}\langle n|\). If different Fock states are in principle distinguishable, then the transformation of \(\rho\), conditioned on the swap, ought to be,
\[\rho \rightarrow\operatorname{Tr}_{S}\left[\sum_{n=0}^{\infty}P\left( \hat{\Pi}^{(n)}|C\right)\frac{\hat{\Pi}^{(n)}\rho\hat{\Pi}^{(n)}}{P\left(\hat {\Pi}^{(n)}\right)}\right]\] \[=\operatorname{Tr}_{S}\left[\rho\sum_{n=0}^{\infty}\frac{P\left( \hat{\Pi}^{(n)}|C\right)}{P\left(\hat{\Pi}^{(n)}\right)}\hat{\Pi}^{(n)}\right] \tag{26}\]
Using Bayes' theorem we have,
\[=\frac{1}{P(C)}\operatorname{Tr}_{S}\left[\rho\sum_{n=0}^{\infty }P\left(C|\hat{\Pi}^{(n)}\right)\hat{\Pi}^{(n)}\right]\] \[=\frac{1}{P(C)}\operatorname{Tr}_{S}\left[\rho\widetilde{\Pi}_{ c}^{\prime}\right] \tag{27}\]
We then make the assumption that,
\[P\left(C|\hat{\Pi}^{(n)}\right)=\begin{cases}(1-p_{d})^{N}+(1-p_{d})^{N-1}p_{d},&\text{if $n>0$}\\ (1-p_{d})^{N-1}p_{d},&\text{if $n=0$}\end{cases} \tag{28}\]
Under this assumption one can show that \(\widetilde{\Pi}_{c}^{\prime}=\widetilde{\Pi}_{c}\), and it doesn't matter whether we use the transformation in Eq. 21 or in Eq. 26.
The characteristic function of \(\widetilde{\Pi}_{c}\) is given by,
\[\chi_{c}(\Lambda_{S})=\operatorname{Tr}_{S}\left[\widetilde{\Pi}_{c}D_{S}(\Lambda_{S})\right]\] \[=(1-p_{d})^{N-1}E(\Lambda_{\bar{S}})\cdot\left(\pi\delta^{(2)}(\Lambda_{s_{N}})-(1-p_{d})E(\Lambda_{s_{N}})\right) \tag{29}\]
Figure 8: _Left: We plot how the Bell value depends on amplitude noise (\(\sigma_{A}\)). The annotation indicates the number of parties. Right: We plot how the Bell value depends on the phase noise (\(\sigma_{\theta}\))._
Then we have that,
\[\rho_{c}=\frac{1}{P(C)}\operatorname{Tr}_{S}[\rho\widetilde{ \Pi}_{c}]=\\ \frac{1}{P(C)}\int_{\mathbb{R}^{4N}}D_{P}(-\Lambda_{P})\chi_{\rho}( \Lambda_{P},\Lambda_{S})\chi_{c}(-\Lambda_{S})\frac{d^{4N}\Lambda}{\pi^{2N}}. \tag{30}\]
In evaluating the above expression we have used Glauber's formula [26] to express \(\rho\) and \(\widetilde{\Pi}_{c}\) in terms of their characteristic functions (\(\chi_{\rho}\) and \(\chi_{c}\)),
\[\hat{O}=\int_{\mathbb{R}^{2n}}\frac{d^{2n}B}{\pi^{n}}\chi_{O}(B)D^{\dagger}(B), \tag{31}\]
where \(n\) is the number of modes. We also used the facts,
\[\operatorname{Tr}_{i}[D(\Lambda_{i})]=\pi\delta^{(2)}(\Lambda_{i}) \tag{32}\]
\[D(\Lambda_{i})D(\Lambda_{j})=D(\Lambda_{i}+\Lambda_{j})\exp[-i \Lambda_{i}^{T}\omega\Lambda_{j}] \tag{33}\]
From Eq. 30 we may read off the characteristic function of the conditional state \(\rho_{c}\),
\[\chi_{\rho_{c}}(\Lambda_{P})=\frac{1}{\pi^{N}P(C)}\int_{\mathbb{R}^{2N}}\chi_{\rho}(\Lambda_{P},\Lambda_{S})\chi_{c}(-\Lambda_{S})d^{2N}\Lambda_{S}. \tag{34}\]
Inserting the expressions for \(\chi_{\rho}\) and \(\chi_{c}\), we may evaluate the conditional state as,
\[\chi_{\rho_{c}}(\Lambda_{P})=\frac{(1-p_{d})^{N-1}}{P(C)}\left[\chi_{\bar{S}}(\Lambda_{P})-(1-p_{d})\chi_{S}(\Lambda_{P})\right]. \tag{35}\]
\(\chi_{\bar{S}}\) and \(\chi_{S}\) are Gaussian and respectively given by
\[\chi_{\bar{S}}(\Lambda_{P}) =\frac{1}{\pi^{N}}\int_{\mathbb{R}^{2N}}\chi_{\rho}(\Lambda_{P},\Lambda_{S})E(\Lambda_{\bar{S}})\pi\delta^{(2)}(\Lambda_{s_{N}})d^{2N}\Lambda_{S}\] \[=2^{N-1}||\gamma_{\bar{S}}||^{-1/2}E\left[V_{\bar{S}},0\right](\Lambda_{P}) \tag{36}\] \[\chi_{S}(\Lambda_{P}) =\frac{1}{\pi^{N}}\int_{\mathbb{R}^{2N}}\chi_{\rho}(\Lambda_{P},\Lambda_{S})E(\Lambda_{S})d^{2N}\Lambda_{S}\] \[=2^{N}||\gamma_{S}||^{-1/2}E\left[V_{S},0\right](\Lambda_{P}). \tag{37}\]
Where the brackets \(||.||\) refer to the determinant and,
\[E\left[V,\bar{x}\right](B) =\exp\left[-\frac{1}{2}B^{T}\Omega V\Omega^{T}B-i(\Omega\bar{x})^{T}B\right],\] \[\gamma_{\bar{S}} =\sigma_{\bar{S}}+I,\ \ \gamma_{S}=\sigma_{S}+I,\] \[V_{\bar{S}} =\sigma_{P}-K_{\bar{S}}\ \gamma_{\bar{S}}^{-1}\ K_{\bar{S}}^{T},\] \[V_{S} =\sigma_{P}-K_{S}\ \gamma_{S}^{-1}\ K_{S}^{T}. \tag{38}\]
The normalization \(P(C)\) can be obtained by demanding that \(\chi_{\rho_{c}}(\Lambda_{P}=0)=1\). \(E\left[V,\bar{x}\right](B)\) is the characteristic function of a Gaussian state with covariance matrix \(V\) and centred on position \(\bar{x}\) in phase space.
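The matrices in Eq. (38) are ordinary Schur-complement-type expressions and can be read off directly from the block structure (19)-(20). A small sketch (ours; NumPy assumed, modes ordered \(P\), then \(\bar{S}\), then \(s_{N}\)):

```python
import numpy as np

def conditional_blocks(sigma, N):
    """gamma and V matrices of Eq. (38), given the 4N x 4N covariance matrix sigma."""
    nP = 2 * N                                   # quadrature dimension of the P modes
    sigma_P  = sigma[:nP, :nP]
    K_S      = sigma[:nP, nP:]                   # coupling of P to all of S
    K_Sbar   = K_S[:, :-2]                       # coupling of P to S without s_N
    sigma_S  = sigma[nP:, nP:]
    sigma_Sb = sigma_S[:-2, :-2]

    gamma_Sb = sigma_Sb + np.eye(2 * N - 2)
    gamma_S  = sigma_S + np.eye(2 * N)
    V_Sb = sigma_P - K_Sbar @ np.linalg.solve(gamma_Sb, K_Sbar.T)
    V_S  = sigma_P - K_S @ np.linalg.solve(gamma_S, K_S.T)
    return gamma_Sb, gamma_S, V_Sb, V_S

# shape check with a (trivial) vacuum covariance; in practice use sigma from Eqs. (6)-(7)
print([m.shape for m in conditional_blocks(np.eye(8), 2)])
```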
We now derive a closed-form expression for the correlator \(\left\langle\prod_{p\in P}M_{p}^{(n_{p})}\right\rangle\), describing correlations between the measurement outcomes obtained by the N parties. The characteristic function of the observable \(\prod_{p\in P}M_{p}^{(n_{p})}\) is given by,
\[\chi_{M}\left(\Lambda,X_{P}\right)\\ =\prod_{p\in P}\left\{\pi\delta^{(2)}\left(\Lambda_{p}\right)-2(1 -p_{d})E\left[I,-2X_{p}^{(n_{p})}\right](\Lambda_{p})\right\}. \tag{39}\]
As we will show in the next section, when amplitude or phase noise is present, then we should instead use the characteristic function,
\[\chi_{M}\left(\Lambda_{P},X_{P}\right)\\ =\prod_{p\in P}\left\{\pi\delta^{(2)}\left(\Lambda_{p}\right)-2(1 -p_{d})E\left[\Delta_{p}^{(n_{p})},-2X_{p}^{(n_{p})}\right](\Lambda_{p})\right\}, \tag{40}\]
where \(\Delta_{p}^{(n_{p})}\) is the covariance matrix describing a noisy displacement for party \(p\). We form the covariance matrix \(\Delta_{P}\), describing the statistics of the noisy displacements for all N modes. We assume no correlation between noise in different modes, and \(\Delta_{P}\) is therefore block diagonal. The above product is rewritten as a sum over products,
\[\chi_{M}\left(\Lambda_{P},X_{P}\right)=\sum_{d}[-2(1-p_{d})]^{|d|}\prod_{p\in P }K_{p}^{(d_{p})}, \tag{41}\]
where the sum runs over all binary lists \(d=(d_{p_{1}},d_{p_{2}},\ldots,d_{p_{N}})\). \(|d|\) is the sum of \(d\), i.e. the number of ones in the list. \(K_{p}^{(d_{p})}\) is the piecewise characteristic function defined as,
\[K_{p}^{(d_{p})}=\begin{cases}\pi\delta^{2}\left(\Lambda_{p}\right)&\text{ if }d_{p}=0\\ E\left[\Delta_{p}^{(n_{p})},-2X_{p}^{(n_{p})}\right](\Lambda_{p})&\text{ if }d_{p}=1\end{cases}. \tag{42}\]
Given a Gaussian state \(\rho_{G}\) with characteristic function \(E[\sigma_{G},0](\Lambda_{P})\), we evaluate the expectation value of the observable,
\[f\left(\sigma_{G},X_{P}\right)=\operatorname{Tr}\left\{\rho_{G} \prod_{p\in P}M_{p}^{(n_{p})}\right\}\\ =\frac{1}{\pi^{N}}\int_{\mathbb{R}^{2N}}E[\sigma_{G},0](-\Lambda_{ P})\chi_{M}\left(\Lambda_{P},X_{P}\right)d^{2N}\Lambda_{P}\\ =\sum_{d}[-8\pi(1-p_{d})]^{|d|}G\left[\sigma_{G}^{(d)}+\Delta_{P}^{(d)},0 \right]\left(2X_{P}^{(d)}\right) \tag{43}\]
\(\sigma_{G}^{(d)}\) is the submatrix of \(\sigma_{G}\) containing all the modes where \(d\) is \(1\), i.e. if \(d=(1,0,1,1)\) then we extract the covariance matrix describing the marginal distribution of modes \(p_{1}\), \(p_{3}\) and \(p_{4}\). Likewise, we have for the present example \(\Delta_{P}^{(d)}=\operatorname{Diag}\left(\Delta_{p_{1}}^{(n_{p_{1}})},\Delta_{p _{3}}^{(n_{p_{3}})},\Delta_{p_{4}}^{(n_{p_{4}})}\right)\)
and \(X_{P}^{(d)}=X_{p_{1}}^{(n_{p_{1}})}\bigoplus X_{p_{3}}^{(n_{p_{3}})}\bigoplus X_{p_{4 }}^{(n_{p_{4}})}\). We have also defined the normal distribution, \(G[V,\bar{x}](X)=\left[(2\pi)^{D}\|V\|\right]^{-1/2}e^{-\frac{1}{2}(X-\bar{x})^{T }V^{-1}(X-\bar{x})}\), where \(D\) is the dimension of \(V\). Applying this result to the conditional state, which is a sum of two Gaussians, we obtain
\[\left\langle\prod_{p\in P}M_{p}^{(n_{p})}\right\rangle =\operatorname{Tr}\left\{\rho_{c}\prod_{p\in P}M_{p}^{(n_{p})}\right\}\] \[=\frac{(1-p_{d})^{N-1}}{P(C)}\left[2^{N-1}\left\|\gamma_{\bar{S}}\right\|^{-\frac{1}{2}}f\left(V_{\bar{S}},X_{P}\right)\right.\] \[\left.-2^{N}(1-p_{d})\left\|\gamma_{S}\right\|^{-\frac{1}{2}}f\left(V_{S},X_{P}\right)\right]. \tag{44}\]
This is a closed-form expression for the correlator of the measurements.
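Equation (43) is a finite sum over the \(2^{N}\) binary lists \(d\), with \(G\) an ordinary multivariate normal density, so it can be coded directly. A sketch (ours; NumPy and SciPy assumed), written for noiseless displacements, in which case \(\Delta_{p}^{(n_{p})}\) is just the identity as in Eq. (39):

```python
import numpy as np
from itertools import product
from scipy.stats import multivariate_normal

def f(sigma_G, X_P, p_d=0.0):
    """Eq. (43) for noiseless displacements, i.e. Delta_p equal to the identity."""
    N = len(X_P) // 2
    total = 0.0
    for d in product((0, 1), repeat=N):
        idx = [2 * p + q for p in range(N) if d[p] for q in (0, 1)]
        if not idx:
            total += 1.0                       # |d| = 0 term
            continue
        V = sigma_G[np.ix_(idx, idx)] + np.eye(len(idx))
        dens = multivariate_normal(mean=np.zeros(len(idx)), cov=V).pdf(2 * X_P[idx])
        total += (-8 * np.pi * (1 - p_d)) ** sum(d) * dens
    return total

# single party in vacuum with displacement (x, y) = (0.3, 0.3):
# expected <M> = 1 - 2*exp(-(x^2 + y^2)) ~ -0.671
print(f(np.eye(2), np.array([0.3, 0.3])))
```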
### Loss
A Gaussian transformation transforms the quadrature operators as \(Q\to SQ+d\), where \(S\) is a symplectic matrix, i.e. \(S\Omega S^{T}=\Omega\), and \(d\) is a displacement [25; 26]. Correspondingly, one can show that under a Gaussian transformation, the characteristic function transforms as,
\[\chi(\Lambda)\rightarrow\exp\left[id^{T}\Omega\Lambda\right]\chi(S^{-1} \Lambda). \tag{45}\]
We note that \(S^{-1}=\Omega^{T}S^{T}\Omega\). We model loss, acting on the optical modes of the system, by mixing said modes with a set of empty (groundstate) environmental modes, and subsequently trace out the environmental modes. Let the modes be ordered as \(\Lambda=\Lambda_{P}\oplus\Lambda_{S}\oplus\Lambda_{E}\), where \(\Lambda_{E}\) are the conjugate quadratures for the environmental modes. We assume there is one environmental mode for each system mode (\(S\), \(P\)). The system modes and environmental modes are mixed using beamsplitter interactions, described by the symplectic matrix \(U_{\eta}\),
\[U_{\eta}=\begin{pmatrix}G_{\eta}^{1/2}&-\sqrt{I-G_{\eta}}\\ \sqrt{I-G_{\eta}}&G_{\eta}^{1/2}\end{pmatrix}, \tag{46}\]
By using Eq. 31, Eq. 45, and \(U_{\eta}\), we obtain the map corresponding to loss acting on the system modes. This map transforms the characteristic function as,
\[\chi(\Lambda)\rightarrow\chi\left(G_{\eta}^{1/2}\Lambda\right)\exp\left[- \frac{1}{2}\Lambda^{T}(I-G_{\eta})\Lambda\right], \tag{47}\]
Eq. 17 can be derived from this mapping, and it can also be used to show that detector loss can be commuted through the interferometer B, given that all detectors have the same efficiency.
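As an illustration, for a zero-mean Gaussian state the map in Eq. 47 amounts to a simple update of the covariance matrix. The sketch below assumes the convention \(E[\sigma,0](\Lambda)=\exp(-\frac{1}{2}\Lambda^{T}\sigma\Lambda)\), so that the vacuum covariance is the identity; `eta` holds one transmission per mode and is our own notation.

```python
import numpy as np

def apply_loss(sigma, eta):
    """Covariance update implied by Eq. (47) for a zero-mean Gaussian state.

    sigma : 2N x 2N covariance matrix (two quadratures per mode)
    eta   : length-N array of transmissions, one per mode
    """
    G = np.diag(np.repeat(np.asarray(eta, dtype=float), 2))  # diagonal G_eta
    sqrtG = np.sqrt(G)
    return sqrtG @ sigma @ sqrtG + np.eye(len(sigma)) - G
```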
### Phase and amplitude noise
We now evaluate the effect of phase and amplitude noise on the computed correlators. The optical state \(\rho\) is perturbed in phase by the environment; we model this by stochastic rotations in phase space, \(\rho=\int d^{N}\mathbf{\theta}P(\mathbf{\theta})R(\mathbf{\theta})\rho_{0}R(-\mathbf{\theta})\), where \(\rho_{0}\) is the unperturbed state, \(\mathbf{\theta}=\left(\theta_{p_{1}}\theta_{p_{2}}\ldots\theta_{p_{N}}\right)\) is a vector of stochastic rotation angles, and \(R(\mathbf{\theta})\) is the rotation operator \(R(\mathbf{\theta})=\prod_{p\in P}R_{p}\left(\theta_{p}\right)\). We shift this stochastic rotation from the state onto the observable:
\[\left\langle\prod_{p\in P}M_{p}^{(n_{p})}\right\rangle= \operatorname{Tr}\left\{\prod_{p\in P}M_{p}^{(n_{p})}\rho\right\}\] \[=\operatorname{Tr}\left\{\prod_{p\in P}M_{p}^{(n_{p})}\int d^{N} \mathbf{\theta}P(\mathbf{\theta})R(\mathbf{\theta})\rho_{0}R(-\mathbf{\theta})\right\}\] \[=\operatorname{Tr}\left\{\int d^{N}\mathbf{\theta}P(\mathbf{\theta})R(-\bm {\theta})\prod_{p\in P}M_{p}^{(n_{p})}R(\mathbf{\theta})\rho_{0}\right\}\] \[=\operatorname{Tr}\left\{\prod_{p\in P}\int d\theta_{p}P\left(\theta _{p}\right)R_{p}\left(-\theta_{p}\right)M_{p}^{(n_{p})}R_{p}\left(\theta_{p} \right)\rho_{0}\right\}\] \[=\operatorname{Tr}\left\{\prod_{p\in P}\widetilde{M}_{p}^{(n_{p})} \rho_{0}\right\} \tag{48}\]
Here \(\widetilde{M}_{p}^{(n_{p})}\) is the noisy observable. By factorizing the probability as \(P(\mathbf{\theta})=\prod_{p\in P}P(\theta_{p})\), we have tacitly assumed that there is no correlation in the phase noise acting on different modes. Inserting the expression for the observable \(M_{p}^{(n_{p})}\), we have
\[R_{p}\left(-\theta_{p}\right)M_{p}^{(n_{p})}R_{p}\left(\theta_{p }\right)\] \[=I_{p}-2\left(1-p_{d}\right)R_{p}\left(-\theta_{p}\right)\left|-X_ {p}^{(n_{p})}\right\rangle_{p}\!\left\langle-X_{p}^{(n_{p})}\right|R_{p}\left( \theta_{p}\right) \tag{49}\]
For a coherent state \(\left|-X_{p}^{(n_{p})}\right\rangle\), a small rotation is equivalent to a displacement acting orthogonally to the amplitude vector \(-X_{p}^{(n_{p})}\). An orthogonal vector can be constructed by acting with the symplectic form: \(-\omega(-X_{p}^{(n_{p})})\). With this in mind, we make the substitution:
\[R_{p}\left(\theta_{p}\right)\to D_{p}\left(\theta_{p}\omega X_{p}^{(n_{p})}\right) \tag{50}\]
Imprecision in the measurement process, such as a noisy displacement, might lead to noise in the amplitude. We include this by also applying a stochastic displacement along the amplitude vector \(X_{p}^{(n_{p})}\). This stochastic displacement is given as a fraction \(r_{p}\) of the amplitude vector \(X_{p}^{(n_{p})}\), i.e. the stochastic displacement is \(r_{p}X_{p}^{(n_{p})}\). \(r_{p}\) is referred to as the relative amplitude. The noisy observable for party \(p\) is then given as,
\[\widetilde{M}_{p}^{(n_{p})} =\int d\theta_{p}dr_{p}P\left(\theta_{p},r_{p}\right)D_{p}\left(- \theta_{p}\omega X_{p}^{(n_{p})}\right)D_{p}\left(-r_{p}X_{p}^{(n_{p})}\right)M _{p}^{(n_{p})}D_{p}\left(r_{p}X_{p}^{(n_{p})}\right)D_{p}\left(\theta_{p}\omega X _{p}^{(n_{p})}\right)\] \[=I-2\left(1-p_{d}\right)\int P\left(\theta_{p},r_{p}\right)\cdot \left|-\left(1+r_{p}+\theta_{p}\omega\right)X_{p}^{(n_{p})}\right\rangle\left \langle-\left(1+r_{p}+\theta_{p}\omega\right)X_{p}^{(n_{p})}\right|d\theta_{p }dr_{p}\] \[=I-2(1-p_{d})\beta_{p}^{(n_{p})} \tag{51}\]
\(P\left(\theta_{p},r_{p}\right)\) is the distribution over displacements, and we have introduced the state,
\[\beta_{p}^{(n_{p})}=\int P\left(\theta_{p},r_{p}\right)\cdot\left|-\left(1+r_ {p}+\theta_{p}\omega\right)X_{p}^{(n_{p})}\right\rangle\left\langle-\left(1+r _{p}+\theta_{p}\omega\right)X_{p}^{(n_{p})}\right|d\theta_{p}dr_{p}. \tag{52}\]
We model \(P\left(\theta_{p},r_{p}\right)\) as a Gaussian, given by
\[P\left(\theta_{p},r_{p}\right)=\left[(2\pi)^{2}\left\|\Sigma_{p}\right\| \right]^{-1/2}\exp\left[-\frac{1}{2}\left(r_{p}\ \ \theta_{p}\right)\Sigma_{p}^{-1}\begin{pmatrix}r_{p}\\ \theta_{p}\end{pmatrix}\right]. \tag{53}\]
The covariance matrix is chosen to be diagonal
\[\Sigma_{p}=\left(\begin{array}{cc}V_{A}&0\\ 0&V_{\theta}\end{array}\right). \tag{54}\]
\(V_{A}\) and \(V_{\theta}\) are the relative amplitude and phase angle variance respectively. \(\beta_{p}^{(n_{p})}\) has a characteristic function given by,
\[\chi_{\beta_{p}^{(n_{p})}} =\mathrm{Tr}\left\{\beta_{p}^{(n_{p})}D_{p}\left(\Lambda_{p} \right)\right\}\] \[=\int P\left(\theta_{p},r_{p}\right)\cdot E\left[I,-\left(1+r_{p} +\theta_{p}\omega\right)2X_{p}^{(n_{p})}\right]\left(\Lambda_{p}\right)d \theta_{p}dr_{p}\] \[=E\left[I+V_{A}\left(2X_{p}^{(n_{p})}\right)\otimes\left(2X_{p}^{ (n_{p})}\right)^{T}+V_{\theta}\left(\omega^{T}2X_{p}^{(n_{p})}\right)\otimes \left(\omega^{T}2X_{p}^{(n_{p})}\right)^{T},-2X_{p}^{(n_{p})}\right]\left( \Lambda_{p}\right). \tag{55}\]
So the effect of amplitude and phase noise is to broaden the phase space distribution of \(\beta_{p}^{(n_{p})}\) along \(2X_{p}^{(n_{p})}\) and \(\omega^{T}2X_{p}^{(n_{p})}\). We define the covariance matrix of the state \(\beta_{p}^{(n_{p})}\) as \(\Delta_{p}^{(n_{p})}\),
\[\Delta_{p}^{(n_{p})} =I+V_{A}\left(2X_{p}^{(n_{p})}\right)\otimes\left(2X_{p}^{(n_{p}) }\right)^{T}\] \[+V_{\theta}\left(\omega^{T}2X_{p}^{(n_{p})}\right)\otimes\left( \omega^{T}2X_{p}^{(n_{p})}\right)^{T} \tag{56}\]
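Eq. (56) can be written down directly; a minimal sketch, assuming \(\omega\) is the \(2\times 2\) single-mode symplectic form and \(X_{p}^{(n_{p})}\) a length-2 quadrature vector (names are ours):

```python
import numpy as np

omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # single-mode symplectic form

def noisy_covariance(X_p, V_A, V_theta):
    """Covariance matrix Delta_p of Eq. (56) for one measurement setting."""
    v = 2.0 * np.asarray(X_p, dtype=float)     # 2 X_p^(n_p)
    w = omega.T @ v                            # direction broadened by phase noise
    return np.eye(2) + V_A * np.outer(v, v) + V_theta * np.outer(w, w)
```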
### A2
Let \(n\) be a binary list of measurement settings, and \(g\) a binary list of measurement outcomes for the detectors in \(P\), where click corresponds to 1 and no click corresponds to 0. We may then compute the probability of obtaining the outcomes \(g\) using the characteristic function \(\chi_{\rho_{c}}\). This probability is given by the expression,
\[P_{Q}(g|n) =\frac{\left(1-p_{d}\right)^{N-1}}{P(C)}\left[2^{N-1}\left\|\gamma _{S}\right\|^{-\frac{1}{2}}h_{g}\left(V_{\bar{S}}\right)\right.\] \[\left.-2^{N}\left(1-p_{d}\right)\left\|\gamma_{S}\right\|^{-\frac{ 1}{2}}h_{g}\left(V_{S}\right)\right], \tag{57}\]
where
\[h_{g}(V) =\left[4\pi\left(1-p_{d}\right)\right]^{|g|}\sum_{b}\left[-4\pi \left(1-p_{d}\right)\right]^{|b|}G\left[V^{(b+\bar{g})}\right.\] \[\left.+\Delta_{P}^{(b+\bar{g})},2X_{P}^{(b+\bar{g})}\right]. \tag{58}\]
\(\bar{g}\) is the negation of \(g\), i.e. we replace 1 by 0 and vice versa. The measurement settings \(n\) define the arrays \(\Delta_{P}\) and \(X_{P}\). The sum runs over all binary lists \(b\) of length N, satisfying the constraint that \(b\) takes the value zero in positions where \(g\) takes the value zero. E.g. if \(g=\left(1,0,0,1\right)\), then the sum would run over the lists \(b\in\left\{\left(0,0,0,0\right),\left(1,0,0,0\right),\left(0,0,0,1\right), \left(1,0,0,1\right)\right\}\).
\(V^{(b+\bar{g})}\) is the submatrix of the covariance matrix \(V\), containing all the modes where the vector \(b+\bar{g}\) takes the value 1, e.g. if \(b+\bar{g}=\left(0,1,1,1\right)\) then the marginal covariance matrix describing modes \(p_{2}\), \(p_{3}\) and \(p_{4}\) is extracted. Marginal probabilities for a subset of parties A can be extracted from \(P_{Q}(g|n)\) by summing over outcomes for the remaining parties B. The measurement settings for subset B should be fixed during this summation, however the choice of settings for B is arbitrary owing to the no-signalling property of quantum mechanics [10].
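The constraint on the lists \(b\) in Eq. (58) is easy to enumerate; a small helper (ours, for illustration) that reproduces the worked example above:

```python
from itertools import product

def constrained_lists(g):
    """All binary lists b that are zero wherever g is zero (cf. Eq. 58)."""
    support = [i for i, gi in enumerate(g) if gi == 1]
    lists = []
    for bits in product([0, 1], repeat=len(support)):
        b = [0] * len(g)
        for pos, bit in zip(support, bits):
            b[pos] = bit
        lists.append(tuple(b))
    return lists

# constrained_lists((1, 0, 0, 1)) returns the four lists quoted in the text:
# [(0, 0, 0, 0), (0, 0, 0, 1), (1, 0, 0, 0), (1, 0, 0, 1)]
```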
We then want to determine whether the array \(P_{Q}(g|n)\) can be expressed as a convex sum of local response functions. Let \(L\left(g_{p}|n_{p},\lambda_{k}\right)\) be the local response function for party \(p\), determined by the hidden variables \(\lambda_{k}\). The response function gives the probability of party \(p\) obtaining a particular outcome \(g_{p}\), given the measurement setting \(n_{p}\) and hidden variables \(\lambda_{k}\). We determine whether there exists a set of coefficients \(c_{k}\) such that [10]:
\[P_{Q}(g|n) =\sum_{k}c_{k}\prod_{p\in P}L\left(g_{p}|n_{p},\lambda_{k}\right)\] \[\sum_{k}c_{k} =1\] \[c_{k} \geq 0 \tag{59}\]
\(c_{k}\) is interpreted as the probability that the hidden variables \(\lambda_{k}\) are shared by the parties in a given measurement round. We use the set of deterministic response functions, i.e. each response function can be written as a Kronecker delta function,
\[L\left(g_{p}|n_{p},\lambda_{k}\right)=\delta(g_{p},g_{n_{p},\lambda_{k}}) \tag{60}\]
\(g_{p}\) is a potential outcome for party \(p\) and \(g_{n_{p},\lambda_{k}}\) is the outcome that is actually obtained, given the hidden variables \(\lambda_{k}\) and the setting \(n_{p}\). Whether the set of requirements in Eq. 59 admits a solution is determined using the linprog module of the SciPy 1.8.1 package in Python. When no solution is present, we know that the array of probabilities \(P_{Q}(g|n)\), determined by the quantum state, does not admit a local hidden variable model. In this case \(P_{Q}(g|n)\) lies outside the Bell polytope. However, when a solution _is_ present, we know that the system can be described by a local hidden variable model, and no Bell inequality can be violated.
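A sketch of this feasibility check (not the authors' code): collect the deterministic response functions into an equality-constraint matrix and ask `linprog` for any normalised, non-negative weight vector reproducing \(P_{Q}(g|n)\). The flattening of the \((g,n)\) array and all names below are our own choices.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def admits_lhv_model(P_Q, N, n_settings=2, n_outcomes=2):
    """Check whether the behaviour P_Q(g|n) admits a local hidden variable model.

    P_Q is assumed to be indexed as P_Q[g][n], with g and n tuples of
    per-party outcomes and settings (this flattening is our own choice).
    """
    # One deterministic strategy per party = one outcome for every setting.
    party_strategies = list(product(range(n_outcomes), repeat=n_settings))
    joint_strategies = list(product(party_strategies, repeat=N))

    settings = list(product(range(n_settings), repeat=N))
    outcomes = list(product(range(n_outcomes), repeat=N))

    rows, target = [], []
    for n in settings:
        for g in outcomes:
            # Entry is 1 if the joint strategy deterministically yields g under n.
            rows.append([int(all(strat[p][n[p]] == g[p] for p in range(N)))
                         for strat in joint_strategies])
            target.append(P_Q[g][n])
    rows.append([1.0] * len(joint_strategies))   # normalisation: sum_k c_k = 1
    target.append(1.0)

    res = linprog(c=np.zeros(len(joint_strategies)),
                  A_eq=np.array(rows), b_eq=np.array(target),
                  bounds=(0, None), method="highs")
    return res.success   # True -> a local hidden variable model exists
```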
|
2302.10887 | The configurable tree graph (CT-graph): measurable problems in partially
observable and distal reward environments for lifelong reinforcement learning | This paper introduces a set of formally defined and transparent problems for
reinforcement learning algorithms with the following characteristics: (1)
variable degrees of observability (non-Markov observations), (2) distal and
sparse rewards, (3) variable and hierarchical reward structure, (4)
multiple-task generation, (5) variable problem complexity. The environment
provides 1D or 2D categorical observations, and takes actions as input. The
core structure of the CT-graph is a multi-branch tree graph with arbitrary
branching factor, depth, and observation sets that can be varied to increase
the dimensions of the problem in a controllable and measurable way. Two main
categories of states, decision states and wait states, are devised to create a
hierarchy of importance among observations, typical of real-world problems. A
large observation set can produce a vast set of histories that impairs
memory-augmented agents. Variable reward functions allow for the easy creation
of multiple tasks and the ability of an agent to efficiently adapt in dynamic
scenarios where tasks with controllable degrees of similarities are presented.
Challenging complexity levels can be easily achieved due to the exponential
growth of the graph. The problem formulation and accompanying code provide a
fast, transparent, and mathematically defined set of configurable tests to
compare the performance of reinforcement learning algorithms, in particular in
lifelong learning settings. | Andrea Soltoggio, Eseoghene Ben-Iwhiwhu, Christos Peridis, Pawel Ladosz, Jeffery Dick, Praveen K. Pilly, Soheil Kolouri | 2023-01-21T21:05:52Z | http://arxiv.org/abs/2302.10887v1 | # The configurable tree graph (CT-graph):
###### Abstract
This paper introduces a set of formally defined and transparent problems for reinforcement learning algorithms with the following characteristics: (1) variable degrees of observability (non-Markov observations), (2) distal and sparse rewards, (3) variable and hierarchical reward structure, (4) multiple-task generation, (5) variable problem complexity. The environment provides 1D or 2D categorical observations, and takes actions as input. The core structure of the CT-graph is a multi-branch tree graph with arbitrary branching factor, depth, and observation sets that can be varied to increase the dimensions of the problem in a controllable and measurable way. Two main categories of states, decision states and wait states, are devised to create a hierarchy of importance among observations, typical of real-world problems. A large observation set can produce a vast set of histories that impairs memory-augmented agents. Variable reward functions allow for the easy creation of multiple tasks and the ability of an agent to efficiently adapt in dynamic scenarios where tasks with controllable degrees of similarities are presented. Challenging complexity levels can be easily achieved due to the exponential growth of the graph. The problem formulation and accompanying code provide a fast, transparent, and mathematically defined set of configurable tests to compare the performance of reinforcement learning algorithms, in particular in lifelong learning settings.
## 1 Introduction
Many real-world problems are characterized by a large number of observations, confounding and spurious correlations, partially observable states, and distal, dynamic rewards with hierarchical reward structures. Such conditions make it hard for both animals and machines to learn complex skills. The learning process requires discovering what is important and what can be ignored, how the reward function is structured, and how to reuse knowledge across different tasks that share common properties. For these reasons, the application of standard reinforcement learning (RL) algorithms (Sutton and Barto, 2018) to solve structured problems is often not effective. Limitations of current RL algorithms include the problem of exploration with sparse rewards (Pathak et al., 2017), dealing with partially observable Markov decision problems (POMDP) (Ladosz et al., 2021), coping with large amounts of confounding stimuli (Thrun, 2000; Kim et al., 2019), and reusing skills for efficiently learning multiple tasks in a lifelong learning setting (Mendez and Eaton, 2020).
Standard reinforcement learning algorithms are best suited when the problem can be formulated as a single-task, fully observable Markov decision problem (MDP). Under these assumptions, with complete observability and with static and frequent rewards, deep reinforcement learning (DRL) (Mnih et al., 2015; Li, 2017) has gained popularity due to its ability to learn an approximated Q-value function directly from raw pixel data on the Atari 2600 platform. This and similar algorithms stack multiple frames to derive states of an MDP, and use a basic \(\epsilon\)-greedy exploration policy. In more complex cases with partial observability and sparse rewards, extensions have been proposed to include more advanced exploration techniques (Ladosz et al., 2022), e.g. (Pathak et al., 2017; Burda et al., 2018; Ecoffet et al., 2019), and memory systems (Hausknecht and Stone, 2015; Heess et al., 2015; Parisotto and Salakhutdinov, 2017).
The need to test RL algorithms in more challenging problems has led the community to look for increasingly more complex benchmarks (Justesen et al., 2019). Often, first person view (FPV) games are used to test the ability of RL to cope with partial observability, while games where rewards occur rarely, e.g., Atari Montezuma, are used to test advanced exploration techniques (Ecoffet et al., 2019). However, videogame-based benchmarks have several limitations. Their degree of observability is often unknown or hard to assess. The degree of sparsity and distance of rewards, and their search space, might not be clearly defined or measurable. Games might also not be easily configured to express variations of tasks or difficulty, thus limiting tests to a fixed problem complexity and static conditions. Finally, many games are computationally expensive because they generate complex visual fields without necessarily requiring rich policies. As a consequence, algorithms can be assessed only by testing them across a large range of such games, thus requiring considerable computational effort and yet not providing a mathematically defined metric of performance or statistical significance. Moreover, because the underlying MDP is unknown, it is also unclear how the performance of an RL algorithm in a suite of games maps to other real-world problems. Unfortunately, the need to test RL algorithms on a large set of computationally expensive benchmarks slows development and significantly increases costs. Often, only large research groups with abundant computational resources can convincingly show the strength of their algorithms. Recently developed benchmarks such as Minigrid (Chevalier-Boisvert et al., 2018), ProcGen (Cobbe et al., 2020) and Minihack (Samvelyan et al., 2021) address some of these concerns, offering highly configurable, fast and procedurally generated scenarios. While these environments are increasingly flexible and powerful, they do not have fully measurable search spaces, episode lengths, or reward sparsity.
This paper introduces a mathematically defined and configurable environment. The environment allows for precisely defined metrics and measurements such as: the degree of partial observability, measurements of distal and sparse rewards, the size of the search space, a defined hierarchy of skills, and variable reward functions.
The proposed environment is an abstraction of a decision process informed by visual stimuli and simulated with a configurable tree graph with a set of configurable parameters. At the core of the system is a configurable tree graph that can be expanded both in the depth (i.e. the length of the sequence of actions to complete one episode) and width (i.e. the number of actions that an agent can choose from). The environment provides observations as vectors (one-hot vectors suitable for tabular methods) or matrices (2D images suitable for approximate methods). The proposed environment is named the _Configurable Tree graph_, or CT-graph. Table 1 reports a list of properties of the CT-graph that were designed to address the limitations of video game benchmarks.

\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Property** & **CT-graph** & **Videogame benchmarks** \\ \hline Multiple tasks & tasks can be randomly generated & fixed or hard to define/select \\ Task similarity & defined and measurable & not defined/measurable \\ Length of episode & configurable & often unknown \\ Input & configurable data set & defined by the game \\ Output & configurable number of actions & defined by the game \\ Observability & configurable MDP to POMDP & often unknown \\ Sparsity of reward & configurable & often predefined and fixed \\ Computational cost & low & often large/unrelated to complexity \\ Size of search space & configurable and known & often fixed and unknown \\ Optimal policy & configurable and known & often unknown \\ \hline \end{tabular}
\end{table}
Table 1: List of properties that were built in the CT-graph to address the limitations of video game benchmarks.

The properties of the CT-graph make this environment suitable to assess the following learning properties of RL algorithms:
* learning with variable and measurable degrees of partial observability;
* learning action sequences of adjustable length and increasing memory requirements;
* learning with adjustable and measurable sparsity of rewards;
* learning multiple tasks and testing speed of adaptation (lifelong learning scenarios);
* learning multiple tasks where the knowledge of task similarity is a required metric (meta-learning or multi-task learning);
* learning hierarchical knowledge representation and skill-reuse for fast adaptation to dynamics rewards (lifelong learning scenarios);
* testing attention mechanisms to identify key states from noise or confounding states;
* testing meta-learning approaches for optimised exploration policies;
* learning a model of the environment;
* learning a combination of innate and learned knowledge to cope with invariant and variant aspects of the environment.
Novel reinforcement learning algorithms that implement lifelong learning, incremental learning and optimal adaptation abilities will need to demonstrate such a set of skills and properties.
The rest of the paper is organised as follows. Section 2 presents the definition and main aspects of the CT-graph. Section 3 "Learning challenges" illustrates examples of various CT-graph configurations. A discussion section (4) presents concepts that have inspired the idea of the CT-graph and similar benchmarks.
## 2 The configurable tree graph (CT-graph)
The CT-graph represents a family of tree graphs. The objective is to create a learning problem for an agent that learns an optimal sequence of stimuli
and actions that maximise the reward over time. An optimal sequence is a task and is defined as the sequence to reach one particular leaf node in the graph. Configurable parameters include the graph's depth, the branching factor of the tree graph, the reward function, and others, making the CT-graph a large family of problems with different sizes and complexity. The unit structure of the graph, which can be repeated to obtain an arbitrary depth, is illustrated in Fig. 1(Left).
One execution of a CT-graph episode is defined as the unrolling of a sequence of stimuli from the set \(\mathcal{O}\) and actions from the set \(\mathcal{A}\), starting at the home location and ending either at a fail state or at an end state (graph-end) (Fig. 1).
The nodes in the CT-graph belong to five types: _home_, _wait_, _decision_, _graph-end_, and _fail_. Thus, the set \(\mathcal{S}\) of all states is the union of the subsets \(\mathcal{S}^{\text{H}}\cup\mathcal{S}^{\text{W}}\cup\mathcal{S}^{\text{D}} \cup\mathcal{S}^{\text{E}}\cup\mathcal{S}^{\text{F}}\). \(\mathcal{S}^{\text{E}}\) and \(\mathcal{S}^{\text{F}}\) are terminal states.
* **Home state**: the starting state of each episode and root of the tree graph. Any action leads to the first wait state.
* **Decision state**: a state in which the graph forks into \(b\) branches, requiring the agent to make a choice out of the \(b\) options (or branches) of the graph by selecting one of \(b\) different actions in the subset \(\mathcal{A}^{\text{D}}=\{A^{1},..,A^{b}\}\). The agent will transition to a fail state if it selects \(A^{0}\).
* **Wait state**: a state in which only action \(A^{0}\) leads the agent to the next decision state in the graph with probability \(1-p\) and leaves the agent in the same wait state with probability \(p\). Other actions lead to the fail state, terminating the episode. Wait states are located before and after decision states.
* **End state**: a leaf node in the tree graph. It is visited by an agent after having traversed the entire depth of the graph. There are \(b^{d}\) end states.
* **Fail state**: if \(A^{0}\) is taken in a decision state or \(\neg A^{0}\) is taken in a wait state, the terminal state _fail_ is reached, which returns the agent to the home state.

Figure 1: CT-graph illustrations. (Left): Minimal CT-graph unit. The transition graph of the smallest possible graph with \(\langle b=2,d=1\rangle\) is shown. (Right): The CT-graph unit on the left is combined to create a larger graph with \(d=2\). While these two examples are simple RL problems, partial observability, the wait probability \(p\), and the ability to extend the graph to arbitrary depth allow for the creation of arbitrarily complex and difficult problems.
Observations (\(\mathcal{O}\)). At each step \(t\), the environment provides the agent with an observation \(O_{t}\), which can be a vector (1D setting for standard RL) or an image of dimension \(r\times r\) (2D setting for deep RL). A stochastic process \(X_{(t,s)}\) maps states in \(\mathcal{S}\) to observations in \(\mathcal{O}\) and determines the level of observability. The set \(\mathcal{O}\) is also divided into five subsets corresponding to those in \(\mathcal{S}\): \(\mathcal{O}=\mathcal{O}^{\mathrm{H}}\cup\mathcal{O}^{\mathrm{W}}\cup\mathcal{O}^{\mathrm{D}}\cup\mathcal{O}^{\mathrm{E}}\cup\mathcal{O}^{\mathrm{F}}\).
Rewards. The reward provided by the environment can be found only at an end state. Changing the location of the reward has the effect of changing the reward function only partially and hierarchically: navigation actions to the end states remain constant but the sequence of actions at decision states changes.
In summary, a CT-graph is defined by the tuple
\[\langle b,d,\mathcal{O},\{X_{(t,s)}\},p,g\rangle \tag{1}\]
where \(b\in\mathbb{N},b\geq 2\) is an integer greater than or equal to 2 that defines the branching factor, i.e., how many choices are available at a decision state. \(d\in\mathbb{N}_{1}\) is an integer greater than or equal to 1 that defines the depth of the graph, or the number of decision states between the home and a graph-end. \(p\) is the probability of remaining in a wait state, resulting in an expected duration of the delays in between decision states equal to \(1/(1-p)\). \(g\) is the reward function that returns a reward. The set \(\mathcal{S}\) of states and the set \(\mathcal{A}\) of actions are implicitly defined by the tuple (1).
A CT-graph experiment typically involves the execution of many episodes, or trials. At each time step \(t\), the observation depends on the action \(a_{t-1}\) fed to the environment, on the state of the MDP, which is given by tracking the node visited by the agent in the tree graph, and on the stochastic process \(X_{(s,t)}\) that maps states to observations.
Fig. 2 illustrates two graphs with depth 2 and depth 3.
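To make these definitions concrete, the transition rule can be sketched as follows. This is a simplified illustration written by us, not the code from the accompanying repository; node identity is tracked by the number of decision states already passed and the branches chosen so far.

```python
import random

def step(node, action, b, d, p):
    """One transition of the CT-graph as described by the state definitions above.

    `node` is one of ('home',), ('wait', k, path), ('decision', k, path),
    ('end', path), ('fail',), where k counts decision states already passed
    and `path` records the branches chosen so far.
    """
    kind = node[0]
    if kind == 'home':                      # any action leads to the first wait state
        return ('wait', 0, ())
    if kind == 'wait':
        _, k, path = node
        if action != 0:                     # only A^0 is allowed in a wait state
            return ('fail',)
        if random.random() < p:             # remain in the wait state with probability p
            return node
        return ('end', path) if k == d else ('decision', k, path)
    if kind == 'decision':
        _, k, path = node
        if not 1 <= action <= b:            # A^0 (or an invalid action) fails
            return ('fail',)
        return ('wait', k + 1, path + (action,))
    return node                             # 'end' and 'fail' are terminal
```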
### Distal, sparse, and dynamic rewards
A CT-graph is initialised with a sequence of _optimal_ decision actions \(\mathcal{A}^{\mathrm{D}^{\star}}\doteq\langle A_{1},A_{2},...,A_{d}\rangle\) from the set \(\mathcal{A}^{\mathrm{D}}\doteq\mathcal{A}-A^{0}\), where \(\langle A_{1},A_{2},...,A_{d}\rangle\) is a random sequence of integers \(A\in\mathbb{N}\,|\,1\leq A\leq b\) and \((|\mathcal{A}^{\mathrm{D}^{\star}}|=d)\). This sequence defines the location of the reward in the graph. Note that \(\mathcal{A}^{\mathrm{D}^{\star}}\) is the optimal sequence of decision actions at the graph branching points (decision states), but not the complete optimal control sequence that includes a variable number of wait actions during execution.
### Hierarchy of policies
An agent in the CT-graph may apply policies with the following characteristics and effects:
* **Random policy**: The agent performs random actions. This leads to frequent visits to the fail state. Thus, the agent is unlikely to visit deep states, and therefore ever to see a reward.
* **Navigation policy**: The agent performs actions to avoid the fail state and thus is able to traverse the graph in its full depth. Such a policy increases significantly the chances to get a reward.
* **Optimal policy**: The agent is able to navigate the full graph and applies an optimal policy to maximise the reward by executing the \(A^{0}\) action at wait states and the optimal sequence \(\mathcal{A}^{\mathrm{D}^{\star}}\) at decision states.
One can interpret the navigation policy as the ability of moving through the graph, which is hard to achieve with a random policy. The optimal policy is the ability to both move through the graph and choose the specific trajectory that leads to the reward.

Figure 2: Examples of CT-graphs. (a) A depth 2 graph with branching 3 \(\langle b=3,d=2\rangle\). (b) A depth 3 graph with branching 2 \(\langle b=2,d=3\rangle\). The colour scheme indicates the different types of states. Small recursive arrows in the wait states indicate that the agent escapes those states with a probability of \(1-p\).
#### 2.2.1 Distal reward (\(\rho\))
A reward collected at the end of the graph is distal because it is a consequence of a (possibly long) sequence of actions. Given this property of the CT-graph, we consider the overall length of an episode as a measure of how distal the final reward is from the graph-home. The number of steps to navigate a graph, \(\rho\), is a stochastic value whose mean is given by
\[\mathbb{E}[\rho]=(1-p)^{-1}(d+1)+d+1\quad. \tag{2}\]
The CT-graph parameters can be set to obtain a large \(\mathbb{E}(\rho)\). For example, a relatively small graph with \(d=2\) and \(b=2\) can have a large \(\mathbb{E}(\rho)\) by setting \(p=1-10^{-2}\). In such a case, \(\mathbb{E}(\rho)=100\cdot(2+1)+(2+1)=303\), i.e., there are on average 303 steps between the start of the episode and the reward at the end of the graph.
#### 2.2.2 Sparsity of rewards
Rewards are sparse if the probability that an agent reaches a reward is low. If the reward function \(g\) provides one reward at one unique graph-end, then to obtain it the agent needs to select \(A^{0}\) in a wait state and the appropriate decision action \(A\in\mathcal{A}^{D}\) when at a decision state. The CT-graph allows for computing the following probabilities:
* \(P_{R}\): the probability of reaching the rewarding end state with a random policy;
* \(P_{E}\): the probability of reaching any end state with a random policy;
* \(P_{RNP}\): the probability of reaching the rewarding end state while employing a navigation policy (that avoids the fail state).
Given that the parameter \(p\) results in a variable length of an episode, the probability of reaching a reward with a random policy, \(P_{R}\), involves computing the probabilities of all possible trajectories. Thus,
\[P_{\text{R}}=\sum_{n=0}^{\infty}\frac{1}{(b+1)^{2d+1+n}}(1-p)^{d+1}p^{n}\frac{(n+d)!}{n!\cdot d!}=(b+1)\left(\frac{1-p}{(b+1)(b+1-p)}\right)^{d+1} \tag{3}\]
For brevity, we omit here the derivation of Eq. 3 that is reported in the Appendix.
The probability of reaching any end state, \(P_{\rm E}\) is \(P_{\rm R}\) times the number of end states \(b^{d}\):
\[P_{E}=P_{R}\cdot b^{d} \tag{4}\]
\(P_{\rm R}\) and \(P_{\rm E}\) can be very small even for small graphs. E.g., with \(b=2,d=2,p=0.9\), the probability of collecting a reward in one episode is \(3(0.1/6.3)^{3}=1/83,349\). In other words, applying a random policy, an agent is expected to stumble across a reward approximately once every \(83K\) episodes.
If an agent has acquired the skills to navigate the graph to end states (navigation policy, Section 2.2), the probability of reaching the optimal end state is the inverse of the number of end states. We define this value as the reward probability with a navigation policy (\(P_{\rm RNP}\))
\[P_{\rm RNP}=\frac{1}{b^{d}}\quad. \tag{5}\]
Table 2 provides examples of reward probabilities for different CT-graph configurations. It can be noted that the probability of obtaining a reward with a random policy quickly becomes very small as the parameters \(b\), \(d\), and \(p\) increase. An intelligent exploration policy will find a reward with a probability equal to or higher than \(P_{RNP}\).
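The quantities listed in Table 2 follow directly from Eqs. 2-5; the short helpers below (our own illustration, not part of the released environment) reproduce them.

```python
def episode_length(b, d, p):
    """Expected number of steps per episode, Eq. 2."""
    return (d + 1) / (1 - p) + d + 1

def p_reward_random(b, d, p):
    """Probability of reaching the rewarding end state with a random policy, Eq. 3."""
    return (b + 1) * ((1 - p) / ((b + 1) * (b + 1 - p))) ** (d + 1)

def p_any_end_random(b, d, p):
    """Probability of reaching any end state with a random policy, Eq. 4."""
    return p_reward_random(b, d, p) * b ** d

def p_reward_navigation(b, d):
    """Probability of reaching the rewarding end state with a navigation policy, Eq. 5."""
    return 1.0 / b ** d

# Second row of Table 2 (b=2, d=2, p=0.9):
print(p_reward_random(2, 2, 0.9))    # approx. 1.2e-5
print(p_any_end_random(2, 2, 0.9))   # approx. 4.8e-5
print(p_reward_navigation(2, 2))     # 0.25
```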
### Graph's dimensions
Given a CT-graph with parameters \(\langle b,d\rangle\), the size of the underlying MDP can be computed counting all wait states, decision states, end states, plus two states for the home and fail states:
\[|\mathcal{S}|=f(b,d)=\sum_{x=0}^{d}b^{x}+\sum_{x=0}^{d-1}b^{x}+b^{d}+2=2\sum_{x=0}^{d}b^{x}+2\quad. \tag{6}\]

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline CT-graph conf. & \(P_{R}\) & \(P_{E}\) & \(P_{RNP}\) & states & end states \\ \hline \(b=2,d=1,p=0\) & \(3.70\cdot 10^{-2}\) & \(7.40\cdot 10^{-2}\) & \(2^{-1}\) & 8 & 2 \\ \hline \(b=2,d=2,p=0.9\) & \(1.20\cdot 10^{-5}\) & \(4.78\cdot 10^{-5}\) & \(4^{-1}\) & 16 & 4 \\ \hline \(b=3,d=2,p=0.9\) & \(2.10\cdot 10^{-6}\) & \(1.89\cdot 10^{-5}\) & \(9^{-1}\) & 28 & 9 \\ \hline \(b=2,d=4,p=0.9\) & \(3.02\cdot 10^{-9}\) & \(4.84\cdot 10^{-8}\) & \(16^{-1}\) & 64 & 16 \\ \hline \(b=3,d=10,p=0.9\) & \(3.75\cdot 10^{-23}\) & \(2.22\cdot 10^{-18}\) & \(59049^{-1}\) & 177148 & 59049 \\ \hline \(b=2,d=16,p=0.5\) & \(3.04\cdot 10^{-20}\) & \(1.99\cdot 10^{-15}\) & \(65536^{-1}\) & 262144 & 65536 \\ \hline \end{tabular}
\end{table}
Table 2: Examples of reward probabilities per episode, number of states and end states for six different configurations.
Note that the probability \(p\) does not affect the size of the MDP, but is a contributing factor in the complexity of the problem if the set of observations \(\mathcal{O}\) is large. That is because if \(|\mathcal{O}|\) is large, each wait state will randomly manifest itself with a large number of different observations.
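Eq. 6 is a simple geometric sum; a quick check against the state counts of Table 2 (again, our own illustration):

```python
def n_states(b, d):
    """Number of states of the underlying MDP, Eq. 6."""
    return 2 * sum(b ** x for x in range(d + 1)) + 2

print(n_states(2, 2))    # 16, as in Table 2
print(n_states(3, 10))   # 177148
```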
One commonly used approach for RL in POMDP is to map observations to states by deriving a state representation from the history of observations, i.e. \(S_{t}^{\prime}=\langle O_{0},O_{1},...,O_{t}\rangle\). However, this is not applicable to a CT-graph if wait states return observations from a large set \(\mathcal{O}^{W}\). In fact, due to the stochastic process \(X\), the history of observations maps to a large set of equivalent but different states. As the history has variable length according to the stochastic nature of transitions through wait states, the agent cannot infer an MDP from the history, nor from counting the number of steps.
### Observations
The CT-graph provides 1D or 2D (\(r\times r\)) observations. The set of observations \(\mathcal{O}\) is divided into five sets that map the sets of states described earlier, \(\mathcal{O}^{H},\mathcal{O}^{W},\mathcal{O}^{D},\mathcal{O}^{E},\mathcal{O}^{F}\). The sets' cardinality affects the difficulty of the problem. Since elements of \(\mathcal{O}^{D}\) are observed at decision states, they can be seen as _special_ observations, while elements in \(\mathcal{O}^{W}\), provided by wait states, can be seen as _distractors_. A special observation requires the agent to choose between multiple actions in \(\mathcal{A}^{\mathrm{D}}\) in order to continue navigation towards an end state. Special observations are also key to navigating towards the reward. A distractor instead requires the agent to select \(A^{0}\) to continue the navigation towards any end state, and thus has a simpler relationship with the reward. At each time step \(t\), the observation is a random sample from the specific subset of \(\mathcal{O}\) associated with the state in the MDP.
CT-graphs with large observation sets require an RL agent to use experience-derived cause-effect relationships or attention models to perform the default \(A^{0}\) action for wait-state observations and identify decision-state observations that require other actions. This proposition summarises an important aim of the CT-graph environment, i.e., that of reproducing real-world scenarios in which large input-output data streams are not relevant and not causally related to rewards, while a few key stimulus-action pairs are. Human and animal learning has evolved to be robust to such an abundance of stimuli and actions and to perform a search process that discovers key stimulus-action pairs.
#### 2.4.1 Default sets
The default 1D set is composed of one-hot vectors representing all states. In this configuration \(\mathcal{O}=\mathcal{S}\). This is only a testing and debugging feature.
The default 2D set is built as a synthetic image set with patterns. Each element in \(\mathcal{O}\) is generated by upscaling and rotating a random \(4\times 4\) blueprint matrix of \(\{0,1,2\}\) elements to result in a \(12\times 12\) image. The resulting image is then augmented with noise and slightly rotated each time the observation is retrieved and produced by the environment to simulate instances of a class. The upscaling from a \(4\times 4\) random matrix is intended to create an image with local correlations that result into features and patterns. The \(12\times 12\) observation space allows for a variety of synthetic classes to be generated while maintaining low requirements for the computation and feature extraction process. Fig. 3 shows 7 elements from a set \(\mathcal{O}\).
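A minimal sketch of this generation process (upscaling a random \(4\times 4\) blueprint, rotating, and adding per-read noise), assuming numpy and scipy; the noise amplitude and interpolation order are our own choices, and the released implementation may differ in such details.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def make_class(angle):
    """Create one observation class: a 4x4 blueprint of {0,1,2} upscaled to 12x12."""
    blueprint = rng.integers(0, 3, size=(4, 4))
    image = np.kron(blueprint, np.ones((3, 3)))       # upscale by 3 -> 12x12
    return rotate(image, angle, reshape=False, order=1)

def fetch(class_image, noise=0.1, max_rot=5.0):
    """Return one noisy, slightly rotated instance of a class (cf. Fig. 3)."""
    instance = rotate(class_image, rng.uniform(-max_rot, max_rot),
                      reshape=False, order=1)
    return instance + noise * rng.standard_normal(instance.shape)

observation_set = [make_class(angle) for angle in (0, 30, 60) for _ in range(10)]
sample = fetch(observation_set[0])
```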
The \(4\times 4\) random blueprint matrix that contains values in the set {0,1,2} can generate \(3^{16}\) different images. Adding augmentation with two rotations of 30 and 60 degrees, the total space of images is \(3\cdot 3^{16}=3^{17}\) (approximately 129 million). In short, the set \(\mathcal{O}\) can be built drawing from a large space of different images with patterns that require approximate methods.
Figure 3: Examples of 7 classes from a set of observations \(\mathcal{O}\). The two rows show differences due to noise and rotation each time an observation is fetched.

#### 2.4.2 Alternative observations

In theory, alternative data sets such as MNIST, CIFAR-10 or CIFAR-100 could be used instead. These might provide a more realistic set of inputs, but have the limitation of a predefined number of classes and instances per class. This could complicate the automatic instantiation of large graphs. Additionally, the distances within and between classes are not controllable, introducing perception challenges that cannot be easily measured. While possible, the current implementation does not include additional data sets.
### Reward function
Transitions return a reward according to a function \(g\). Assume that \(A^{D^{\circ}}\) is the sequence of actions taken by the agent at decision states during one episode. Then
\[g(s_{t},a_{t})=\left\{\begin{array}{ll}\forall\{s_{t+1},a_{t}\}:s_{t+1}\notin S ^{E}&\Rightarrow g(s,a)=0\\ \forall\{s_{t+1},a_{t}\}:s_{t+1}\in S^{E}&\Rightarrow g(s,a)=c(A^{D^{*}},A^{D ^{\circ}})\end{array}\right. \tag{7}\]
where
\[c(A^{D^{*}},A^{D^{\circ}})=\left\{\begin{array}{ll}1&\mbox{if}\quad A^{D^{*} }=A^{D^{\circ}}\\ 0&\mbox{otherwise}\end{array}\right. \tag{8}\]
is a comparison function that returns 1 when the sequence of actions at decision states coincides with the optimal sequence. In other words, the agent receives a reward of 1 for reaching the goal end state, and zero otherwise.
This setting results in extremely sparse rewards, as shown in Table 2. Measures can be adopted to simplify the problem by introducing dense rewards, e.g., -1 for reaching the fail state, or, similarly, a small positive value for each step.
Eq. 8 implies that the reward distribution in the graph is a needle in a haystack. If a new task is created by providing a new sequence \(A^{D^{*}}\), an optimal search algorithm will solve the task in \(\kappa\) episodes, with
\[\mathbb{E}[\kappa]=(b^{d}+1)/2\quad. \tag{9}\]
Eq. 9 provides a lower bound for an optimal exploration strategy that samples all end states without repetition.
An alternative setting implements a reward gradient across end states. In this case, the function of Eq. 8 becomes
\[c(A^{D^{*}},A^{D^{\circ}})=1-\frac{|A^{D^{*}}-A^{D^{\circ}}|\times[b^{d-1},b^{d-2},..,b^{0}]^{T}}{b^{d}-1}\quad, \tag{10}\]
where the vector \(|A^{D^{*}}-A^{D^{\circ}}|\) is zero if the goal is reached, and the length-\(d\) vector \([b^{d-1},b^{d-2},..,b^{0}]\) provides weighting parameters for the deviation of \(A^{D^{\circ}}\) from \(A^{D^{*}}\). Such a setting provides a gradient across rewards in large graphs such as those in the last two rows of Table 2.
Finally, rewards can be made stochastic simply by using Eq. 10 as the mean in
\[c^{*}(A^{D^{*}},A^{D^{\circ}})=\mathcal{N}(c(A^{D^{*}},A^{D^{\circ}}),\sigma)\quad. \tag{11}\]
Note that a high standard deviation \(\sigma\) in Eq. 11 requires a large number of samples to estimate the return. An overview of stochastic n-armed bandit problems in Sutton and Barto (2018) provides examples of the challenges involved in computing such estimates. However, as opposed to n-armed bandit problems that have one single state and multiple actions, the CT-graph requires visiting many states before reaching a graph end where rewards are located. If stochasticity results in negative reward samples, a policy might be "intimidated" into not approaching graph ends. Paraphrased with a metaphor, this is equivalent to going out in search of food and ending up being attacked by a predator, which has an immediate larger negative reward than not moving. However, going out and looking for food, although fatal in that particular occasion, is still better than staying at home.
We would like to point out that while CT-graphs with stochastic rewards are the most realistic types of problems presented here, practically, their complexity is likely to exceed the capabilities of current RL algorithms, unless stochasticity is used for very small graphs (i.e., depth 1 or 2).
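For illustration, the reward variants of Eqs. 8, 10 and 11 can be sketched compactly as below; this is our own sketch, and it assumes deviation weights \(b^{d-1},\ldots,b^{0}\) so that the graded reward of Eq. 10 stays within \([0,1]\) for length-\(d\) decision sequences.

```python
import numpy as np

def reward(a_star, a_taken, b, graded=False, sigma=0.0, rng=None):
    """Reward at an end state for the optimal (a_star) and taken (a_taken) decision actions."""
    a_star, a_taken = np.asarray(a_star), np.asarray(a_taken)
    d = len(a_star)
    if graded:
        weights = b ** np.arange(d - 1, -1, -1)                        # b^(d-1), ..., b^0
        c = 1.0 - np.abs(a_star - a_taken) @ weights / (b ** d - 1)    # Eq. 10
    else:
        c = 1.0 if np.array_equal(a_star, a_taken) else 0.0            # Eq. 8
    if sigma > 0:                                                      # Eq. 11: stochastic reward
        rng = rng or np.random.default_rng()
        return rng.normal(c, sigma)
    return c
```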
## 3 Learning challenges
We now present exemplary configurations of the CT-graph that can be used to test RL algorithms for specific learning challenges. Such configurations can be set by modifying the configuration json file, whose entries are reported in Table 3. The following examples only represent a small set of possible configurations.
### Fully observable graph
The CT-graph can be configured to be fully observable. This is useful mainly for debugging and testing basic RL algorithms. Full observability in a CT-graph can be obtained by replacing \(X_{s,t}\) with an injective function \(f\) that maps each state to an observation. In other words, each different decision state and each different wait state have a _reserved_ observation from the set \(\mathcal{O}\). By setting MDP_D = true and MDP_W = true, each state in the graph has a unique observation, and the graph thus becomes fully observable.
Configuration **CT-FO-B1**: Fully observable, baseline 1.
**Parameters:** d=2, b=2, p=0, MDP_D=true, MDP_W=true.
**Properties:** A small graph with only 3 decision states, 7 wait states, 4 end states. The number of steps per episode is \(\rho=6\). The probability per episode of scoring a high-reward with a random policy is \(P_{R}=1/2^{5}=1/32\).
**Suitability:** Basic checks and debugging.
Configuration **CT-FO-B2**: Fully observable, baseline 2.
**Parameters:** d=3, b=2, p=0.5, MDP_D=true, MDP_W=true.
**Properties:** A larger graph with 7 decision states, 15 wait states, 8 end states. The number of steps per episode is \(\mathbb{E}[\rho]=11\). \(P_{R}=5.93\cdot 10^{-5}\), or 1 reward in 16875 episodes.
**Suitability:** Testing standard RL algorithms with sparse rewards.
### Learning with variable degrees of observability
#### 3.2.1 The surjective CT-graph
In the surjective CT-graph, partial observability is introduced by using a surjective function \(f\) instead of \(X_{(s,t)}\): only five observations are used, one for each of the five types of state.
The surjective graph is a POMDP that models navigation environments in which the same visual inputs repeat in different states of the MDP. Such a graph cannot be solved by vanilla RL if observations are treated as states.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Parameter** & **Description** & **Common values** \\ \hline Seed & Seed of the random number generator used for noise and stochastic rewards & 1;2;...;n \\ \hline graph\_shape & & \\ d & Number of sequential decision states & 1 to 5 \\ b & Number of branches from a decision state & 2 or 3 \\ p & Probability of remaining in a wait state & 0; 0.5; 0.9 \\ \hline reward & & \\ high\_r & Value of the reward at the goal state & 1 \\ fail\_r & Value of the reward at the fail state & 0; -1 \\ std\_r & Standard deviation on reward sampling & 0; 0.1 \\ \hline observations & & \\ MDP\_D & If true, each decision state provides a unique obs. & true, false true, false true, false \\ MDP\_W & If true, each wait state provides a unique obs. & \([1,|\mathcal{O}^{W}|]\) \\ W\_IDs & Start and end indices of obs. for wait states & \([|\mathcal{O}^{W}|\) + 1,\(|\mathcal{O}^{D}|]\) \\ D\_IDs & Start and end indices of obs. for decision states & \([|\mathcal{O}^{W}|\) + 1,\(|\mathcal{O}^{D}|]\) \\ \hline image set & & 1;2;...;n \\ seed & Specific seed for image data set & false, true \\
1D & Use 1D one-hot vectors as obs. & 5 or higher \\ nr\_images & Number of images to be created in the data set & 0 deg; 5 deg \\ noise on read & Noise on each pixel when an obs. is read & 0 deg; 5 deg \\ rotation on read & Maximum random rotation when an obs. is read & 0 deg; 5 deg \\ \hline \end{tabular}
\end{table}
Table 3: List of parameters available in the json configuration file.
Configuration **CT-SU-B1**: Surjective, baseline 1.
**Parameters:** d=2, b=2, p=0, MDP_D=false, MDP_W=false, D_IDs=[2,2], W_IDs=[3,3]
**Properties:** A small graph with only 3 decision states, 7 wait states, 4 graph ends. The number of steps per episode is \(\rho=6\). The probability per episode of scoring a high-reward with a random policy is \(P_{R}=1/2^{5}=1/32\). However, all decision states provide the same observation, and similarly all wait states.
**Suitability:** Testing RL for POMDP with either memory or belief systems. Algorithms without memory or belief system will be unable to reach particular graph ends for which different actions are required at different decision states that provide the same observation.
#### 3.2.2 The confounding CT-graph
Employing the stochastic process \(X_{(s,t)}\) to generate observations results in a higher level of non-observability and more challenges for the learning agent. In the case of the _confounding_ CT-graph, we assume that all decision states provide the same observation from the set \(\mathcal{O}^{D}\) of cardinality one, and all wait states provide random observations from a large set \(\mathcal{O}^{W}\) with \(|\mathcal{O}^{W}|>>1\).
The challenge with the confounding CT-graph is that the agent needs to learn to ignore all seemingly random stimuli and perform \(A^{0}\) at wait states, while learning to identify decision states where the correct decision action is required.

Figure 4: Fully observable, surjective and confounding CT-graphs. The numbers inside the nodes are the class ID of the observations. (a) Fully observable: each state in the MDP has a unique class in the observation set. (b) Surjective graph: each state type provides one observation. (c) Confounding graph: the wait states provide stochastic observations from a large set \(\mathcal{O}^{W}\).
The confounding graph is the most interesting configuration setting in which cause-effect relationships have to be extracted and separated from random stimuli. Approaches such as attention, associative learning and neuromodulation are likely to provide an advantage in this case.
Configuration **CT-CO-B1**: Confounding, baseline 1.
**Parameters:** d=2, b=2, p=0, MDP_D=false, MDP_W=false, D_IDs=[2,2], W_IDs=[3,102]
**Properties:** A small graph with only 3 decision states, 7 wait states, 4 graph ends. The number of steps per episode is \(\rho=6\). The probability per episode of scoring a high-reward with a random policy is \(P_{R}=1/2^{5}=1/32\). However, all decision states provide the same observation. Wait states provide any observation from a set of 100 images.
**Suitability:** Testing RL for POMDP with either memory or belief systems, and attention systems, or particular abilities to learn particular features in the input stream and ignore others.
#### 3.2.3 Fully stochastic-observation graph
If the cardinality of all subsets is greater than one, i.e., \(|O^{H}|=|O^{D}|=|O^{W}|=|O^{E}|=|O^{F}|>1\), the challenge increases because the agent needs to learn an explicit association between groups of classes and states of the POMDP. While an agent in the confounding graph could learn to identify decision states, and ignore all other observations, in a fully stochastic-observation graph, the agent is required to derive the association of each image in \(O\) with the specific subset. In practice, this is an extremely hard problem that links to real-world scenarios only if each subset of \(O\) share common features. In other words, if observations within one subset are more similar to each other than observations among different sets, it is possible for an RL agent to learn what makes a wait observation different from a decision state observation.
### Learning with distal and sparse rewards
The length of the graph, expressed by \(\rho\) in Eq. 2, determines a distal measure of the rewards, or in other words, the length of the episode.
In all previous cases with a deterministic \(\rho\), an agent can learn to reach a specific goal state by ignoring all inputs, and simply applying the unique and optimal sequence. If the probability of staying in a wait state, \(p\), is set to be greater than 0, \(\rho\) becomes stochastic. In this case, the agent is
required to detect the difference between a decision state and a wait state to perform an optimal sequence of actions. Additionally, a high value of \(\rho\) increases the sparsity of the reward and decreases the probability of finding a reward by a random policy.
Configuration **CT-POSR-B1**: Partially observable sparse reward, baseline 1.
**Parameters:** d=2, b=2, p=0.5, MDP_D=false, MDP_W=false, D_IDs=[2,2], W_IDs=[3,3]
**Properties:** A small graph with only 3 decision states, 7 wait states, 4 end states. The number of steps per episode is stochastic and given by Eq. 2, which in this case is \(\mathbb{E}[\rho]=9\). The probability per episode of scoring a high-reward with a random policy is \(P_{R}=0.00088=8.88\cdot 10^{-4}\) (see Eq. 3). All decision states provide the same observation. All wait states provide the same observation.
**Suitability:** Testing RL for POMDP with either memory or belief systems on problems with distal and sparse rewards and a variable duration of episodes.
### Lifelong learning across multiple tasks
There are multiple ways to generate multiple tasks, which can then be assembled in a curriculum in which tasks can be learned sequentially according to a lifelong or continual learning protocol. The CT-graph allows for variations across three different domains: (1) variation of reward function; (2) variation of input distributions; (3) variation of the MDP.
**Variation of the reward function**. For a given CT-graph with \(n\) end states, it is possible to create \(n\) tasks, each of which has the high reward at a different end state. The input distribution and the structure of the MDP remain unchanged. This means that different tasks have a different sequence of optimal actions, or policy. Thus, such tasks are adversarial, meaning that one single function cannot learn more than one such task.
**Variation of the input distribution**. Multiple tasks can be generated by changing some or all classes in the observation sets. This can be done by selecting different image IDs, or simply by creating a new data set with different seeds.
**Variation of the MDP**. Two CT-graphs that have different shapes, e.g., a depth 2 and a depth 3, will have different MDP structures.
#### 3.4.1 Exploiting task similarities for lifelong learning
If two CT-graphs have different shapes, different input distributions and different reward functions, they can be used in an LL curriculum to test particular learning properties. In particular, their lack of similarities can be exploited to test the ability of an LRL algorithm to learn new and uncorrelated tasks without suffering from catastrophic forgetting. However, such a curriculum will not test the ability to exploit old and new knowledge, as measured by backward and forward transfer metrics (Baker et al., 2023).
A more interesting curriculum can be built by creating tasks that share similarities. One such example is two surjective graphs that share the same inputs and MDP, but have different reward functions. Using the example CT-SU-B1 above, it is possible to create a curriculum of 4 tasks, each with the reward at a different end state. Solving all tasks requires learning different functions. However, the policy required at wait states is the same. This similarity can be exploited by LRL algorithms to accelerate learning across tasks.
We list here some practical ways to generate LL curricula with task similarities.
* Surjective graph with different reward locations. The similarities are the policy at the wait states.
* Surjective graph with similar reward locations. The similarities are both in the policy at the wait states and at some of the decision states.
* Graphs with growing depth. A depth 3 graph will contain a depth 2 graph, meaning that they share similarities. Additionally, similarities can be created if the shallower graph has a goal location along the trajectory of the goal location in the deeper graph.
### Coping with variant and invariant features
Assume that an LRL algorithm is given a large set of \(n\) different graphs. All such graphs, while different, share some similarities. The similarities represent invariant features across all problems. Therefore, an ideal learner could implement a policy that is composed of a fixed part to deal with invariant features, and a learnable policy to adapt to variant features. Such a distinction introduces a hierarchy of knowledge that can be exploited by meta-RL algorithms, evolutionary algorithms that evolve both inborn knowledge and learning strategies (Soltoggio et al., 2018), and, more generally, LRL algorithms that can exploit task similarities to accelerate learning.
### Code
The CT-graph is implemented as an OpenAI gym environment. The code is available at [https://github.com/soltoggio/CT-graph](https://github.com/soltoggio/CT-graph).
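A typical interaction loop follows the standard Gym pattern; the sketch below assumes the classic four-value `step` return and a generic environment instance (consult the repository for the exact constructor and registration name).

```python
import gym

def run_episode(env: gym.Env) -> float:
    """Roll out one episode with a random policy on a Gym-style environment,
    e.g. an instance of the CT-graph created from the repository's code."""
    obs = env.reset()
    done, episode_return = False, 0.0
    while not done:
        action = env.action_space.sample()          # random policy, for illustration only
        obs, reward, done, info = env.step(action)
        episode_return += reward
    return episode_return
```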
## 4 Discussion
The inspiration that led to the development of the CT-graph is discussed in the following section. A brief overview of the known performance of RL algorithms on the CT-graph is also provided.
### Inspiration
The CT-graph is an abstract and generalised problem formulation that draws elements from a range of concepts such as sequence learning (Clegg et al., 1998), backup diagrams (Sutton and Barto, 2018), the well known T-maze environment used in animal behavior (Olton, 1979; Wenk, 1998) and computational studies (Soltoggio et al., 2008), and the problem of learning from rich and confounding stimuli-actions sequences (Izhikevich, 2007; Soltoggio and Steil, 2013; Soltoggio et al., 2013).
Decision problems can be seen as a sequence of stimuli and actions that lead to a desired state. A practical example is a T-maze environment in which an animal, typically a rat, starts at the bottom of the maze and walks forward until it reaches a turning point. On either side there is a reward or a punishment. An animal such as a rat will learn which turning direction to choose to maximise reward and reduce punishment. Multiple T-junctions can be added to increase the length of the sequence of actions to memorise. The sequence of correct turning actions to reach a reward is an arbitrarily long sequence of optimal actions. As such, the problem can be formulated as a reinforcement learning problem. If turning points and corridors look similar, observations do not map to states in an MDP, and a memory system is required to perform navigation.
Additionally, while executing a sequence of stimuli-informed actions, distracting or random stimuli and actions may provide spurious correlations with rewards. For example, driving from A to B might lead a driver to observe identical cues, e.g., speed limit signs or cars parked on the side of the road, that do not reveal the state along the path because they repeat similarly at different locations. Thus, finding the optimal sequence of actions requires the ability to discount irrelevant information, and extract and focus on a subset of stimulus-action sequences that lead to a reward.
This condition can be abstracted as in the information stream presented in Fig. 5. If there are cause-effect relationships, but other stimuli and actions occur simultaneously, learning the true cause-effect relationships requires the observation of many occurrences of the reward to extract the unique causing factors. The challenge grows as the delays among stimuli, actions, and reward increase, because longer delays also increase the number of intervening confounding stimuli that are not causally related to reaching a reward.
### Known performance of RL algorithms on the CT-graph
The range of SoTA RL algorithms that can address the many learning challenges in the CT-graph is too large to provide comprehensive tests as part of this paper. We choose instead to list the papers that have used the CT-graph so far and summarise the main results.
In Ben-Iwhiwhu et al. (2020), a surjective graph (POMDP) with depth 2 and 3 was used to learn multiple reward functions sequentially (different goal locations). Meta-RL algorithms were compared with an evolutionary approach. Due to the memory requirements, partial observability and adversarial tasks, CAVIA (Zintgraf et al., 2019), MAML (Finn et al., 2017) and RL\({}^{2}\) (Duan et al., 2016) failed to solve all tasks. The neuroevolution approach instead evolved memory units triggered by particular observations, which led to solving the tasks. A similar depth 2 graph was used in Dick et al. (2020) to test the ability of a statistical approach to detect task changes. The graphs differed by a single variation in the transition matrix, which made the two environments very similar and the change therefore difficult to detect. However, the performance of RL algorithms was not assessed.

Figure 5: Example of ambiguous cause-effect information stream (problem formulation derived from Izhikevich (2007) and Soltoggio and Steil (2013)). In the figure, the distinction between low and high-level stimuli represents the fact that cause-effect relationships are likely to exist at high levels of representations. (a) A series of stimuli and actions precede the delivery of a reward. From the observation of these series, any stimuli, any actions, or any combinations of stimuli and actions that precede the reward could be the cause. If a second sequence is observed, panel (b), we can restrict the possible causes of the reward to s0, a8, a7, s0+a8, s0+a7. Finally, in this particular example, a third observation in panel (c) allows us to determine that the reward is caused by s0+a8. Irrelevant or confounding stimuli are implemented in the CT-graph with wait states and large sets of observations.
In Ladosz et al. (2021), a confounding CT-graph was used to test an RL architecture that combined backpropagation with an associative Hebbian neural unit. The most complex benchmark had a high \(p=0.9\), which led to long trajectories due to many cycles in the wait states, as well as a large number of observations (500). The setting aimed to reproduce a real-world condition in which a few relevant task cues are to be discovered among a large amount of diverse irrelevant observations. While the proposed algorithm was devised to perform well under such challenging conditions, several baselines, including DQN (Mnih et al., 2013), QRDQN+LSTM (Hausknecht and Stone, 2015), REINFORCE (Williams, 1992), A2C (Mnih et al., 2016), AMRL (Beck et al., 2019) and Backpropamine (Miconi et al., 2018), performed poorly or failed completely on the most complex graphs.
In Ben-Iwhiwhu et al. (2022), a meta-RL method that used a neuro-modulatory approach attempted to solve full-MDP graphs of depths 2, 3 and 4 with varying reward functions. CAVIA (Zintgraf et al., 2019) and PEARL (Rakelly et al., 2019) were shown to perform well with the addition of neuromodulation in tests where multiple tasks were instantiated with different goal locations.
In Ben-Iwhiwhu et al. (2022), a CT-graph of depth 5 was solved for the first time. The graph was MDP for the decision states (MDP_D=true) and POMDP for the wait states (MDP_W=false), which led to graphs that can be solved without memory and share similar wait states. The graph with depth 5 had a very small reward probability of \(5.6\cdot 10^{-6}\) (one reward every 177,147 episodes with a random policy). The novel algorithm investigated the potential of modulating masks in lifelong RL. The depth 5 graph was solved only by a system that first learned to solve depth 2, 3 and 4, in such order, before tackling depth 5. A linear combination of masks was used to exploit previously learned knowledge and apply it to the most challenging task.
### Limitations
The CT-graph may not be the most appropriate benchmark for all RL problems as some distinctive features lead to both advantages and disadvantages.
While input spaces can be customised to larger and more complex images, the standard synthetic set is designed for speed and for having equidistant classes. Expanding the CT-graph with other data sets is not immediate, and the new benchmark will have different properties, making comparisons difficult. A related limitation is that the current input set is not designed to have correlations among observations: in a full MDP setting, each observation is distinct. As a result, the power of approximate methods may not be fully tested. Setting a high level of noise when fetching observations will provide partial compensation for the problem. Another solution is to alter the image generation process to introduce correlations or similarities among particular observations, e.g., decision state observations, but again, that is not part of the current implementation.
The discrete action space makes the CT-graph less suitable to test algorithms for continuous output.
The nature of a tree graph is that it expands exponentially as the agent moves away from the start node. Each step in the CT-graph represents an action that cannot be undone, i.e., the agent cannot go back along the tree. As a result, there is only one optimal trajectory, which may not be the case for other RL environments where multiple optimal trajectories can exist.
Finally, while the many configurations offer a large range of problems, they also imply that the CT-graph is not a single reference benchmark. Different algorithms can be compared only if they are tested on the same configurations.
## 5 Conclusion
This paper describes a set of mathematically formulated RL problems that, despite their apparent simplicity, can be used to create different and hard challenges for RL algorithms. The problems are inspired by real-world scenarios where partial observability, confounding stimuli, distal rewards and multiple tasks affect the efficacy of RL algorithms. While this benchmark does not use appealing or eye-catching visual inputs as 3D FPV environments do, it has the advantage of allowing for precisely defined levels of complexity. In particular, the CT-graph can be set to provide varying levels of observability, measurable sparsity of the reward, measurable size of the MDP, and other properties. The depth of the graph can be easily configured to obtain extremely sparse rewards and large MDPs that render SoTA RL algorithms ineffective. Therefore, the CT-graph is particularly suitable when specific learning properties of an RL algorithm need to be rigorously assessed against precisely defined learning challenges. Finally, this benchmark is suitable for testing lifelong learning capabilities due to its ability to generate a large number of tasks with various degrees of similarity.
### Acknowledgement
This material is based upon work supported by the United States Air Force Research Laboratory (AFRL) and Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-18-C-0103 (Lifelong Learning Machines) and Contract No. HR00112190132 (Shared Experience Lifelong Learning).
## Appendix
### Derivation of the probability of a reward
We show here how to compute the probability of obtaining a reward while applying a random policy. Firstly, we need to compute the probability of an episode having a given length \(\rho\). The probability of \(\rho\) taking a particular value \(l\) can be found in the following manner. Let a _wait transition_ be a transition from a wait state back to the same wait state, and an _escape transition_ be a transition from a wait state to the next decision or end state. Consider first one particular ordering, for example the one in which all wait transitions occur before the first decision state and the agent escapes every following wait state immediately. The minimum possible value of \(\rho\) is \(2(d+1)\), so the number of wait transitions \(n\) required to make \(\rho=l\) is \(n=l-2(d+1)\). The probability of making a wait transition is \(p\), and the probability of escaping a wait state is \(1-p\). The probability of any particular sequence containing \(n\) wait transitions and \(d+1\) escape transitions is therefore \(p^{n}\times(1-p)^{d+1}\). We then multiply this by the number of ways the \(n\) wait transitions can be arranged between the escape transitions. The only restrictions are that the number of wait transitions must be \(n\), the number of escape transitions must be \(d+1\), and the sequence must end in an escape transition. This amounts to choosing the positions of the \(d\) non-final escape transitions among the \(n+d\) transitions that precede the final escape, giving \(\frac{(n+d)!}{n!\times d!}\) possible arrangements. So, the total probability of \(\rho\) taking the value \(l=n+2(d+1)\) is
\[P(\rho=n+2(d+1))=p^{n}\times(1-p)^{d+1}\times\frac{(n+d)!}{n!\times d!}\quad. \tag{12}\]
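A direct translation of Eq. (12) into Python, useful for checking the formula numerically (a sketch; `math.comb` computes the binomial coefficient \((n+d)!/(n!\,d!)\)):

```python
from math import comb

def episode_length_pmf(l, d, p):
    """P(rho = l) from Eq. (12): l = n + 2(d+1), with n wait transitions of
    probability p and d+1 escape transitions of probability 1-p."""
    n = l - 2 * (d + 1)
    if n < 0:
        return 0.0
    return p**n * (1 - p)**(d + 1) * comb(n + d, d)
```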
To find the probability of the agent reaching one particular graph end through random exploration in one episode, we multiply the probability of the agent reaching the graph end given an episode of length \(l\) by the probability of the episode having length \(l\), and sum these probabilities over all \(l\). For \(l<2(d+1)\), the probability \(P(\rho=l)=0\). So, we can start the summation at \(l=2(d+1)\), which is equivalent to \(n=0\).
Recall that \(n\) is the number of times the agent transitions from the wait state back into the wait state, and \(b\) is the branching factor, or number of paths available at a decision state. Note that \(b+1\) actions can be chosen at any state. Also, recall that at the home state, any action results in a transition to the next state. This means that of all \(l\) actions taken, only \(l-1\) must be specific. \(P_{R}\) is therefore expressed by:
\[P_{\text{R}}=\sum_{n=0}^{\infty}\frac{1}{(b+1)^{2d+1+n}}(1-p)^{d+1}p^{n}\frac{(n+d)!}{n!\cdot d!}=(b+1)\left(\frac{1-p}{(b+1)(b+1-p)}\right)^{d+1}.\]
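The series and the closed form can be checked against each other numerically; the sketch below truncates the sum (an approximation that converges quickly because \(p<1\)). For example, \(b=2\), \(d=5\), \(p=0\) gives \(1/177{,}147\approx 5.6\cdot 10^{-6}\), the reward probability quoted earlier for the depth-5 graph.

```python
from math import comb

def reward_probability_series(b, d, p, n_max=2000):
    """Truncated version of the series for P_R above."""
    return sum(
        (1 - p) ** (d + 1) * p**n * comb(n + d, d) / (b + 1) ** (2 * d + 1 + n)
        for n in range(n_max + 1)
    )

def reward_probability_closed(b, d, p):
    """Closed-form expression for P_R."""
    return (b + 1) * ((1 - p) / ((b + 1) * (b + 1 - p))) ** (d + 1)

# e.g. reward_probability_closed(2, 5, 0.0) == 1 / 177_147 ≈ 5.6e-6
```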
2301.07673 | OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models | We propose a new method for object pose estimation without CAD models. The previous feature-matching-based method OnePose has shown promising results under a one-shot setting which eliminates the need for CAD models or object-specific training. However, OnePose relies on detecting repeatable image keypoints and is thus prone to failure on low-textured objects. We propose a keypoint-free pose estimation pipeline to remove the need for repeatable keypoint detection. Built upon the detector-free feature matching method LoFTR, we devise a new keypoint-free SfM method to reconstruct a semi-dense point-cloud model for the object. Given a query image for object pose estimation, a 2D-3D matching network directly establishes 2D-3D correspondences between the query image and the reconstructed point-cloud model without first detecting keypoints in the image. Experiments show that the proposed pipeline outperforms existing one-shot CAD-model-free methods by a large margin and is comparable to CAD-model-based methods on LINEMOD even for low-textured objects. We also collect a new dataset composed of 80 sequences of 40 low-textured objects to facilitate future research on one-shot object pose estimation. The supplementary material, code and dataset are available on the project page: https://zju3dv.github.io/onepose_plus_plus/. | Xingyi He, Jiaming Sun, Yuang Wang, Di Huang, Hujun Bao, Xiaowei Zhou | 2023-01-18T17:47:13Z | http://arxiv.org/abs/2301.07673v1 | # OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models
###### Abstract
We propose a new method for object pose estimation without CAD models. The previous feature-matching-based method OnePose [48] has shown promising results under a one-shot setting which eliminates the need for CAD models or object-specific training. However, OnePose relies on detecting repeatable image keypoints and is thus prone to failure on low-textured objects. We propose a keypoint-free pose estimation pipeline to remove the need for repeatable keypoint detection. Built upon the detector-free feature matching method LoFTR [47], we devise a new keypoint-free SfM method to reconstruct a semi-dense point-cloud model for the object. Given a query image for object pose estimation, a 2D-3D matching network directly establishes 2D-3D correspondences between the query image and the reconstructed point-cloud model without first detecting keypoints in the image. Experiments show that the proposed pipeline outperforms existing one-shot CAD-model-free methods by a large margin and is comparable to CAD-model-based methods on LINEMOD even for low-textured objects. We also collect a new dataset composed of 80 sequences of 40 low-textured objects to facilitate future research on one-shot object pose estimation. The supplementary material, code and dataset are available on the project page: [https://zju3dv.github.io/onepose_plus_plus/](https://zju3dv.github.io/onepose_plus_plus/).
## 1 Introduction
Object pose estimation is crucial for immersive human-object interactions in augmented reality (AR). The AR scenario demands the pose estimation of arbitrary household objects in our daily lives. However, most existing methods [39; 29; 38; 55; 2; 4; 37] either rely on high-fidelity object CAD models or require training a separate network for each object category. The instance- or category-specific nature of these methods limits their applicability in real-world applications.
To alleviate the need for CAD models or category-specific training, OnePose [48] proposes a new setting of _one-shot object pose estimation_. It assumes that only a video sequence with annotated object poses is available for each object and aims for its pose estimation in arbitrary environments. This setting eliminates the requirements for CAD models and the separated pose estimator training for each object, and thus is more widely applicable for AR applications. OnePose adopts the feature-matching-based visual localization pipeline for this problem setting. It reconstructs sparse object point clouds with SfM [44] and establishes 2D-3D correspondences between keypoints in the query image and the point cloud model to estimate the object pose. Being dependent on detecting repeatable
keypoints, OnePose struggles with low-textured objects whose complete point clouds are difficult to reconstruct with keypoint-based SfM. Without complete point clouds, pose estimation is prone to failure for many low-textured household objects.
We propose to use a keypoint-free feature matching pipeline on top of OnePose to handle low-textured objects. The keypoint-free semi-dense feature matching method LoFTR [47] achieves outstanding performance on matching image pairs and shows strong capabilities for finding correspondences in low-textured regions. It uses centers of regular grids on a left image as "keypoints", and extracts sub-pixel accuracy matches on the right image in a coarse-to-fine manner. However, this two-view-dependent nature leads to inconsistent "keypoints" and fragmentary feature tracks, which go against the preference of modern SfM systems. Therefore, keypoint-free feature matching cannot be directly applied to OnePose for object pose estimation. We will further elaborate this issue in Sec. 3.1.
To get the best of both worlds, we devise a novel system to adapt keypoint-free matching for one-shot object pose estimation. We propose a two-stage pipeline for reconstructing a 3D structure, striving for both accuracy and completeness. For testing, we propose a sparse-to-dense 2D-3D matching network that efficiently establishes accurate 2D-3D correspondences for pose estimation, taking full advantage of our keypoint-free design.
More specifically, to better adapt LoFTR [47] for SfM, we design a coarse-to-fine scheme for accurate and complete semi-dense object reconstruction. We disassemble the coarse-to-fine structure of LoFTR and integrate them into our reconstruction pipeline. In the coarse reconstruction phase, we use less accurate yet repeatable LoFTR coarse correspondences to construct consistent feature tracks for SfM and yield an inaccurate but complete semi-dense point cloud. Then, our novel refinement phase optimizes the initial point cloud by refining "keypoint" locations in coarse feature tracks to sub-pixel accuracy. As shown in Fig. 1, our framework can reconstruct accurate and complete semi-dense point clouds even for low-textured objects, which lays the foundation for building high-quality 2D-3D correspondences for pose estimation.
At test time, we draw inspiration from the sparse-to-dense matching strategy in visual localization [12], and further adapt it to direct 2D-3D matching in a coarse-to-fine manner for efficiency. Additionally, we use self- and cross-attention to model long-range dependencies required for robust 2D-3D matching and pose estimation of complex real-world objects, which usually contain repetitive patterns or low-textured regions.
We evaluate our framework on the OnePose [48] dataset and the LINEMOD [16] dataset. The experiments show that our method outperforms all existing one-shot pose estimation methods [48, 33] by a large margin and even achieves comparable results with instance-level methods [39, 29] which are trained for each object instance with a CAD model. To further evaluate and demonstrate the capability of our method in real-world scenarios, we collect a new dataset named OnePose-LowTexture, which comprises 80 sequences of 40 low-textured objects.
#### Contributions.
* A keypoint-free SfM method for semi-dense reconstruction of low-textured objects.
* A sparse-to-dense 2D-3D matching network for accurate object pose estimation.
* A challenging real-world dataset OnePose-LowTexture composed of 40 low-textured objects with ground-truth object pose annotations.
Figure 1: **Comparison Between Our Method and OnePose [48]. For low-textured objects that are challenging for OnePose, our method can reconstruct their semi-dense point clouds with more complete geometry and thus achieves more accurate object pose estimation. Green and blue boxes represent ground truth and estimated poses, respectively.**
Related work
CAD-Model-Based Object Pose Estimation.Many previous methods leverage known object CAD models for pose estimation, which can be further categorized into instance-level, category-level, and generalizable methods by their generalizability. Instance-level methods estimate object poses either by directly regressing poses from images [58; 20; 29] or construct 2D-3D correspondences and then solve poses with PnP [39; 59]. The primary deficiency is that these methods need to train a separate network for each object. Category-level methods, such as [55; 51; 21; 54; 56; 4], learn the shape prior shared within a category and eliminate the need for CAD models in the same category at test time. However, these methods cannot handle objects in unseen categories. Some recent methods leverage the generalization power of 2D feature matching for the pose estimation of unseen objects. Reference images are rendered with CAD models and then matched with the query image, using either sparse keypoints matching [61] or dense optical flow [45]. All methods mentioned above depend on high-fidelity textured CAD models for training or rendering, which are not easily accessible in real-world applications. Our framework, instead, reconstructs a 3D object model from pose-annotated images for object pose estimation.
CAD-Model-Free Object Pose Estimation.Some recent methods get rid of the CAD model completely. RLLG [2] uses correspondences between image pairs as supervision for training, without a known object model. However, it still requires accurate object masks as supervision, which are not easily accessible without a CAD model. NeRF-Pose [24] reconstructs a neural representation NeRF [36] for an object first and train an object coordinate regression network for pose estimation. These methods are not generalizable to unseen objects. The recently proposed Gen6D [33] and OnePose [48] only require a set of reference images with annotated poses to estimate object poses and can generalize to unseen objects. Gen6D uses detection and retrieval to initialize the pose of a query image and then refine it by regressing the pose residual. However, Gen6D requires an accurately detected 2D bounding box for pose initialization and struggles with occlusion scenarios. OnePose reconstructs objects' sparse point cloud and then extracts 2D-3D correspondences for solving poses. It performs poorly on low-textured objects because of its reliance on repeatably detected keypoints. Our work is inspired by OnePose but eliminates the need for keypoints in the entire framework, which leads to better performance on both textured and low-textured objects.
Notably, leveraging feature matching for object pose estimation is a long-studied problem. Some previous methods [9; 35; 14; 17; 46] extract keypoints on the query image first and perform matching with reference images or SfM model to obtain 2D-3D matches for pose estimation. The main challenges are the ambiguous matches incurred by low-textured and repetitive patterns. They either rely on the ratio test in the matching stage [35; 14; 46] or leverage prioritized hypothesis testing in the outlier filtering stage [9; 17] to reject ambiguous matches. Different from them, our framework eliminates the keypoint detection for the query image by directly performing matching between the 2D feature map and the 3D model, which benefits pose estimation for low-textured objects. Moreover, we leverage the attention mechanism to disambiguate 2D and 3D features for matching, while the direct feature disambiguation is not explored by these methods.
Structure from Motion and Visual Localization.Visual localization estimates camera poses of query images relative to a known scene structure. The scene structure is usually reconstructed with Structure-from-Motion (SfM) [44], relying on the feature matching methods [34; 6; 62; 25; 41; 13]. The localization problem is then solved by finding 2D-3D correspondences between the 3D scene model and a query image. HLoc [40] scales up visual localization in a coarse-to-fine manner. It is based on image retrieval and establishes 2D-3D correspondences by lifting 2D-2D matches between query images and retrieved database images to 3D space. However, HLoc is not suitable for our setting since it is slow during pose estimation because it depends on 2D-2D matching of multiple image pairs as the proxy to locate one query image. Our framework is more relevant to the previous visual localization methods which are based on efficient direct 2D-3D matching [43; 26; 49; 32; 3; 60; 27; 5; 7; 50; 42; 12]. To boost matching efficiency between the large-scale point cloud and query images, some of them narrow the searching range by leveraging the priors [43; 26; 5] or compress 3D models by quantizing features [43; 32; 42]. However, these strategies contribute little to the disambiguation of features. They often use priors [43; 32; 27; 42] or geometric verification [49; 3; 60; 50] in the outlier filtering stage to cope with the challenges from low-textured regions or repetitive patterns. In contrast, our method works on the 2D-3D matching phase but focuses on disambiguating features
before matching. Our idea is to directly disambiguate and augment dense features by implicitly encoding their spatial information and relations with other features in a learning-based manner.
Our keypoint-free SfM framework is related to SfM refinement methods PatchFlow [8] and PixSfM [31]. They improve keypoint-based SfM for more accurate 3D reconstructions by refining inaccurately-detected sparse local features with local patch flow [8] or dense feature maps [31]. Different from them, we leverage fine-level matching with Transformer [53] to refine the 2D locations of coarse feature tracks and then optimize the 3D model with geometric error. Please refer to the supplementary for more detailed discussions. [57] also works on keypoint-free SfM. However, it refines the coarse matches by the keypoint relocalization, which is single-view dependent and faces the issue of inaccurate keypoint detection. In contrast, our refinement leverage two-view patches to find accurate matches. Note that there are also methods proposed by keypoint-free matchers [47; 62] to adapt themselves for SfM. They either round matches to grid level [47] or merge matches within a grid to the average location [62] to obtain repeatable "keypoints" for SfM. However, all these approaches trade-off point accuracy for repeatability. On the contrary, our framework obtains repeatable features while preserving the sub-pixel matching accuracy by the refinement phase.
## 3 Methods
An overview of our method is shown in Fig. 2. Given a reference image sequence with known object poses \(\{\mathbf{I}_{i},\ \boldsymbol{\xi}_{i}\}\), our objective is to estimate the object poses \(\{\boldsymbol{\xi}_{q}\}\) for the test images, where \(i\) and \(q\) denote the indices of the reference images and test images, respectively. To achieve this goal, we propose a novel two-stage pipeline, which first reconstructs the accurate semi-dense object point cloud from reference images (Section 3.2), and then solves the object pose by building 2D-3D correspondences in a coarse-to-fine manner for test images (Section 3.3). Since our method is highly related to the keypoint-free matching method LoFTR [47], we give it a short overview in Section 3.1.
### Background
Keypoint-Free Feature Matching Method LoFTR [47]. Without a keypoint detector, LoFTR builds semi-dense matches between image pairs (noted as left and right images) in a coarse-to-fine pipeline. First, dense matches between two coarse-level feature maps (\(\frac{1}{8}\) resolution in LoFTR) are built and upsampled, yielding coarse semi-dense matches in the original resolution. With the locations of all left matches fixed, the right matches are refined to a sub-pixel level using fine-level feature maps. Thanks to the keypoint-free design and the global receptive field of Transformers, LoFTR is capable of building correspondences in low-textured regions.
Problem of Using LoFTR for Keypoint-Based SfM.Directly combining LoFTR with modern keypoint-based SfM systems such as COLMAP [44] is not applicable since they rely on fixed keypoints detected on each image to construct feature tracks for estimating 3D structures. However, for LoFTR, its matching locations on a right image depend on its pairing left images. Therefore, the right matching locations are not consistent when paired with multiple left images. Due to this reason, keypoint-free feature matching cannot establish feature tracks across multiple views for effective 3D structure optimization in SfM and is thus not directly applicable in OnePose.
Figure 2: **Overview. 1.** For each object, given a reference image sequence \(\{\mathbf{I}_{i}\}\) with known object poses \(\{\boldsymbol{\xi}_{i}\}\), our keypoint-free SfM framework reconstructs the semi-dense object point cloud in a coarse-to-fine manner. The coarse reconstruction yields the initial point cloud, which is then optimized to obtain an accurate point cloud in the refinement phase. **2.** At test time, our 2D-3D matching network directly matches a reconstructed object point cloud with a query image \(\mathbf{I}_{q}\) to build 2D-3D correspondences \(\mathcal{M}_{3D}\), and then the object pose \(\boldsymbol{\xi}_{q}\) is estimated by solving PnP with \(\mathcal{M}_{3D}\).
### Keypoint-Free Structure from Motion
To better adapt LoFTR for SfM, we design a coarse-to-fine SfM framework leveraging the properties of LoFTR's coarse and fine stages separately. Our framework constructs the coarse structure of the feature tracks \(\{\mathcal{T}_{c}^{j}\}\) and point cloud \(\{\mathbf{P}_{c}^{j}\}\) in the coarse reconstruction phase. Then in the refinement phase, the coarse structures are refined to obtain the accurate point cloud \(\{\mathbf{P}^{j}\}\). For clarity, in this part, we use \(\tilde{\cdot}\) to denote coarse matching results and \(\hat{\cdot}\) to denote fine matching results. We consider the feature track \(\mathcal{T}^{j}=\{\mathbf{u}^{k}\in\mathbb{R}^{2}|k=1...N_{j}\}\) as a set of matched 2D points observing a 3D point \(\mathbf{P}^{j}\in\mathbb{R}^{3}\). \(j\) denotes the index of the feature track and its corresponding 3D point.
Coarse Reconstruction.We first strive for the completeness of the initially reconstructed 3D structure. We propose to use the inaccurate yet repeatable coarse correspondences of LoFTR for COLMAP [44] to reconstruct the coarse 3D structure. The coarse correspondences, as shown in Fig. 3 (**1**), can be seen as pixel-wise dense correspondences on downsampled image pairs. Every pixel in the downsampled image can be regarded as a "keypoint" in the original image. Therefore, performing coarse matching can provide repeatable semi-dense correspondences for COLMAP to reconstruct coarse feature tracks \(\{\mathcal{T}_{c}^{j}\}\) and semi-dense point cloud \(\{\mathbf{P}_{c}^{j}\}\), as shown in Fig. 3 (**2**).
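As an illustration of why the coarse matches can act as repeatable "keypoints", the following sketch (ours, not the authors' implementation) assigns every coarse match to its grid cell, so that an image matched against several different partners reuses the same keypoint ids, which COLMAP-style triangulation can then chain into feature tracks. The `matches` dictionary layout and the `stride` value are illustrative assumptions.

```python
import numpy as np

def coarse_matches_to_tracks(matches, stride=8):
    """`matches` maps an image pair (i, j) to an (N, 4) array of coarse
    correspondences (x_i, y_i, x_j, y_j) in pixels. Quantising each point to
    its grid cell yields keypoint ids that are consistent across pairs."""
    keypoints = {}       # image id -> {grid cell: keypoint index}
    pair_matches = {}    # image pair -> list of (kpt_i, kpt_j)

    def kpt_id(img, x, y):
        cell = (int(x) // stride, int(y) // stride)
        cells = keypoints.setdefault(img, {})
        return cells.setdefault(cell, len(cells))

    for (i, j), m in matches.items():
        pair_matches[(i, j)] = [
            (kpt_id(i, xi, yi), kpt_id(j, xj, yj))
            for xi, yi, xj, yj in np.asarray(m)
        ]
    return keypoints, pair_matches
```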
Refinement.Due to the limited accuracy of performing matching on downsampled images, the point cloud from the coarse reconstruction is inaccurate and thus insufficient for the object pose estimation. Therefore, we further refine the object point cloud \(\{\mathbf{P}_{c}^{j}\}\) with sub-pixel correspondences. To achieve this, we first fix the position of one node for each feature track \(\mathcal{T}_{c}^{j}\) and refine other nodes within the track. Then, we use the refined tracks \(\{\mathcal{T}_{f}^{j}\}\) to optimize the \(\{\mathbf{P}_{c}^{j}\}\).
For the refinement of \(\{\mathcal{T}_{c}^{j}\}\), we draw the idea from the fine-level matching module in LoFTR and adapt it to the multi-view scenario. As shown in Fig. 3 (**3**), we first select and fix one node in each \(\mathcal{T}_{c}^{j}\) as the reference node \(\mathbf{u}_{r}\), and then perform fine matching with each of the remaining source nodes \(\tilde{\mathbf{u}}_{s}^{k}\). The fine matching searches within a local region around each \(\tilde{\mathbf{u}}_{s}^{k}\) for a sub-pixel correspondence \(\hat{\mathbf{u}}_{s}^{k}\), so the nodes' locations in the coarse feature track are refined. We denote the refined feature tracks as \(\{\mathcal{T}_{f}^{j}\}\). Details about the selection of reference nodes \(\mathbf{u}_{r}\) are provided in the supplementary material.
We now treat the refined feature tracks \(\{\mathcal{T}_{f}^{j}\}\) as fixed measurements, and optimize the 3D locations of the coarse point cloud \(\{\mathbf{P}_{c}^{j}\}\) using reprojection errors as shown in Fig. 3 (**4**). To accelerate the convergence, inspired by SVO [11], we further decrease the DoF of each \(\mathbf{P}_{c}^{j}\) by only optimizing the depth \(d_{r}\) of each reference node \(\mathbf{u}_{r}\). Specifically, we transform each point \(\mathbf{P}_{c}^{j}\) to the frame of \(\mathbf{u}_{r}\) and use its coordinate of \(z\)-axis to initialize \(d_{r}\). Then we optimize each reference node depth \(d_{r}\) by minimizing the distance between each reprojected location and the refined feature location \(\hat{\mathbf{u}}_{s}^{k}\):
\[d_{r}^{*}=\underset{d_{r}}{\mathrm{argmin}}\sum\limits_{k\in N_{j}-1}\|\hat{ \mathbf{u}}_{s}^{k}-\boldsymbol{\pi}\left(\boldsymbol{\xi}_{r\to s_{k}} \cdot\boldsymbol{\pi}^{-1}(\mathbf{u}_{r},d_{r})\right)\|^{2}. \tag{1}\]
where \(\boldsymbol{\pi}\) is the projection determined by intrinsic camera parameters, and \(\boldsymbol{\xi}_{r\to s_{k}}=\boldsymbol{\xi}_{s_{k}}\cdot\boldsymbol{\xi}_{ r}^{-1}\) is the relative pose between the frame of the reference node and \(k\)-th source node.
Finally, the optimized depth \(d_{r}^{*}\) of each reference node is transformed to the canonical object coordinate to get the refined 3D point \(\mathbf{P}^{j}\). Notably, when applying the proposed system in practical AR applications, we can optimize inaccurate camera poses obtained from ARKit along with the 3D points, i.e., solving a bundle adjustment problem. For the later 2D-3D matching at test time, we calculate and store each 3D point feature by averaging the 2D features of its associated 2D points.
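A sketch of the optimisation in Eq. (1) is given below. For brevity it assumes that all views share the intrinsic matrix `K` and uses `scipy.optimize.least_squares`; the actual implementation instead uses the Levenberg-Marquardt solver in DeepLM (see the implementation details below).

```python
import numpy as np
from scipy.optimize import least_squares

def refine_reference_depth(u_r, d0, u_src, T_r2s, K):
    """Optimise the reference-node depth d_r (Eq. 1).
    u_r: (2,) reference pixel; d0: initial depth from the coarse point;
    u_src: (N, 2) refined source pixels; T_r2s: (N, 4, 4) relative poses from
    the reference frame to each source frame; K: (3, 3) shared intrinsics."""
    K_inv = np.linalg.inv(K)

    def residuals(d):
        X_r = d[0] * K_inv @ np.array([u_r[0], u_r[1], 1.0])  # pi^{-1}(u_r, d)
        res = []
        for T, u_s in zip(T_r2s, u_src):
            X_s = T[:3, :3] @ X_r + T[:3, 3]                   # into source frame
            proj = K @ X_s
            res.append(proj[:2] / proj[2] - u_s)               # reprojection error
        return np.concatenate(res)

    return least_squares(residuals, x0=[d0]).x[0]
```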
Figure 3: **Keypoint-Free SfM. 1.** We first build repeatable coarse semi-dense 2D matches between image pairs. **2.** Then, we feed coarse matches to COLMAP [44] to build a coarse feature track \(\mathcal{T}_{c}^{j}\) and a coarse 3D point \(\mathbf{P}_{c}^{j}\) (**0**). **3.** To refine \(\mathcal{T}_{c}^{j}\), we fix a reference node \(\mathbf{u}_{r}\) (**0**) and search around the local window (\(\square\)) of each source node \(\tilde{\mathbf{u}}_{s}^{k}\) (**0**) for sub-pixel correspondences \(\tilde{\mathbf{u}}_{s}^{k}\) (**0**). **4.** Finally, we optimize the depth \(d_{r}\) of \(\mathbf{u}_{r}\) by minimizing reprojection errors. We back-project \(\mathbf{u}_{r}\) with its refined \(d_{r}\) to the object coordinate to obtain an optimized accurate object point cloud \(\mathbf{P}^{j}\) (**0**).
Note that we store coarse and fine 3D features separately, which are extracted from multi-resolution feature maps of LoFTR's feature backbone.
### Object Pose Estimation
At test time, we establish 2D-3D matches between the object point cloud \(\{\mathbf{P}^{j}\}\) and the query image \(\mathbf{I}_{q}\) to estimate object pose \(\boldsymbol{\xi}_{q}\). Inspired by [47], we first extract hierarchical feature maps of \(\mathbf{I}_{q}\) and then perform matching in a coarse-to-fine manner for efficiency, as illustrated in Fig. 4.
Coarse 2D-3D Matching.We first perform dense matching between the pre-calculated coarse 3D point features \(\tilde{\mathbf{F}}_{3D}\in\mathbb{R}^{N\times C_{c}}\) and the extracted coarse image feature map \(\tilde{\mathbf{F}}_{2D}\in\mathbb{R}^{\frac{H}{8}\times\frac{H}{8}\times C_{c}}\). This phase globally searches for a rough correspondence of each 3D object point in the query image, which also determines whether the 3D point is observable by \(\mathbf{I}_{q}\).
We augment 3D and 2D features \(\{\tilde{\mathbf{F}}_{3D},\;\tilde{\mathbf{F}}_{2D}\}\) with positional encodings to make them position-dependent, thus facilitating their matching. Please refer to the supplementary material for more details. Then we flatten the 2D feature map and apply self- and cross-attention layers \(N_{c}\) times to yield the transformed features \(\{\tilde{\mathbf{F}}_{3D}^{t},\;\tilde{\mathbf{F}}_{2D}^{t}\}\). Linear Attention [19] is used in our model to reduce the computational complexity, following [47]. A score matrix \(\mathrm{S}\) is calculated by the similarity between two sets of features \(\tilde{\mathbf{F}}_{3D}^{t}\) and \(\tilde{\mathbf{F}}_{2D}^{t}\). We then apply the dual-softmax operation [52] on \(\mathrm{S}\) to get the matching probability matrix \(\mathcal{P}^{c}\):
\[\mathcal{P}^{c}(j,q)=\operatorname{softmax}\left(\mathrm{S}\left(j,\cdot\right)\right)_{q}\cdot\operatorname{softmax}\left(\mathrm{S}\left(\cdot,q\right)\right)_{j},\ \text{where}\ \mathrm{S}\left(j,q\right)=\frac{1}{\tau}\left\langle\tilde{\mathbf{F}}_{3D}^{t}(j),\;\tilde{\mathbf{F}}_{2D}^{t}(q)\right\rangle. \tag{2}\]
\(\langle\cdot,\cdot\rangle\) is the inner product, \(\tau\) is a scale factor, and \(j\) and \(q\) denote the indices of a 3D point and a pixel in the flattened query image, respectively. The coarse 2D-3D correspondences \(\mathcal{M}_{3D}^{c}\) are established from \(\mathcal{P}^{c}\) by selecting correspondences above a confidence threshold \(\theta\):
\[\mathcal{M}_{3D}^{c}=\{(j,q)\mid\forall\,(j,q)\in\operatorname{MNN}\left( \mathcal{P}^{c}\right),\;\mathcal{P}^{c}(j,q)\geq\theta\}. \tag{3}\]
\(\operatorname{MNN}\) refers to finding the mutual nearest neighbors. We use this strict criterion to suppress potential false matches.
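A sketch of this coarse 2D-3D matching step (Eqs. 2-3) in PyTorch, using the \(\tau\) and \(\theta\) values reported in the implementation details; the attention layers that produce the transformed features are omitted here.

```python
import torch

def coarse_2d3d_matching(feat_3d, feat_2d, tau=0.08, theta=0.4):
    """feat_3d: (N, C) transformed 3D point features; feat_2d: (HW, C)
    transformed, flattened coarse image features."""
    S = feat_3d @ feat_2d.t() / tau                         # similarity matrix
    P = torch.softmax(S, dim=1) * torch.softmax(S, dim=0)   # dual-softmax
    row_max = P.max(dim=1, keepdim=True).values
    col_max = P.max(dim=0, keepdim=True).values
    mask = (P == row_max) & (P == col_max) & (P >= theta)   # mutual NN + threshold
    idx_3d, idx_2d = mask.nonzero(as_tuple=True)
    return idx_3d, idx_2d, P[idx_3d, idx_2d]
```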
Fine Matching. For a visible 3D point \(\mathbf{P}^{j}\) determined by \(\mathcal{M}_{3D}^{c}\), our fine matching module searches for its sub-pixel 2D correspondence \(\hat{\mathbf{u}}^{q}\) within the local region of its coarse correspondence \(\tilde{\mathbf{u}}^{q}\). Similar to [47], we crop a local window \(W\) with a size of \(w\times w\) around \(\tilde{\mathbf{u}}^{q}\) in the fine feature map \(\hat{\mathbf{F}}_{2D}\in\mathbb{R}^{\frac{H}{2}\times\frac{H}{2}\times C_{f}}\). Then the cropped feature map \(\hat{\mathbf{F}}_{crop}\in\mathbb{R}^{w\times w\times C_{f}}\) and the corresponding 3D fine feature \(\hat{\mathbf{F}}_{3D}(j)\in\mathbb{R}^{C_{f}}\) are transformed by \(N_{f}\) self- and cross-attention layers. We correlate the transformed 3D fine feature vector \(\hat{\mathbf{F}}_{3D}^{t}(j)\) with all elements in the transformed 2D feature \(\hat{\mathbf{F}}_{crop}^{t}\) and apply a softmax to get the probability distribution of its 2D correspondence in the cropped local window:
\[p(\mathbf{u}|j,\hat{\mathbf{F}}_{3D}^{t},\hat{\mathbf{F}}_{crop}^{t})=\frac{ \exp\left(\tilde{\mathbf{F}}_{3D}^{t}(j)^{T}\cdot\tilde{\mathbf{F}}_{crop}^{t }(\mathbf{u})\right)}{\sum_{w\in W}\exp\left(\tilde{\mathbf{F}}_{3D}^{t}(j)^{ T}\cdot\tilde{\mathbf{F}}_{crop}^{t}(\mathbf{u})\right)}. \tag{4}\]
The fine correspondence \(\hat{\mathbf{u}}^{q}\) of \(\mathbf{P}^{j}\) on the query image is then obtained with an expectation:
\[\hat{\mathbf{u}}^{q}=\tilde{\mathbf{u}}^{q}+\sum_{\mathbf{u}\in W}\mathbf{u}\cdot p(\mathbf{u}|j,\hat{\mathbf{F}}_{3D}^{t},\;\hat{\mathbf{F}}_{crop}^{t}). \tag{5}\]
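A sketch of the fine-level expectation (Eqs. 4-5). Here the window coordinates \(\mathbf{u}\) are interpreted as offsets from the window centre on the fine feature map, and the factor converting them to full-image pixels is an assumed fine-map stride of 2; both choices are illustrative.

```python
import torch

def fine_match_expectation(f3d_fine, f_crop, u_coarse, fine_stride=2.0):
    """f3d_fine: (C,) transformed fine 3D feature; f_crop: (w, w, C) transformed
    local window cropped around the coarse match; u_coarse: (2,) coarse match
    location in full-image pixels."""
    w = f_crop.shape[0]
    logits = f_crop.reshape(-1, f_crop.shape[-1]) @ f3d_fine
    prob = torch.softmax(logits, dim=0).reshape(w, w)         # Eq. (4)
    coords = torch.arange(w, dtype=torch.float32) - (w - 1) / 2
    gy, gx = torch.meshgrid(coords, coords, indexing="ij")
    offset = torch.stack([(prob * gx).sum(), (prob * gy).sum()])
    return u_coarse + fine_stride * offset                    # Eq. (5)
```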
After building the 2D-3D correspondences \(\mathcal{M}_{3D}^{f}\) between the query image and the object point cloud, we solve the object pose \(\boldsymbol{\xi}_{q}\) with the Perspective-n-Point (PnP) [22] algorithm and RANSAC [10].
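The final pose can then be recovered, for instance, with OpenCV's PnP + RANSAC (a sketch; the RANSAC reprojection threshold here is an illustrative choice):

```python
import cv2
import numpy as np

def solve_pose(points_3d, points_2d, K):
    """points_3d: (M, 3) matched object points; points_2d: (M, 2) query-image
    locations; K: (3, 3) camera intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K.astype(np.float64), None, reprojectionError=3.0)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from Rodrigues vector
    return ok, R, tvec, inliers
```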
Figure 4: **Object Pose Estimation.** At test time, we first extract multi-scale query image features \(\{\tilde{\mathbf{F}}_{2D},\;\tilde{\mathbf{F}}_{2D}\}\). **Coarse Matching** module transforms coarse 2D and 3D features \(N_{c}\) times with self- and cross-attention modules and then build their coarse 2D-3D correspondences \(\mathcal{M}_{3D}^{c}\). Next, we crop the local window \(\hat{\mathbf{F}}_{crop}\) on the fine feature map around each coarse 2D match. **Fine Matching** module transforms the 3D feature and cropped 2D features and calculates each 2D fine match location \(\hat{\mathbf{u}}^{q}\) with feature correlation and expectation. The object pose \(\boldsymbol{\xi}_{q}\) is then solved using PnP with \(\mathcal{M}_{3D}^{f}\).
Supervision.We jointly train the coarse and fine modules of our 2D-3D matching framework with different supervisions, following [47]. The ground-truth 2D-3D correspondences \(\mathcal{M}^{gt}_{3D}\) and the coarse matching probability matrix \(\mathcal{P}^{c}_{gt}\) are obtained by projecting observable SfM points to the 2D frame with the ground-truth object pose. We optimize the coarse matching module by minimizing the focal loss [30] between the predicted \(\mathcal{P}^{c}\) and \(\mathcal{P}^{c}_{gt}\). For the fine module, we minimize the \(\ell_{2}\) loss between the predicted 2D coordinate \(\hat{\mathbf{u}}^{q}\) and the ground truth \(\hat{\mathbf{u}}^{q}_{gt}\). The total loss is the weighted sum of coarse and fine losses. More details are provided in the supplementary material.
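A compact sketch of this supervision; the focal-loss hyper-parameters and the fine-loss weight below are illustrative, with the exact values left to the supplementary material as stated above.

```python
import torch

def matching_loss(P_coarse, P_gt, u_fine_pred, u_fine_gt,
                  alpha=0.25, gamma=2.0, w_fine=1.0):
    """Focal loss on the coarse matching probabilities against the ground-truth
    assignment, plus an l2 loss on the predicted fine 2D coordinates."""
    pos = P_gt > 0
    eps = 1e-8
    loss_pos = -alpha * (1 - P_coarse[pos]) ** gamma * torch.log(P_coarse[pos] + eps)
    loss_neg = -(1 - alpha) * P_coarse[~pos] ** gamma * torch.log(1 - P_coarse[~pos] + eps)
    loss_coarse = loss_pos.mean() + loss_neg.mean()
    loss_fine = ((u_fine_pred - u_fine_gt) ** 2).sum(dim=-1).mean()
    return loss_coarse + w_fine * loss_fine
```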
### Implementation Details
Keypoint-Free SfM.The COLMAP [44] triangulation is used to construct the coarse 3D structure. In the refinement phase, the local window with a size of \(9\times 9\) is searched for the sub-pixel correspondence of each reference node. The Levenberg-Marquardt algorithm [23] implemented in DeepLM [18] is used to optimize the coarse object point cloud. We use the LoFTR [47] outdoor model pre-trained on MegaDepth [28]. Running time analyses are given in the supplementary material.
Object Pose Estimation.We use ResNet-18 [15] as the image backbone and set \(N_{c}=3,N_{f}=1\) for the 2D-3D attention module. The scale factor \(\tau\) is 0.08, the cropped window size \(w\) in the fine level is 5, and the confidence threshold \(\theta\) is set to 0.4. As for training, the backbone of our model is initialized with the LoFTR outdoor model, and the remaining parts of our model use randomly initialized weights. The entire model is trained on the OnePose training set, and we randomly sample or pad the reconstructed point cloud to \(7000\) points for training. We use the AdamW optimizer with an initial learning rate of \(4\times 10^{-3}\). The network training takes about 20 hours with a batch size of 32 on 8 NVIDIA-V100 GPUs. During testing, we use all reconstructed 3D points (\(\sim 15000\)) for building 2D-3D correspondences, and our 2D-3D matching module takes 88ms for a \(512\times 512\) query image on a single V100 GPU.
## 4 Experiments
### Datasets
OnePose and LINEMOD Datasets.We validate our method on the OnePose [48] and LINEMOD [16] datasets. The OnePose dataset is newly proposed, which contains around 450 real-world video sequences of 150 objects. LINEMOD is a broadly used dataset for object pose estimation. For both datasets, we follow the train-test split in previous methods [48; 29].
OnePose-LowTexture Dataset.Since the original OnePose evaluation set mainly comprises textured objects, we collected an additional test set, named OnePose-LowTexture, to supplement the original OnePose dataset. The proposed dataset is composed of 40 household low-textured objects. For each object, there are two corresponding videos captured with different backgrounds, one as the reference video and the other for testing. Besides, to evaluate and compare our method with CAD-model-based methods, we further obtain high-fidelity 3D models of eight randomly selected objects with a commercial 3D scanner. Some example images are shown in Fig. 5. Please refer to the supplementary material for more details.
Figure 5: Data capture and example images of the proposed OnePose-LowTexture dataset.
### Experiment Settings and Baselines
Baselines.We compare the proposed method with the following baselines in two categories: 1) _One-shot baselines_[48, 33, 40] that hold the same setting as ours. OnePose [48] and HLoc [40] are most relevant to our method in leveraging feature matching for reconstruction and pose estimation. To be specific, we compare with HLoc combined with different feature matching methods including SuperGlue [41] and LoFTR [47]. 2) _Instance-level baselines_[39, 29] that require CAD-models and need to be trained separately for each object. These methods achieve high accuracy through training on many rendered images with extensive data augmentation. We compare our method with them to demonstrate that our method achieves competitive results while not relying on CAD models and eliminating per-object pose estimator training.
Evaluation Protocols.We compare our method with OnePose and HLoc using the same set of reference images. Since HLoc's original retrieval module is designed for the outdoor scenes, we use uniformly sampled \(10\) reference views for 2D-2D matching for pose estimation, following [48]. For the comparison with PVNet [39], we follow its original training setting, which first samples \(8\) keypoints on the object surface and then trains a network using 5000 synthetic images for each object. In contrast, our method only uses around 200 reference images to reconstruct the object point cloud. We evaluate our method and PVNet on the same real-world test sequences, while our matching model has never seen the test objects before. As for the experiments on LINEMOD, we compare our method with OnePose by running their open-source code. Our method and OnePose share the same 2D bounding boxes from an off-the-shelf object detector YOLOv5 [1]. Note that the object detector is trained on real-world images only to provide rough bounding boxes. We use the real training images (\(\sim 180\)) for object reconstruction and all test images for evaluation. The results of other baselines on LINEMOD are from the original papers.
Metrics.We use metrics including the _cm-degree_ pose success rate, the _ADD(S)-0.1d_ average distance with a threshold of \(10\%\) of the object diameter, and the 2D projection error _Proj2D_ with a threshold of 5 pixels. The definitions of these metrics are detailed in the supplementary material.
### Results on the OnePose and OnePose-LowTexture Datasets
Comparison with _One-shot_ Baselines.The _cm-degree_ success rate with different thresholds are used for evaluation. As shown in Tab. 1, our method substantially outperforms OnePose [48] and HLoc [40]. Objects in the OnePose dataset have rich textures, benefiting keypoint detection. Therefore, keypoint-based methods OnePose and HLoc (_SPP+SPG_) perform reasonably well. Our method achieves even higher accuracy thanks to the keypoint-free design, effectively utilizing both texture-rich and low-textured object regions for pose estimation. On the OnePose-LowTexture dataset, our method surpasses OnePose and HLoc by a large margin. This further demonstrates the capability of our keypoint-free design for object reconstruction and the sparse-to-dense 2D-3D matching for object pose estimation. HLoc (_LoFTR\({}^{*}\)_) uses LoFTR coarse matches for SfM and uses full LoFTR to match the query image and its retrieved images for pose estimation. It does not rely on keypoints, similar to our design. Our method significantly outperforms it on accuracy and runs \(\sim 10\times\) faster.
\begin{table}
\begin{tabular}{c||c c c c c c c c|c} \hline
Obj. ID & 0700 & 0706 & 0714 & 0721 & 0727 & 0732 & 0736 & 0740 & Avg. \\ \hline
PVNet & 12.3 & 90.0 & 68.1 & 67.6 & 95.6 & 57.3 & 49.6 & **61.3** & 62.7 \\
Ours & **89.5** & **99.1** & **97.2** & **92.6** & **98.5** & **79.5** & **97.2** & 57.6 & **88.9** \\ \hline
\end{tabular}
\end{table}
Table 2: **Comparison with _Instance-level_ Baseline. Our method is compared with PVNet [39] on objects with CAD models in the OnePose-LowTexture dataset using the _ADD(S)-0.1d_ metric.**

\begin{table}
\begin{tabular}{c||c c c|c c c|c} \hline
Method & \multicolumn{3}{c|}{OnePose dataset} & \multicolumn{3}{c|}{OnePose-LowTexture} & Time (ms) \\ \hline
\end{tabular}
\end{table}
Table 1: **Comparison with _One-shot_ Baselines. Our method is compared with HLoc [40] combined with different feature matching methods and OnePose [48], using the _cm-degree_ pose success rate with different thresholds.**
The improved accuracy and speed come from the accurate point cloud reconstructed by our novel SfM framework and the efficient 2D-3D matching module.
Comparison with _Instance-level_ Baseline PVNet.On the OnePose-LowTexture dataset, the proposed method is compared with PVNet [39] on the subset objects with scanned models. The _ADD(S)-0.1d_ results are presented in Tab. 2. Even though PVNet is trained on a large number (\(\sim 5000\)) of rendered images covering almost all possible views, our method still outperforms it on most objects without additional training. We attribute this to PVNet's susceptibility to domain gaps and our matching module's robustness and generalizability, thanks to its large-scale pre-training.
### Results on LINEMOD
We compare the proposed method with OnePose [48] and Gen6D [33] which are under the _One-shot_ setting, and _Instance-level_ methods PVNet [39] and CDPN [29] on _ADD(S)-0.1d_ and _Proj2D_ metrics. As shown in Tab. 3, our method outperforms existing one-shot baselines significantly and achieves comparable performance with instance-level methods. Notably, our method and OnePose are only trained on the OnePose training set and tested on LINEMOD without additional training.
Since LINEMOD is mainly composed of low-textured objects, our method outperforms OnePose significantly thanks to the keypoint-free design. Gen6D [33] is CAD-model-free and can generalize to unseen objects similar to our method. However, it relies on detecting accurate object bounding boxes for pose initialization, which is hard on LINEMOD because of the poor image quality and slight object occlusion. In contrast, our method only needs rough object detection to reduce possible mismatches, which is more robust to detection error. Moreover, the performance of Gen6D drops significantly without training on a subset of LINEMOD, while our method requires no extra training and achieves much higher accuracy than Gen6D. The experiment demonstrates the superiority of our method over existing methods under the one-shot setting.
Our method has lower or comparable performance with instance-level methods [39, 29], which are trained to fit each object instance, and thus perform well naturally, at the expense of the tedious training for each object. In contrast, our method is grounded in highly generalizable local features and generalizes to unseen objects with comparable performances.
### Ablation Studies
We conduct several experiments on the OnePose dataset and the OnePose-LowTexture dataset to validate the efficacy of the point cloud refinement in the SfM framework and the attention module in our 2D-3D matching module. More ablation studies are detailed in the supplementary material.
Point Cloud Refinement in the Keypoint-Free SfM. We validate the effectiveness of the point cloud refinement from two perspectives, as shown in Tab. 4 (Left). One is evaluating the accuracy of the reconstructed point cloud against the ground-truth object mesh on the OnePose-LowTexture dataset, following the evaluations in [31]; the other is quantifying the impact of reconstruction accuracy on the _cm-degree_ pose success rate. Compared with the coarse point cloud reconstructed with coarse-level LoFTR only, the accuracy of our refined point cloud increases significantly, especially under a strict threshold (\(1mm\)).
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c c c c|c} \hline \hline
**Type** & Name & ape & benchvise & cam & can & cat & driller & duck & eggbox\({}^{*}\) & glue\({}^{*}\) & holepuncher & iron & lamp & phone & Avg. \\ \hline
\multicolumn{16}{c}{_ADD(S)-0.1d_} \\ \hline
\multirow{2}{*}{**Instance-level**} & CDPN & 6.73 & 98.8 & 92.8 & 96.6 & 86.6 & 95.1 & 75.2 & 99.6 & 99.6 & 89.7 & 9.9 & 97.8 & 80.7 & 91.4 \\
 & PVNet & 43.6 & 99.9 & 86.9 & 95.5 & 79.3 & 96.4 & 52.6 & 99.2 & 95.7 & 81.9 & 98.9 & 99.3 & 92.4 & 86.3 \\ \hline
\multirow{4}{*}{**One-shot**} & Gen6D\({}^{\dagger}\) & - & 62.1 & 45.6 & - & 40.9 & 48.8 & 16.2 & - & - & - & - & - & - & - \\
 & Gen6D & - & 77.0 & 66.1 & - & 60.7 & 67.4 & - & 95.7 & **87.2** & - & - & - & - & - \\
 & OnePose & 11.8 & 92.6 & 88.1 & 77.2 & 47.9 & 74.5 & 34.2 & 71.3 & 37.5 & 54.9 & 89.2 & 87.6 & 60.6 & 63.6 \\
 & Ours & **31.2** & **97.3** & **88.0** & **89.8** & **70.4** & **92.5** & **42.3** & **99.7** & 48.0 & **60.7** & **97.4** & **97.8** & **76.0** & **76.9** \\ \hline
\multicolumn{16}{c}{_Proj2D_} \\ \hline
\multirow{2}{*}{**Instance-level**} & CDPN & 97.5 & 98.8 & 98.6 & 99.6 & 99.3 & 94.9 & 98.4 & 99.1 & 98.4 & 99.5 & 97.9 & 97.3 & 96.8 & 98.0 \\
 & PVNet & 99.2 & 99.8 & 99.9 & 99.9 & 99.3 & 96.9 & 98.0 & 99.3 & 98.5 & 100.0 & 99.2 & 98.3 & 99.4 & 99.0 \\ \hline
\multirow{2}{*}{**One-shot**} & OnePose & 35.2 & 94.4 & 96.8 & 87.4 & 77.2 & 76.0 & 73.0 & 89.9 & **55.1** & 79.1 & 92.4 & 88.9 & 69.4 & 78.1 \\
 & Ours & **97.3** & **99.6** & **99.2** & **98.7** & **93.1** & **97.7** & **98.7** & 51.8 & **98.6** & **98.9** & **98.8** & **94.5** & **94.3** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **Results on LINEMOD. Our method is compared with _Instance-level_ and _One-shot_ baselines. Note that Gen6D is fine-tuned on a selected subset of objects and uses the rest for testing. Gen6D\({}^{\dagger}\) is the version without fine-tuning on LINEMOD. Symmetric objects are indicated by \({}^{*}\).**
Moreover, using the refined point cloud for object pose estimation brings around \(7\%\) improvement on the strict _1cm-1deg_ metric. These experiments demonstrate that the point cloud refinement improves the reconstructed point clouds' precision, thus benefiting the pose estimation.
Attention module in the Pose Estimation Network.We validate the attention design in our matching network quantitatively and qualitatively. It is shown in Tab. 4 (Right) that compared with directly matching the backbone features, using the transformed features for 2D-3D correspondences obtains \(15\%\) improvement on the _5cm-5deg_ metric on OnePose-LowTexture dataset. The visualization of features in Fig. 6 shows that the transformed 2D and 3D features become more discriminative for establishing correspondences. The ablation study demonstrates that the attention module provides the global receptive field and plays a critical role in the pose estimation of low-textured objects.
## 5 Conclusion
We propose a keypoint-free SfM and pose estimation pipeline that enables pose estimation of both texture-rich and low-textured objects under the one-shot CAD-model-free setting. Our method can efficiently reconstruct accurate and complete 3D structures of low-textured objects and build robust 2D-3D correspondences with the test image for accurate object pose estimation. The experiments show that our method achieves significantly better pose estimation accuracy compared with existing CAD-model-free methods, and even achieves comparable results with CAD-model-based instance-level methods. Although we do not see the immediate negative societal impact of our work, we do note that accurate object pose estimation can be potentially used for malicious purposes.
Limitations.Being dependent on local feature matching, our method inherently suffers from very low-resolution images and extreme scale and viewpoint changes. In the current pipeline, we still need a separate object detector to provide rough regions of interest. In the future, we envision a more tight integration with the object detector, where object detection can also be carried out through local feature matching.
Acknowledgements.The authors would like to acknowledge the support from the National Key Research and Development Program of China (No. 2020AAA0108901), NSFC (No. 62172364), the ZJU-SenseTime Joint Lab of 3D Vision, and the Information Technology Center and State Key Lab of CAD&CG, Zhejiang University.
\begin{table}
\begin{tabular}{c|c c c|c c c||c|c c c} \hline \hline
 & \multicolumn{3}{c|}{Point Cloud Accuracy} & \multicolumn{3}{c||}{Pose Success Rate on OnePose Dataset} & & \multicolumn{3}{c}{Pose Success Rate on OnePose-LowTexture} \\
 & 1mm & 3mm & 5mm & 1cm-1deg & 3cm-3deg & 5cm-5deg & & 1cm-1deg & 3cm-3deg & 5cm-5deg \\ \hline
w/o refine. & 26.0 & 71.2 & 85.7 & 43.9 & 78.3 & 85.9 & w/o attention & 12.2 & 40.7 & 55.3 \\
w/ refine. & **30.9** & **75.8** & **87.7** & **51.1** & **80.8** & **87.7** & w/ attention & **16.8** & **57.7** & **72.1** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: **Ablation Studies.** We quantitatively validate the effectiveness of the point cloud refinement in the keypoint-free SfM and the attention module in the 2D-3D matching network, using the point cloud accuracy metric and _cm-degree_ pose success rate with different thresholds.
Figure 6: **Qualitative Results** showing the reconstructed semi-dense object point clouds and the estimated object poses. The ablation part visualizes the 2D and 3D features before and after our 2D-3D attention module. Features become more discriminative as shown by the color contrast. |
2310.05725 | Post-hoc Bias Scoring Is Optimal For Fair Classification | We consider a binary classification problem under group fairness constraints, which can be one of Demographic Parity (DP), Equalized Opportunity (EOp), or Equalized Odds (EO). We propose an explicit characterization of Bayes optimal classifier under the fairness constraints, which turns out to be a simple modification rule of the unconstrained classifier. Namely, we introduce a novel instance-level measure of bias, which we call bias score, and the modification rule is a simple linear rule on top of the finite amount of bias scores. Based on this characterization, we develop a post-hoc approach that allows us to adapt to fairness constraints while maintaining high accuracy. In the case of DP and EOp constraints, the modification rule is thresholding a single bias score, while in the case of EO constraints we are required to fit a linear modification rule with 2 parameters. The method can also be applied for composite group-fairness criteria, such as ones involving several sensitive attributes. | Wenlong Chen, Yegor Klochkov, Yang Liu | 2023-10-09T13:54:08Z | http://arxiv.org/abs/2310.05725v3 | # Post-Hoc Bias Scoring is Optimal for Fair Classification
###### Abstract
We consider a binary classification problem under group fairness constraints, which can be one of Demographic Parity (DP), Equalized Opportunity (EOp), or Equalized Odds (EO). We propose an explicit characterization of Bayes optimal classifier under the fairness constraints, which turns out to be a simple modification rule of the unconstrained classifier. Namely, we introduce a novel instance-level measure of bias, which we call _bias score_, and the modification rule is a simple linear rule on top of the finite amount of bias scores. Based on this characterization, we develop a _post-hoc_ approach that allows us to adapt to fairness constraints while maintaining high accuracy. In the case of DP and EOp constraints, the modification rule is thresholding a single bias score, while in the case of EO constraints we are required to fit a linear modification rule with 2 parameters. The method can also be applied for composite group-fairness criteria, such as ones involving several sensitive attributes. We achieve competitive or better performance compared to both _in-processing_ and _post-processing_ methods across three datasets: Adult, COMPAS, and CelebA. Unlike most _post-processing_ methods, we do not require access to sensitive attributes during the inference time.
## 1 Introduction
Significant improvements have been made in classification tasks using machine learning (ML) algorithms. With ML algorithms being deployed in more and more decision-making applications, it is crucial to ensure fairness in their predictions. Although the debate on what fairness is and how to measure it is ongoing (Caton and Haas, 2023), group fairness measures are often utilized in practice due to the simplicity of their verification (Chouldechova, 2017; Hardt et al., 2016), since they conform to the intuition that predictions should not be biased toward a specific group of the population. In practice, it is desirable to train classifiers satisfying these group fairness constraints while maintaining high accuracy.
Training classifiers that maintain competitive accuracy while satisfying group fairness constraints remains a challenging problem, and it often requires intervention during the training time. A popular approach (Zafar et al., 2017, 2019) suggests relaxing these constraints of discrete nature to score-based differentiable constraints, thus presenting the possibility of using gradient-based optimization methods. This approach is very flexible and can be used in a broad set of applications (Donini et al., 2018; Cotter et al., 2019; Rezaei et al., 2021; Wang et al., 2021; Zhu et al., 2023). Another popular method suggests dynamically reweighting observations during training (Agarwal et al., 2018). In vision tasks, researchers propose to use more sophisticated techniques, such as synthetic image generation (Ramaswamy et al., 2021) and contrastive learning (Park et al., 2022).
Another set of methods proposes to modify unconstrained classifiers in a _post-hoc_ manner (Hardt et al., 2016; Jiang et al., 2019; Jang et al., 2022). Unlike the _in-processing_ methods, these methods allow one to adapt to fairness constraints after the model is trained. These modifications are much cheaper and more feasible in industrial settings, where very large datasets are utilized and complicated algorithms are used to train the target classifier. However, the existing solutions typically require knowledge of the sensitive attribute during the inference time. For instance, one of the solutions that Hardt et al. (2016) propose modifies a score-based classifier in the form \(\hat{Y}(X)=\mathbf{1}\{R(X)>t\}\) to a group-specific thresholding rule \(\hat{Y}(X,A)=\mathbf{1}\{R(X)>t_{A}\}\). Similar approaches are also taken by Jiang et al. (2019); Jang et al. (2022). This is impractical for real-world applications where sensitive attributes during inference are inaccessible due to privacy protection.
Most of the existing methods aim at debiasing a classifier, whether with an _in-processing_ or _post-processing_ method. We ask a more general question: how can we flexibly adjust a classifier to achieve the best accuracy for a given level of fairness? For a binary classification problem, Menon & Williamson (2018) study this question from a theoretical perspective: assuming one knows the ground truth distribution \(p(Y,A|X)\), they derive the Bayes-optimal classifier satisfying fairness constraints. Unfortunately, Menon & Williamson (2018) only covers two cases of fairness measures: Demographic Parity and Equalized Opportunity. In this paper, we close this gap and derive the Bayes-optimal classifier for general group fairness metrics, which include the case of Equalized Odds. Our analysis also allows using composite fairness criteria that involve more than one sensitive attribute at the same time, an analysis that is highly non-trivial.
We interpret our solution as a modification of the (unconstrained) Bayes optimal classifier based on a few values that we term _"bias scores"_, which in turn can be thought of as a measure of bias on the instance level. For instance, think of reducing the gender gap in university admissions. Bhattacharya et al. (2017) show that such gap reduction typically happens at the expense of applicants with borderline academic abilities. In terms of classification (passed/not passed), this corresponds to the group where we are least certain in the evaluation of one's academic abilities. This suggests that the evaluation of bias on the instance level should not only take into account the prediction and group membership, but also the uncertainty in the prediction of the target value. Our _bias score_ not only conforms to this logic, but thanks to being part of the Bayes optimal classifier, it is also theoretically principled. In particular, for the case of Demographic Parity constraints, we show that the optimal constrained classifier can be obtained by modifying the output of the unconstrained classifier on instances with the largest bias score. When Equalized Odds constraints are imposed, or more generally a composite criterion, the optimal modification is a linear rule with two or more bias scores.
Based on our characterization of the optimal classifier, we develop a practical procedure to adapt any score-based classifier to fairness constraints. In Section 4, we show various experiments across three benchmarks: Adult, COMPAS, and CelebA. Surprisingly, we are able to achieve better performance than the in-processing methods, despite only being able to adapt to the group-fairness constraints after training. We also provide competitive results when compared to post-processing methods (Hardt et al., 2016; Jiang et al., 2019), which require knowledge of the sensitive attribute during inference.
We summarize our contributions as follows:
* We characterize the Bayes optimal classifier under group fairness constraints, which generalizes Menon & Williamson (2018) in the sense that Menon & Williamson (2018) can be viewed as a special case in our framework, where the constraint is only a single fairness criterion (e.g. Demographic Parity). Nevertheless, our formulation is more convenient and intuitive thanks to the interpretable _bias score_, which captures both predictive uncertainty and inference of the sensitive group on the instance level.
* Moreover, our characterization can further allow a composite fairness criterion (e.g. Equalized Odds) as constraint, which has not been established before to our knowledge.
* Based on this characterization, we propose a post-processing method that can flexibly adjust the trade-off between accuracy and fairness and does not require access to test sensitive attributes. Empirically, our method achieves competitive or better performance compared with baselines.
### Preliminaries
In this work, we consider binary classification, which consists of many practical applications that motivate machine fairness research (Caton & Haas, 2023). We want to construct a classifier \(\hat{Y}=\hat{Y}(X)\) for a target variable \(Y\in\{0,1\}\) based on the input \(X\). Apart from the accuracy of a classifier, we are concerned with fairness measurement, given that there is a sensitive attribute \(A\), with some underlying population distribution over the triplets \((X,Y,A)\sim\Pr\) in mind. We assume that the sensitive attribute is binary as well. We generally focus on three popular group-fairness criteria:
* **Demographic Parity (DP)**(Chouldechova, 2017) is concerned with equalizing the probability of a positive classifier output in each sensitive group, \[DP(\hat{Y};A)=\left|\Pr(\hat{Y}=1|\;A=0)-\Pr(\hat{Y}=1|A=1)\right|.\] (1)
* **Equalized Odds (EO)** was introduced in Hardt et al. (2016a). Unlike DP which suffers from explicit trade-off between fairness and accuracy in the case where \(Y\) and \(A\) are correlated, this criterion is concerned with equalizing the false positive and true positive rates in each sensitive group, \[EO(\hat{Y};A)=\max_{y=0,1}\left|\Pr(\hat{Y}=1|A=0,Y=y)-\Pr(\hat{Y}=1|A=1,Y=y) \right|.\] (2)
* **Equality of Opportunity (EOp)** was also introduced by Hardt et al. (2016a), and it measures the disparity only between true positive rates in the two sensitive groups, \[EOp(\hat{Y};A)=\left|\Pr(\hat{Y}=1|A=0,Y=1)-\Pr(\hat{Y}=1|A=1,Y=1)\right|.\] (3)
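In practice, these criteria are evaluated empirically on a finite sample of predictions, labels, and sensitive attributes. A minimal NumPy sketch of such empirical estimates is given below; the function names are purely illustrative and all inputs are assumed to be 0/1 arrays.

```python
import numpy as np

def demographic_parity(y_hat, a):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| estimated from a sample."""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def equal_opportunity(y_hat, y, a):
    """Disparity of true positive rates between the two sensitive groups."""
    return abs(y_hat[(a == 0) & (y == 1)].mean() - y_hat[(a == 1) & (y == 1)].mean())

def equalized_odds(y_hat, y, a):
    """Maximum over y = 0, 1 of the disparity of P(Yhat=1 | A, Y=y)."""
    return max(abs(y_hat[(a == 0) & (y == v)].mean() - y_hat[(a == 1) & (y == v)].mean())
               for v in (0, 1))
```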
## 2 Bayes optimal fairness-constrained classifier
For a given distribution \((X,Y)\sim\Pr\), the Bayes optimal unconstrained classifier has the form \(\hat{Y}(X)=\mathbf{1}\{p(Y=1|X)>0.5\}\) in the sense that it achieves maximal accuracy. Although it is not generally possible to know these ground-truth conditional probabilities \(p(Y=1|X)\) in practice, such a characterization allows one to train probabilistic classifiers, typically with the cross-entropy loss. Here we ask what the optimal classifier is when group fairness restrictions are imposed. Namely, which classifier \(\hat{Y}(X)\) maximizes the accuracy \(Acc(\hat{Y})=\Pr(\hat{Y}=Y)\) under the restriction that a particular group fairness measure is below a given level \(\delta>0\).
We are interested in either DP, EOp, or EO constraints, as described in the previous section. For the sake of generality, we consider the following case of _composite criterion_. Suppose, we have several sensitive attributes \(A_{1},\ldots,A_{K}\) and for each of them we fix values \(a_{k},b_{k}\), so that we need to equalize the groups \(\{A_{k}=a_{k}\}\) and \(\{A_{k}=b_{k}\}\). In other words, our goal is to minimize a _composite criterion_ represented by a maximum over a set of disparities,
\[CC(\hat{Y})=\max_{j=1,\ldots,K}\left|\Pr(\hat{Y}=1|A_{k}=a_{k})-\Pr(\hat{Y}=1| A_{k}=b_{k})\right|, \tag{4}\]
This general case covers DP, EOp, and EO, as well as composite criteria involving more than one sensitive attribute. Let us give a few examples.
**Example 1** (Demographic Parity).: _For the case of DP (see Eq. 1), it is straightforward: take \(A_{1}=A\in\{0,1\}\), \(a_{1}=0,b_{1}=1\), and then with \(K=1\), \(CC(\hat{Y})=DP(\hat{Y})\)._
**Example 2** (Equalized Opportunity and Equalized Odds).: _Suppose we have a sensitive attribute \(A\in\{0,1\}\), then the Equalized Opportunity criterion (Eq. 3) can be written in the form of Eq. 4 with \(A_{1}=(A,Y)\), \(a_{1}=(0,1)\), and \(b_{1}=(1,1)\)._
_For the Equalized Odds, we can write it as a composite criterion with \(K=2\) by setting \(A_{1}=A_{2}=(A,Y)\), and setting \(a_{1}=(0,0)\), \(b_{1}=(1,0)\), \(a_{2}=(0,1)\), \(b_{2}=(1,1)\) in Eq. 4._
**Example 3** (Two and more sensitive attributes).: _We could be concerned with fairness with respect to two sensitive attributes \(A,B\) simultaneously (for instance, gender and race). In this case, we want to minimize the maximum of two Demographic Parities, which looks as follows,_
\[\max\{\left|\Pr(\hat{Y}=1|A=0)-\Pr(\hat{Y}=1|A=1)\right|,\left|\Pr(\hat{Y}=1|B= 0)-\Pr(\hat{Y}=1|B=1)\right|\}.\]
_If we have three DPs, we will have \(K=3\) in Eq. 4; if we are interested in a maximum over EO's for two different sensitive attributes, we would have \(K=4\), etc._
Given a specified fairness level \(\delta>0\), we want to find the optimal classifier \(\hat{Y}(X)\), possibly randomized, that is optimal under the composite criterion constraints
\[\max\qquad Acc(\hat{Y})=\Pr(\hat{Y}=Y)\qquad\text{s.t.}\qquad CC(\hat{Y})\leq\delta. \tag{5}\]
We will search for the solution in the form of a modification of the Bayes optimal unconstrained classifier. Recall that in our notation, \(\hat{Y}=\hat{Y}(X)\) denotes the Bayes optimal unconstrained classifier \(\mathbf{1}\{p(1|X)>0.5\}\). We "reparametrize" the problem by setting \(\kappa(X)=\Pr(\bar{Y}\neq\hat{Y}|X)\) as the target function, where \(\bar{Y}\) denotes the modified classifier. In other words, given an arbitrary function \(\kappa(X)\in[0,1]\), we can define the modification \(\bar{Y}(X)\) of \(\hat{Y}(X)\) by drawing \(Z\sim Be(\kappa(X))\) and outputting
\[\bar{Y}=\begin{cases}\qquad\hat{Y},&Z=0\\ 1-\hat{Y},&Z=1\end{cases}\]
We call such a function \(\kappa(X)\) a _modification rule_. With this reparameterization, the accuracy of the modified classifier \(\bar{Y}\) can be rewritten as
\[Acc(\bar{Y})=Acc(\hat{Y})-\int\eta(X)\kappa(X)d\Pr(X),\]
where \(\eta(X)=2p(Y=\hat{Y}|X)-1\) represents the confidence of the Bayes optimal unconstrained classifier \(\hat{Y}\) on the instance \(X\) (see Section A.1 for a detailed derivation). A similar representation holds for the value of the composite criterion. Specifically, recall the criterion is of the form \(CC(\hat{Y})=\max_{k\leq K}|C_{k}(\hat{Y})|\), where
\[C_{k}(\hat{Y})=\Pr(\hat{Y}=1|A_{k}=a_{k})-\Pr(\hat{Y}=1|A_{k}=b_{k})\,.\]
We can rewrite it as
\[C_{k}(\bar{Y}) =C_{k}(\hat{Y})-\int f_{k}(X)\kappa(X)d\Pr(X), \tag{6}\] \[f_{k}(X) :=(2\hat{Y}-1)\left[\frac{p(A_{k}=a_{k}|X)}{\Pr(A_{k}=a_{k})}- \frac{p(A_{k}=b_{k}|X)}{\Pr(A_{k}=b_{k})}\right].\]
These two expressions suggest that modifying the answer on a point with low confidence \(\eta(X)\) incurs the smallest loss in accuracy, while modifying the answers with a higher absolute value of \(f_{k}(X)\) has the largest effect on the parity values, although the direction of the effect depends on the sign. This motivates us to define the modification rule on the relative score,
\[\text{(Instance-Level Bias Score):}\qquad s_{k}(X)=\frac{f_{k}(X)}{\eta(X)}, \tag{7}\]
which we refer to as _bias score_. It turns out that the optimal modification rule, i.e. one corresponding to the constrained optimal classifier in Eq. 5, is a simple linear rule with respect to the given \(K\) bias scores. We show this rigorously in the following theorem. We postpone the proof to the appendix, Section A.1.
**Theorem 1**.: _Suppose that all functions \(f_{k},\eta\) are square-integrable and the scores \(s_{k}(X)=f_{k}(X)/\eta(X)\) have joint continuous distribution. Then, for any \(\delta>0\), there is an optimal solution defined in Eq. 5 that is obtained with a modification rule of the form,_
\[\kappa(X)=\mathbf{1}\left\{\sum_{k}z_{k}s_{k}(X)>1\right\}. \tag{8}\]
This result suggests that for the case of DP, EOp, or EO, given the ground-truth probabilities \(p(Y,A|X)\), we only need to fit \(1\) parameter for either Demographic Parity or Equalized Opportunity, which essentially corresponds to finding a threshold, and fit a linear rule in two dimensions for Equalized Odds. Below we consider each of the three fairness measures in detail.
Demographic Parity.In the case of DP constraint, we have a single bias score of the form,
\[s(X)=\frac{1}{\eta(X)}(2\hat{Y}-1)\left[\frac{p(A=0|X)}{\Pr(A=0)}-\frac{p(A=1|X)}{ \Pr(A=1)}\right], \tag{9}\]
and since there is only one score, the modification rule is a simple threshold rule of this bias score \(\kappa(X)=\mathbf{1}\{s(X)/t>1\}\). We note that \(t\) can be positive or negative, depending on which group has the advantage, see Section A.1. This allows one to make a linear comparison of fairness on the instance level. That is, departing from a fairness measure defined on a group level, we derive a bias score that measures fairness on each separate instance. For example, in the context of university admissions (Bhattacharya et al., 2017), our bias score conforms with the following logic: it is more fair to admit a student who has high academic performance (higher \(\eta(X)\)) than one who has borderline performance (lower \(\eta(X)\)) even though they are both equally likely to come from the advantageous group (same \(f(X)\)). We note that the problem of measuring fairness and bias on the instance level has recently started to attract attention, see Wang et al. (2022); Yao and Liu (2023).
Equality of Opportunity.Here we also have the advantage of having a simple threshold rule, corresponding to the score function,
\[s(X)=\frac{1}{\eta(X)}(2\hat{Y}-1)\left[\frac{p(A=0,Y=1|X)}{\Pr(A=0,Y=1)}- \frac{p(A=1,Y=1|X)}{\Pr(A=1,Y=1)}\right]\,.\]
**Remark 2.1** (Comparison to group-aware thresholding).: _Let us consider the case where there is a one-to-one correspondence \(A=A(X)\). This is equivalent to the case of an observed sensitive attribute, and it is trivial to check that our method turns into a group-aware thresholding and becomes oblivious (Hardt et al., 2016a; Jang et al., 2022) (i.e., the flipping rule no longer depends on the interpretation of individual features \(X\)). Indeed, in such a case we have \(p(A=a,Y=1|X)=p(Y=1|X)\mathbf{1}\{A(X)=a\}\), therefore \(s(X)=\frac{p(Y=1|X)}{2p(Y=1|X)-1}\left[a_{0}\mathbf{1}\{A(X)=0\}+a_{1}\mathbf{ 1}\{A(X)=1\}\right]\), where \(a_{0}=1/\Pr(A=0,Y=1)>0\) and \(a_{1}=-1/\Pr(A=1,Y=1)<0\). Then for a given \(t_{\delta}\), the final decision rule turns into \(\hat{Y}(X,A)=\mathbf{1}\{p(Y=1|X)>t_{A}\}\), where \(t_{A}=0.5+a_{A}/(4t_{\delta}-2a_{A})\)._
Equalized Odds.Let us consider the case of optimizing under Equalized Odds constraint in detail. In this case, we need to know the ground-truth conditional probabilities \(p(Y,A|X)\), and we obtain two scores for \(k=0,1\),
\[s_{k}(X)=\frac{1}{\eta(X)}\{2\hat{Y}-1\}\left[\frac{p(Y=k,A=0|X)}{\Pr(Y=k,A=0 )}-\frac{p(Y=k,A=1|X)}{\Pr(Y=k,A=1)}\right] \tag{10}\]
Our goal is then to find a linear rule in the bias embedding space \((s_{0}(X),s_{1}(X))\), which on validation, achieves the targeted equalized odds, while maximizing the accuracy. Notice that here the problem is no longer a simple threshold choice as in the case of DP-constrained classifier. We still need to fit a fairness-constrained classifier, only we have dramatically reduced the complexity of the problem to dimension \(K=2\), and we only have to fit a linear classifier.
We demonstrate the modification rule in the case of EO constraints with the following synthetic data borrowed from Zafar et al. (2019) (Section 5.1.2) as example:
\[\begin{split} p(X|Y=1,A=0)&=\mathcal{N}([2,0],[5,1 ;1,5]),\qquad\ p(X|Y=1,A=1)=\mathcal{N}([2,3],[5,1;1,5]),\\ p(X|Y=0,A=0)&=\mathcal{N}([-1,-3],[5,1;1,5]),\quad p (X|Y=0,A=1)=\mathcal{N}([-1,0],[5,1;1,5]).\end{split} \tag{11}\]
We sample \(500\), \(100\), \(100\), \(500\) points from each of the groups \((Y,A)=(1,0)\), \((1,1)\), \((0,0)\), \((0,1)\), respectively, so that \(Y\) and \(A\) are correlated. Next, we fit a logistic linear regression with \(4\) classes to estimate \(p(Y,A|X)\) and calculate the scores according to the formulas in Eq. 10. In Figure 1(a), we show the scatter plot of the scores \((s_{0}(X),s_{1}(X))\), with the corresponding group marked by different colors. Figures 1(b)-1(c) show the optimal flipping rule, with color encoding \(\kappa(X)\) evaluated with the discretized version of the linear program, while the red line approximately shows the optimal linear separation plane. We observe that some of the points that we had to flip for the restriction \(EO\leq\delta=0.15\) are unflipped back when the restriction is tightened to \(EO\leq\delta=0.01\). It indicates that, unlike in the case of the DP restriction, there is no unique score measure that can quantify how fair the decision made by a trained classifier is.
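The construction above can be reproduced in a few lines: sample the four Gaussian groups of Eq. 11, fit a four-class logistic regression as a stand-in for \(p(Y,A|X)\), and evaluate the two bias scores of Eq. 10. The sketch below uses scikit-learn; the sample sizes and class encoding follow the text, while the seed, solver settings, and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cov = np.array([[5.0, 1.0], [1.0, 5.0]])
groups = {  # (Y, A) -> (mean, number of samples), as in Eq. 11
    (1, 0): ([2, 0], 500), (1, 1): ([2, 3], 100),
    (0, 0): ([-1, -3], 100), (0, 1): ([-1, 0], 500),
}
X, Y, A = [], [], []
for (y, a), (mu, n) in groups.items():
    X.append(rng.multivariate_normal(mu, cov, size=n))
    Y += [y] * n
    A += [a] * n
X, Y, A = np.vstack(X), np.array(Y), np.array(A)

# Four-class problem over the Cartesian product (Y, A); class index = 2*y + a.
joint = LogisticRegression(max_iter=1000).fit(X, 2 * Y + A)
P = joint.predict_proba(X)               # columns: (0,0), (0,1), (1,0), (1,1)
p_y1 = P[:, 2] + P[:, 3]                 # p(Y=1 | X)
y_hat = (p_y1 > 0.5).astype(int)         # unconstrained classifier
eta = np.abs(2 * p_y1 - 1) + 1e-12       # confidence eta(X), guarded against zero

pr = {(y, a): np.mean((Y == y) & (A == a)) for y in (0, 1) for a in (0, 1)}
sign = 2 * y_hat - 1
s0 = sign * (P[:, 0] / pr[(0, 0)] - P[:, 1] / pr[(0, 1)]) / eta   # bias score for Y=0
s1 = sign * (P[:, 2] / pr[(1, 0)] - P[:, 3] / pr[(1, 1)]) / eta   # bias score for Y=1
```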
## 3 Methodology
In practice, especially for deep learning models, unconstrained classifiers are usually of the form \(\hat{Y}=\mathbf{1}\{\hat{p}(Y|X)>0.5\}\), with the conditional probability trained using the cross-entropy loss. Our characterization of the optimal modification rule naturally suggests a practical post-processing algorithm that takes fairness restriction into account: assume that we are given an _auxiliary_ model for either \(\hat{p}(A|X)\) (in the case of DP constraints) or \(\hat{p}(Y,A|X)\) (in the case of EOp and EO constraints). We then treat these estimated conditionals as ground-truth conditional distributions, plugging them into Eq. 7 to compute the bias scores, and modify the prediction \(\hat{Y}\) correspondingly with a linear rule over these bias scores. We propose to fit the linear modification rule using a labeled validation set. We call this approach _Modification with Bias Scores_ (MBS) and it does not require knowing test set sensitive attribute since the bias scores are computed based on the estimated conditional distributions related to sensitive attribute (\(\hat{p}(A|X)\) or \(\hat{p}(Y,A|X)\)) instead of the empirical observations of sensitive attribute. Here we demonstrate the algorithms in detail for the two cases where DP and EO are the fairness criteria.
Post-processing algorithm with DP constraints.In this case, we assume that we have two models \(\hat{p}(Y|X)\) and \(\hat{p}(A|X)\) (which in the experiments are fitted over the training set, but can be provided by a third party as well) to estimate the ground truth \(p(Y|X)\) and \(p(A|X)\) respectively. We then define the bias score as follows:
\[\hat{s}(X)=\frac{\hat{f}(X)}{\hat{\eta}(X)}=\frac{\{2\hat{Y}(X)-1\}\left[ \frac{\hat{p}(A=0|X)}{\Pr(A=0)}-\frac{\hat{p}(A=1|X)}{\Pr(A=1)}\right]}{2\hat {p}(Y=\hat{Y}(X)|X)-1}, \tag{12}\]
where \(\widehat{\Pr}(A=i)\) (\(i=0,1\)) can be estimated by computing the ratio of the corresponding group in the training set. We search for the modification rule of the form \(\kappa(X)=\mathbf{1}\{\hat{s}(X)/t>1\}\), so that the resulting \(\hat{Y}_{t}(X)=\mathbf{1}\{\hat{s}(X)/t\leq 1\}\hat{Y}(X)+\mathbf{1}\{\hat{s}(X)/ t>1\}(1-\hat{Y}(X))\) satisfies the DP constraint, while maximizing the accuracy. For this, we assume that we are provided with a labeled validation dataset \(\{(X_{i},Y_{i},A_{i})\}_{i=1}^{N_{val}}\) and we choose the threshold \(t\) such that the validation accuracy is maximized, while the empirical DP evaluated on it is \(\leq\delta\). To find the best threshold value, we simply need to go through all \(N_{val}\) candidates \(t=\hat{s}(X_{i})\), which can be done in \(O(N_{val}\log N_{val})\) time. See detailed description in Algorithm 1 in the appendix, Section B.1.
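As a concrete illustration of this procedure, the sketch below computes the bias scores of Eq. 12 from the two estimated models and then selects the threshold on a labeled validation set. For readability it re-evaluates accuracy and DP for every candidate threshold, which is quadratic in \(N_{val}\), whereas Algorithm 1 attains \(O(N_{val}\log N_{val})\) by sorting the scores and updating the statistics incrementally; the function and variable names are illustrative.

```python
import numpy as np

def dp_bias_scores(p_y1, p_a1, pr_a1):
    """Eq. (12): bias scores from estimated p(Y=1|X), p(A=1|X) and the prior Pr(A=1)."""
    y_hat = (p_y1 > 0.5).astype(int)
    eta = np.abs(2 * p_y1 - 1) + 1e-12       # confidence of the unconstrained classifier
    f = (2 * y_hat - 1) * ((1 - p_a1) / (1 - pr_a1) - p_a1 / pr_a1)
    return f / eta, y_hat

def fit_dp_threshold(s, y_hat, y_val, a_val, delta):
    """Pick t maximizing validation accuracy subject to empirical DP <= delta."""
    best_t, best_acc = np.inf, -1.0          # t = inf corresponds to "no modification"
    for t in np.concatenate(([np.inf], s)):  # candidate thresholds t = s(X_i)
        flip = s / t > 1 if t != 0 else np.zeros_like(s, dtype=bool)
        y_mod = np.where(flip, 1 - y_hat, y_hat)
        dp = abs(y_mod[a_val == 0].mean() - y_mod[a_val == 1].mean())
        acc = (y_mod == y_val).mean()
        if dp <= delta and acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

At test time only \(\hat{s}(X)\) is needed to apply the selected threshold, so no sensitive attribute is required during inference.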
Post-processing algorithm with EO constraints.In this case, we require an auxiliary model \(\hat{p}(Y,A|X)\) with four classes. This allows us to obtain the 2D estimated bias score \((\hat{s}_{0}(X),\hat{s}_{1}(X))\)
Figure 1: (a) Scatter plot of the scores for synthetic distribution Eq. 11. (b) Separation plane for the optimal flipping rule \(\kappa\) corresponding to \(EO\leq\delta=0.15\) and (c) \(\delta=0.01\).
where
\[\hat{s}_{0}(X) =\frac{\hat{f}_{0}(X)}{\hat{\eta}(X)}=\frac{\{2\hat{Y}(X)-1\}\left[ \frac{\hat{p}(A=0,Y=0|X)}{\Pr(A=0,Y=0)}-\frac{\hat{p}(A=1,Y=0|X)}{\Pr(A=1,Y=0)} \right]}{2\hat{p}(Y=\hat{Y}(X)|X)-1}, \tag{13}\] \[\hat{s}_{1}(X) =\frac{\hat{f}_{1}(X)}{\hat{\eta}(X)}=\frac{\{2\hat{Y}(X)-1\} \left[\frac{\hat{p}(A=0,Y=1|X)}{\Pr(A=0,Y=1)}-\frac{\hat{p}(A=1,Y=1|X)}{\Pr(A =1,Y=1)}\right]}{2\hat{p}(Y=\hat{Y}(X)|X)-1},\]
where each of the \(\widehat{Pr}(A=a,Y=y)\) is again estimated from training set. We are searching for a linear modification rule \(\kappa(X)=\mathbf{1}\{a_{0}\hat{s}_{0}(X)+a_{1}\hat{s}_{1}(X)>1\}\) that for a given validation set satisfies the empirical EO constraint while maximizing the validation accuracy. We consider two strategies to choose such a linear rule.
In the first approach, we take a subsample of points \(\{(\hat{s}_{0}(X_{m}^{\prime}),\hat{s}_{1}(X_{m}^{\prime}))\}_{m=1}^{M}\) of size \(M\leq N_{val}\) and consider all \(M(M-1)/2\) possible linear rules passing through any two of these points. For each of these rules, we evaluate the EO and accuracy on the validation set, then choose the rule with maximal accuracy among those satisfying \(EO\leq\delta\). The total complexity of this procedure is \(O(M^{2}N_{val})\). A formal algorithm is summarized in Algorithm 2 in the appendix, Section B.1.
We also consider a simplified version, where we fix a set of \(K\) equiangular directions \(w=(\cos(2\pi j/K),\sin(2\pi j/K))\) for \(j=0,\ldots,K-1\). Then, for a score \(w_{0}\hat{s}_{0}(X)+w_{1}\hat{s}_{1}(X)\) we simply need to choose a threshold, following the procedure in the DP case, where we evaluate the EO and accuracy dynamically. The time complexity is \(O(KN_{val}\log N_{val})\); see details in Algorithm 3, Section B.1 in the appendix.
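The equiangular-direction variant reduces to repeated one-dimensional threshold searches over the projected score. A simplified sketch, which re-evaluates EO and accuracy for every candidate instead of updating them dynamically and uses illustrative names, is the following.

```python
import numpy as np

def empirical_eo(y_mod, y, a):
    return max(abs(y_mod[(a == 0) & (y == v)].mean() - y_mod[(a == 1) & (y == v)].mean())
               for v in (0, 1))

def fit_eo_rule(s0, s1, p_y1, y_val, a_val, delta, n_dirs=32):
    y_hat = (p_y1 > 0.5).astype(int)
    best_rule, best_acc = None, -1.0
    for j in range(n_dirs):                          # equiangular directions w_j
        w = np.array([np.cos(2 * np.pi * j / n_dirs), np.sin(2 * np.pi * j / n_dirs)])
        proj = w[0] * s0 + w[1] * s1
        for t in np.concatenate(([np.inf], proj)):   # candidate thresholds on the projection
            y_mod = np.where(proj > t, 1 - y_hat, y_hat)
            if empirical_eo(y_mod, y_val, a_val) <= delta:
                acc = (y_mod == y_val).mean()
                if acc > best_acc:
                    best_rule, best_acc = (w, t), acc
    return best_rule                                 # flip whenever w[0]*s0 + w[1]*s1 > t
```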
**Remark 3.1**.: _Note that as functions of \(X\), the probabilities \(p(Y|X)\) and \(p(Y,A|X)\) must agree in order for Theorem 1 to hold, in the sense that \(p(Y|X)=p(Y,0|X)+p(Y,1|X)\). However, the algorithms themselves, which only require "plug-in" estimators \(\hat{p}(Y|X)\) and \(\hat{p}(Y,A|X)\) for the ground-truth \(p(Y|X)\) and \(p(Y,A|X)\) respectively, do not require strict agreement between \(\hat{p}(Y|X)\) and \(\hat{p}(Y,A|X)\) in order to run. In practice, we can employ \(\hat{p}(Y|X)\), \(\hat{p}(Y,A|X)\) that were trained separately, and we can run the algorithm even in the case where the auxiliary model \(\hat{p}(Y,A|X)\) was pretrained by a third party on another dataset similar to the one of interest. For example, we conduct an experiment where the auxiliary model is based on CLIP (Radford et al., 2021) in Section D.3._
**Remark 3.2**.: _In principle, it is possible to have a situation where we are not able to choose a threshold or a linear rule, that satisfies the required fairness constraint on the validation dataset. In our experiments we did not encounter such situation._
Sensitivity analysis.Here we investigate the sensitivity of our method to inaccuracy in the estimated conditional distributions. We first provide theoretical analysis, which takes into account two sources of error: approximation of the ground-truth conditional distributions \(p(Y|X)\) and \(p(A|X)\) by, say parametric models, and the sampling error in evaluation of the accuracy and DP on validation. We provide only the informal formulation omitting technical details. See precise formulation and the proof in Section C.
**Theorem** (Informal).: _Suppose that we have estimations of conditional distributions \(\hat{p}(Y|X)\), \(\hat{p}(A|X)\), and assume that_
\[\mathbb{E}|\hat{p}(Y|X)-p(Y|X)|\leq\varepsilon,\qquad\mathbb{E}|\hat{p}(A|X)-p (A|X)|\leq\varepsilon.\]
_Let \(\bar{Y}\) be the algorithm obtained with Algorithm 1. Then, with high probability_
\[Acc(\hat{Y})-Acc(\bar{Y})\lesssim\sqrt{\varepsilon}+\sqrt{(\log N_{val})/N_{val }},\qquad DP(\bar{Y})-\delta\lesssim\sqrt{(\log N_{val})/N_{val}}\,.\]
Moreover, we include three ablation studies in appendix D, where less accurate \(\hat{p}(Y|X)\), \(\hat{p}(A|X)\) or \(\hat{p}(Y,A|X)\) are deployed to examine the robustness of the post-processing modification algorithm. We find that our post-processing algorithm still retains the performance even when \(\hat{p}(Y|X)\), \(\hat{p}(A|X)\) or \(\hat{p}(Y,A|X)\) are moderately inaccurate.
## 4 Experiments
We evaluate MBS on real-world binary classification tasks with the following experimental set-up.
Datasets.We consider three benchmarks:
* **Adult Census**(Kohavi, 1996), a UCI tabular dataset where the task is to predict whether the annual income of an individual is above $50,000. We randomly split the dataset into a training, validation and test set with 30000, 5000 and 10222 instances respectively. We pre-process the features according to Lundberg & Lee (2017) and the resulting input \(X\) is a 108-dimensional vector. We use "Gender" as the sensitive attribute;
* **COMPAS**(Angwin et al., 2015), a tabular dataset where the task is to predict the recidivism of criminals. The dataset is randomly split into a training, validation and test set with 3166, 1056 and 1056 instances respectively. The input \(X\) consists of 9 normal features (e.g. age and criminal history) and we choose "Race" as the sensitive attribute;
* **CelebA**(Liu et al., 2015), a facial image dataset containing 200k instances each with 40 binary attribute annotations. We follow the experimental setting as in Park et al. (2022): we choose "Attractive", "Big nose", and "Bag Under Eyes" as target attributes, and choose "Male" and "Young" as sensitive attributes, yielding 6 tasks in total, and we use the original train-validation-test split.
Network architectures and hyperparameters.We use an MLP for Adult Census and COMPAS datasets, with hidden dimension chosen to be 8 and 16 respectively. For each CelebA experiment, we use a ResNet-18 (He et al., 2016). For experiments with DP constraints, we train two models \(\hat{p}(Y|X)\) and \(\hat{p}(A|X)\) to predict the target and sensitive attributes respectively, while for experiments with EO constraints, we only train one model but with four classes \(\hat{p}(Y,A|X)\), with each class corresponding to one element in the Cartesian product of target and sensitive attributes.
Baselines.For experiments on Adult Census and COMPAS, we compare MBS with Zafar et al. (2017), Jiang et al. (2019) (post-processing version, for experiments with DP constraints) and Hardt et al. (2016a) (for experiments with EO constraints). For CelebA, we additionally compare with Park et al. (2022), which is a strong baseline tailored to fair facial attribute classification on CelebA. We report the averaged performance from 3 independent runs for all methods.
Evaluations & metrics.We consider both Demographic Parity (DP) and Equalized Odds (EO) as fairness criteria. We select the modification rules \(\kappa(X)\) over the validation set according to the algorithms in Section 3. We consider three levels of constraints for the fairness criteria: \(\delta=10\%,5\%\), and \(1\%\), and we set \(M\) in Algorithm 2 described in Section 3 to be 3000, 600 and 5000 for experiments with EO as fairness criterion on Adult Census, COMPAS and CelebA respectively. Then we report the test set accuracy and DP/EO computed based on the post-processed test predictions after modification according to \(\kappa(X)\).
### Experiments with DP as fairness criterion
Here, we consider experiments with DP as the fairness criterion on the Adult Census and COMPAS datasets, and we compare MBS with Zafar et al. (2017) and Jiang et al. (2019). The results are reported in Figures 2(a)-2(b). One can see MBS consistently outperforms Zafar et al. (2017) for both datasets in the sense that given different desired levels (\(\delta\)'s) of DP, MBS tends to achieve higher accuracy. Furthermore, while Zafar et al. (2017) requires retraining the model each time to achieve a different trade-off between accuracy and DP, we are able to flexibly balance between accuracy and DP by simply modifying predictions of a single base model according to different thresholds of the bias score. For Adult Census, the \(\kappa(X)\) estimated over the validation set is robust when evaluated over the test set, since the DPs for the test set are either below the desired \(\delta\)'s or close to them. For COMPAS, the performance seems to be relatively low, as one can see a relatively large drop in accuracy when DP is reduced. Although MBS still outperforms Zafar et al. (2017) in this case, it achieves worse performance than Jiang et al. (2019), and since the validation set is small (1056 instances), the \(\kappa(X)\) estimated over it is not robust, as there is a relatively big gap between the DPs on the test set and the specified \(\delta\)'s. The decline of performance on COMPAS is not surprising since COMPAS is a small dataset (with only 5278 instances), and the number of training examples is insufficient for reliable estimation of both \(p(Y|X)\) and \(p(A|X)\). On the other hand, Jiang et al. (2019) has the potential to bypass the problem of unreliable inference for the sensitive attribute as it assumes access to the test set sensitive attribute. However, in real-world applications, the sensitive attributes during inference often won't
be provided due to privacy protection. Moreover, Jiang et al. (2019) is less flexible than MBS since it only provides a single trade-off between DP and accuracy, and it is tailored to reducing DP only.
### Experiments with EO as fairness criterion
To evaluate the performance of MBS when the fairness criterion is EO, we again consider the Adult Census and COMPAS datasets, and the results are reported in Figures 2c-2d. The observation is similar to that in the experiments with DP constraints. We again achieve a better trade-off between accuracy and EO than Zafar et al. (2017) for both datasets. Although Hardt et al. (2016) can also significantly reduce EO, similar to Jiang et al. (2019) in the DP-based experiments, it is not able to adjust the balance between EO and accuracy and thus is less flexible than MBS. Both MBS and Zafar et al. (2017) achieve relatively low performance on the small COMPAS dataset, which we again believe is due to unreliable estimation of \(p(Y,A|X)\) with a small dataset. Similar to Jiang et al. (2019), Hardt et al. (2016) also assumes access to the test set sensitive attribute and thus won't be affected by unreliable inference of the test sensitive attribute in the small data regime.
In addition, we evaluate MBS on a more challenging dataset, CelebA, following the same set-up as in Park et al. (2022). Here we denote the target attributes "Attractive", "Big_Nose", "Bag_Under_Eyes" as "a", "b" and "e" respectively, and the sensitive attributes "Male" and "Young" as "m" and "y" respectively. The results are reported in Table 1. MBS tends to achieve better trade-off than Hardt et al. (2016), whose accuracy is severely hurt across all 6 tasks. MBS is able to maintain high accuracy and meanwhile achieve competitive or even smaller EO. Furthermore, MBS consistently achieves better or competitive performance when compared with Park et al. (2022). To our knowledge, their method is one of the state-of-the-art methods for fair learning on CelebA. This verifies the effectiveness of MBS for practical fair learning problems. We additionally report validation metrics and standard error across 3 independent runs in Appendix E.
Figure 2: Accuracy (%) vs Demographic Parity (DP) (%) trade-offs on (a) Adult Census and (b) COMPAS; Accuracy (%) vs Equalized Odds (EO) (%) trade-offs on (c) Adult Census and (d) COMPAS. Desired \(\delta=\infty\) (unconstrained), \(10\%\), \(5\%\), and \(1\%\).
## 5 Conclusion
To the best of our knowledge, we have for the first time characterized the Bayes optimal binary classifier under composite group fairness constraints, with a post-hoc modification procedure applied to an unconstrained Bayes optimal classifier. Our result applies to popular fairness metrics, such as DP, EOp, and EO as special cases. Based on this characterization, we propose a simple and effective post-processing method, MBS, which allows us to freely adjust the trade-off between accuracy and fairness. Moreover, MBS does not require test sensitive attribute, which significantly broadens its application in real-world problems where sensitive attribute is not provided during inference.
|
2302.12048 | Frequency bin-wise single channel speech presence probability estimation
using multiple DNNs | In this work, we propose a frequency bin-wise method to estimate the
single-channel speech presence probability (SPP) with multiple deep neural
networks (DNNs) in the short-time Fourier transform domain. Since all frequency
bins are typically considered simultaneously as input features for conventional
DNN-based SPP estimators, high model complexity is inevitable. To reduce the
model complexity and the requirements on the training data, we take a single
frequency bin and some of its neighboring frequency bins into account to train
separate gate recurrent units. In addition, the noisy speech and the a
posteriori probability SPP representation are used to train our model. The
experiments were performed on the Deep Noise Suppression challenge dataset. The
experimental results show that the speech detection accuracy can be improved
when we employ the frequency bin-wise model. Finally, we also demonstrate that
our proposed method outperforms most of the state-of-the-art SPP estimation
methods in terms of speech detection accuracy and model complexity. | Shuai Tao, Himavanth Reddy, Jesper Rindom Jensen, Mads Græsbøll Christensen | 2023-02-23T14:20:13Z | http://arxiv.org/abs/2302.12048v1 | # Frequency Bin-Wise Single Channel Speech Presence Probability Estimation Using Multiple DNNs
###### Abstract
In this work, we propose a frequency bin-wise method to estimate the single-channel speech presence probability (SPP) with multiple deep neural networks (DNNs) in the short-time Fourier transform domain. Since all frequency bins are typically considered simultaneously as input features for conventional DNN-based SPP estimators, high model complexity is inevitable. To reduce the model complexity and the requirements on the training data, we take a single frequency bin and some of its neighboring frequency bins into account to train separate gate recurrent units. In addition, the noisy speech and the \(a\ posteriori\) probability SPP representation are used to train our model. The experiments were performed on the Deep Noise Suppression challenge dataset. The experimental results show that the speech detection accuracy can be improved when we employ the frequency bin-wise model. Finally, we also demonstrate that our proposed method outperforms most of the state-of-the-art SPP estimation methods in terms of speech detection accuracy and model complexity.
Shuai Tao, Himavanth Reddy, Jesper Rindom Jensen, Mads Græsbøll Christensen Audio Analysis Lab, CREATE, Aalborg University, Aalborg, Denmark
[email protected], [email protected], [email protected], [email protected]
**Keywords:** frequency bin-wise, speech presence probability, \(a\ posteriori\) probability, gated recurrent units
## 1 Introduction
Noise estimation is one of the key components in realizing single-channel and multi-channel speech enhancement, and most noise estimators rely on the speech presence probability (SPP) to update the noise statistics [1, 2, 3]. Available noise power spectral density (PSD) estimators also make use of the SPP to decide when to update the noise PSD [4, 5, 6]. Compared to voice activity detectors (VAD), SPP is a soft-decision approach that depends on inter-band and inter-frame correlations [7]. Accurate SPP estimation can greatly improve the effectiveness of speech enhancement [8, 9].
In the short-time Fourier transform (STFT) domain, some conventional statistical signal processing methods commonly assume that the spectral coefficients of speech and noise are independent and follow a complex Gaussian distribution [10, 11]. Therefore, the SPP can be derived from the \(a\ posteriori\) probability of the time-frequency (T-F) bins of the noisy speech. Based on this assumption, [4] applied the minimum values of a smoothed periodogram to estimate the SPP, which makes the SPP estimation more robust under the effect of non-stationary noise. In [5], to achieve a highly accurate SPP estimate with low latency and computational complexity, an optimal fixed \(a\ priori\) SNR was used to guarantee that the \(a\ posteriori\) SPP is close to zero when speech is absent. In addition, [7] takes inter-band and inter-frame correlations into account when designing a general SPP estimator.
Recently, deep neural networks (DNNs) have proven to be effective at processing non-stationary noise, and many novel DNN-based approaches have been proposed to estimate the SPP accurately; these have been applied successfully to speech enhancement and speech recognition [12, 13, 14]. In these methods, recurrent neural networks (RNNs) [15] are commonly used to acquire information from neighboring frames, since the frames contain temporal information that can improve the accuracy of SPP estimation. In [14], a bidirectional long short-term memory (BLSTM) network was trained on input features spanning multiple time frames and all frequency bins to estimate the SPP. In [12], considering that the ideal ratio mask (IRM) [16] ranges from 0 to 1 at each T-F bin, they selected different DNN models, such as LSTM, BLSTM, gated recurrent units (GRUs), and bidirectional GRU (BGRU), to estimate the IRM and approximate the SPP. However, the problem that arises here is that as the complexity of the model goes up and more training data is applied to the model, more powerful hardware is required for training.
Inspired by conventional SPP estimation methods, our model estimates the SPP based on the correlation of several neighboring T-F bins in contrast to the typical DNN-based SPP estimation approach where all frequency bins are regarded as the input features. This allows us to use DNNs on a one-to-one basis with frequency bins therefore vastly reducing the number of parameters in the model and the amount of computations taking place. In this work, we thus propose a frequency bin-wise SPP estimation model in the STFT domain that relies on using multiple DNNs to estimate the SPP. For our proposed model architecture, the GRU module is used to extract time and frequency information from each frequency bin and several of its neighbors. Additionally, since IRM-based SPP estimation methods may misclassify the T-F bins dominated by non-speech and noise [12, 17, 18], we choose the \(a\ posteriori\) probability to represent the SPP in the STFT domain.
The work is organized as follows. In Section 2, the problem of frequency bin-wise single-channel SPP estimation is formulated and the SPP estimation model with multiple DNNs is designed. Section 3 describes the experimental settings, Section 4 provides the results, and Section 5 presents the conclusion. The code for this work can be found on GitHub1.
Footnote 1: [https://github.com/Shuatiaoau/SPP](https://github.com/Shuatiaoau/SPP)
## 2 Frequency Bin-Wise SPP Estimation
### Signal Modeling
We assume that the single-channel speech signal \(x(n)\) is corrupted by additive noise \(d(n)\), yielding the noisy observation \(y(n)\). In the STFT domain, the noisy speech can be represented as follows:
\[Y(k,l)=X(k,l)+D(k,l), \tag{1}\]
where \(k\in\{0,...,K-1\}\) denotes the frequency bin index and \(K\) is the number of frequency bins, \(l\in\{0,...,L-1\}\) denotes the time frame index and \(L\) is the number of time frames. With the assumption of a zero-mean complex Gaussian distribution and independence for \(X\) and \(D\), we have
\[\begin{split}\mathbf{\phi}_{Y}(k,l)&=E[|Y(k,l)|^{2}]\\ &=\mathbf{\phi}_{X}(k,l)+\mathbf{\phi}_{D}(k,l),\end{split} \tag{2}\]
where \(E[\cdot]\) is the statistical expectation operator, \(\phi_{X}(k,l)=E[|X(k,l)|^{2}]\) and \(\phi_{D}(k,l)=E[|D(k,l)|^{2}]\). The PSDs of the clean speech and the noise are thus represented by \(\phi_{X}(k,l)\) and \(\phi_{D}(k,l)\), respectively. In the STFT domain, there exists a correlation between neighboring T-F bins [7]. Therefore, the SPP estimate can be improved by exploiting this correlation.
The first step in creating our input signal vector is to obtain a vector corresponding to each individual frequency bin,
\[\mathbf{\varphi}_{Y}(k)=[\phi_{Y}(k,0),...,\phi_{Y}(k,l),...\phi_{Y}(k,L-1)]^{T}. \tag{3}\]
Each frequency bin vector contains \(L\) consecutive time frames, which contain relevant contextual information for the estimation of the SPP. Since RNNs are effective at processing temporal information [19, 20], we employ RNNs in this work to extract time correlations from the neighboring time frames.
To improve the SPP estimation accuracy, we take a few neighboring frequency bin vectors into consideration to extract frequency correlations from the input signal matrix. Therefore, the input signal matrix \(\mathbf{\Phi}_{Y}(k)\) can be obtained as
\[\mathbf{\Phi}_{Y}(k)=[\mathbf{\varphi}_{Y}(k-I),...,\mathbf{\varphi}_{Y}(k),...,\mathbf{ \varphi}_{Y}(k+I)]^{T}, \tag{4}\]
where \(I\) is the number of neighboring frequency bin vectors.
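A small sketch of how the inputs of Eqs. (3)-(4) can be assembled from a noisy power spectrogram is given below; the clamping of edge bins is one possible convention, since the treatment of the first and last \(I\) bins is not spelled out above.

```python
import numpy as np

def bin_wise_inputs(phi_Y, I):
    """phi_Y: noisy power spectrogram of shape (K, L); returns (K, 2I+1, L)."""
    K, L = phi_Y.shape
    inputs = []
    for k in range(K):
        idx = np.clip(np.arange(k - I, k + I + 1), 0, K - 1)   # 2I+1 neighbouring bins
        inputs.append(phi_Y[idx, :])                           # input matrix for bin k
    return np.stack(inputs)
```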
Now, the time correlation and frequency correlation of neighboring time-frequency bins can be extracted according to the input signal matrix \(\mathbf{\Phi}_{Y}(k)\). In this work, the SPP is represented by the _a posteriori_ probability [5], and the DNN is used to estimate the SPP from the noisy observation.
Since the typical DNN-based approach takes all the frequency bins into account to estimate the SPP, the model complexity may be increased. In this section, we, therefore, design multiple specific DNNs to estimate the frequency bin-wise SPP. Additionally, since the \(a\ posteriori\) probability is derived by the correlation of neighboring T-F bins, the \(a\ posteriori\) probability SPP representation of the clean speech and the noisy speech PSD are used as the training data pairs to train our model.
### SPP Estimation Model and Loss Function
To extract the time and frequency correlation of the consecutive T-F bins in the input signal matrix \(\mathbf{\Phi}_{Y}(k)\) from the observed noisy PSD \(\mathbf{\phi}_{Y}(k,l)\), we set \(K\) specific DNNs as the regression module. As mentioned in (4), the coefficient of the \(k\)'th input signal matrix can be used to train the \(k\)'th DNN for the SPP estimate in the \(k\)'th frequency bin.
First, to train the DNN model, we choose the log-power periodogram as the input feature [21, 22]. Therefore, the input features of each individual DNN are obtained from the log input signal matrix \(\mathbf{\Phi}_{Y}(k)\). It can be expressed as
\[\mathbf{\Phi}_{Y}^{\prime}(k)=\log(\mathbf{\Phi}_{Y}(k)), \tag{5}\]
where \(\mathbf{\Phi}_{Y}^{\prime}(k)\) is the input feature for the \(k\)'th DNN. Also, during training, we have
\[\widehat{\text{SPP}}_{Y}(k)=F_{k}^{\theta}(\mathbf{\Phi}_{Y}^{\prime}(k)), \tag{6}\]
where \(\widehat{\text{SPP}}_{Y}(k)=[\widehat{\text{SPP}}_{Y}(k,0),...,\widehat{ \text{SPP}}_{Y}(k,l),...,\widehat{\text{SPP}}_{Y}(k,L-1)]^{T}\) is the SPP estimate for the \(k\)'th input features and \(F_{k}^{\theta}\) is the \(k\)'th DNN with parameters \(\theta\). To update the DNN parameters, the loss between the target and the estimated SPP is calculated with the mean-squared error (MSE), i.e.,
\[L_{MSE}=\frac{1}{L}\sum_{l=0}^{L-1}\left(\text{SPP}_{Y}(k,l)-\widehat{\text{SPP}}_{Y}(k,l)\right)^{2}, \tag{7}\]
where \(\text{SPP}_{Y}(k)=[\text{SPP}_{Y}(k,0),...,\text{SPP}_{Y}(k,l),...,\text{SPP}_{Y}(k,L-1)]^{T}\) is the target function. In this work, the \(a\ posteriori\) probability is regarded as the SPP representation, therefore \(\text{SPP}_{Y}(k,l)\) can be represented by
\[\text{SPP}_{Y}(k,l)=\left(1+\frac{p(\mathcal{H}_{0})}{p(\mathcal{H}_{1})}\left( 1+\xi_{\mathcal{H}_{1}}\right)e^{-\frac{|Y|^{2}}{\Phi_{D}}\frac{\xi_{\mathcal{H }_{1}}}{1+\xi_{\mathcal{H}_{1}}}}\right)^{-1} \tag{8}\]
where \(p(\mathcal{H}_{0})\) and \(p(\mathcal{H}_{1})\) denote \(a\ priori\) speech absence and presence probability, \(\xi_{\mathcal{H}_{1}}\) is the \(a\ priori\) SNR during speech presence [5].
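A sketch of how this training target can be computed per T-F bin from the noisy periodogram and a noise PSD estimate is shown below; following [5], equal priors and a fixed \(a\ priori\) SNR are typical choices, and the particular values used here are illustrative.

```python
import numpy as np

def spp_target(Y_power, phi_D, xi_h1=10 ** (15 / 10), p_h0=0.5, p_h1=0.5):
    """Eq. (8): Y_power, phi_D are arrays of shape (K, L); returns SPP in (0, 1)."""
    post_snr = Y_power / np.maximum(phi_D, 1e-12)       # |Y|^2 / phi_D
    ratio = (p_h0 / p_h1) * (1 + xi_h1) * np.exp(-post_snr * xi_h1 / (1 + xi_h1))
    return 1.0 / (1.0 + ratio)
```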
### Model Architecture
In this work, since a GRU can outperform an LSTM both in terms of convergence in CPU time, and in terms of parameter updates and generalization [23], we choose GRUs to design the SPP estimation model. The model training strategy is shown in Fig. 1 and the DNN model is trained by the input features of the logarithmic power spectral T-F bins.
The training strategy of the typical DNN-based SPP estimation model in Fig. 1(a) shows that a GRU module is trained using \(K\) frequency bins (all frequency bins) and \(L\) consecutive time frames. The typical DNN-based model input size is \(K\) and, in this work, the size of the hidden layer is the same as the size of the input layer. The
Figure 1: Typical DNN-based model training strategy vs our proposed method. (a) Typical DNN-based SPP estimation model (with all frequency bins), and (b) Proposed frequency bin-wise SPP estimation model, a frequency bin along with \(2I\) neighboring frequency bins are treated as the input features.
proposed training strategy of the frequency bin-wise SPP estimation model is shown in Fig. 1(b). When \(I\) neighboring frequency bins are introduced to estimate the SPP of a single frequency bin, the input size is \(2I+1\), and one hidden layer is set. The output of each hidden layer state is regarded as the value of the SPP estimate at the current time. Finally, to restrict the output range of the DNN to [0, 1], the output layer is the activation function \(Softplus\) with a fixed parameter \(\beta\).
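A minimal PyTorch sketch of one such per-bin model is given below. It follows the description above (a GRU over the \(L\) frames with input size \(2I+1\) and a Softplus output), while the hidden size, the linear readout, and the value of \(\beta\) are not fully specified in the text and are chosen here for illustration; \(K\) such models are instantiated, one per frequency bin.

```python
import torch
import torch.nn as nn

class BinWiseSPP(nn.Module):
    def __init__(self, I=1, hidden=None, beta=1.0):
        super().__init__()
        hidden = hidden or 2 * I + 1                    # mirror the input size by default
        self.gru = nn.GRU(input_size=2 * I + 1, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 1)
        self.out = nn.Softplus(beta=beta)

    def forward(self, x):                               # x: (batch, L, 2I+1) log-power features
        h, _ = self.gru(x)                              # (batch, L, hidden)
        return self.out(self.proj(h)).squeeze(-1)       # (batch, L) SPP estimate per frame

# Training one bin with the MSE loss of Eq. (7):
# model = BinWiseSPP(I=1)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
# loss = nn.functional.mse_loss(model(feats_k), spp_k); loss.backward(); opt.step()
```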
## 3 Experimental settings
In this work, the sub-band DNS dataset is used to train our designed model. During testing, 200 noisy utterances (1.1 hours) and 1800 noisy utterances (1 hour) were collected from the DNS dataset [24] and the TIMIT dataset [25], respectively. Each clean utterance is corrupted by a random noise utterance selected from the noise dataset, with the SNR of each noisy utterance ranging from -5 dB to 25 dB. The noise data includes 150 different types of noise taken from the Audioset [26], Freesound [27], and Demand [28] datasets.
The receiver operating characteristic (ROC) curve [29] is used to evaluate the performance of the SPP estimation methods, and the false-alarm probability \(P_{\mathrm{h}}=0.05\) given in [7] is used to calculate the speech detection probability, \(P_{\mathrm{d}}\). Additionally, we apply the area under the curve (AUC) metric, which is derived from the ROC and ranges in [0, 1], to represent overall performance. We also adopt the adaptive threshold set 60 dB below the maximum instantaneous power across all T-F bins, as in [7], to distinguish the speech and non-speech bins of the clean speech.
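A compact sketch of this evaluation protocol, with illustrative function names, is as follows: clean-speech T-F bins are labelled as speech when they lie within 60 dB of the maximum instantaneous power, and \(P_{\mathrm{d}}\) is read off the ROC curve at the target false-alarm probability.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def evaluate_spp(spp_est, clean_power, fa_target=0.05):
    labels = (clean_power > clean_power.max() * 10 ** (-60 / 10)).astype(int)
    fpr, tpr, _ = roc_curve(labels.ravel(), spp_est.ravel())
    p_d = np.interp(fa_target, fpr, tpr)        # detection probability at the target P_fa
    return p_d, roc_auc_score(labels.ravel(), spp_est.ravel())
```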
The sampling rate of all utterances is 16 kHz. A Hann window is applied for the STFT analysis, with a time window of 16 ms and a hop length of 8 ms. We use the mean and standard deviation to normalize the dataset. During training, the Adam optimizer [30] is utilized to optimize the neural network parameters. The learning rate is set to 0.001. Weight decay is set to 0.00001 to prevent overfitting. The learning rate is updated at the 50th and 100th epochs for the implemented DNN models. PyTorch is used to implement the frequency bin-wise SPP estimation model and the reference DNN-based model.
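At a 16 kHz sampling rate, these STFT settings correspond to a 256-sample Hann window with a 128-sample hop. A sketch of the log-power feature extraction (shown here with librosa; any STFT implementation with the same parameters would do) is:

```python
import numpy as np
import librosa

def log_power_spectrogram(wav, sr=16000):
    Y = librosa.stft(wav, n_fft=256, hop_length=128, win_length=256, window="hann")
    phi_Y = np.abs(Y) ** 2                  # noisy periodogram, shape (K, L)
    return np.log(phi_Y + 1e-12)            # log-power features of Eq. (5)
```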
## 4 Results and discussion
In this section, to demonstrate the effectiveness of our method, a comparison between a typical DNN-based model and our proposed method is shown using ROC curves. Moreover, numerical results are provided to evaluate the accuracy of the SPP estimators and the model complexity.
### Examination of ROC Curves
To investigate the performance of the proposed method, 200 training utterances (1.1 hours) are used to train our proposed frequency bin-wise model. In addition, 200 utterances (1.1 hours), 1000 utterances (5.5 hours), and 3000 utterances (16.6 hours) are used to train the typical DNN-based model, respectively. To investigate the effect of using neighboring frequency bins for the proposed method, we set \(I=0\) (no neighboring frequency bins), \(I=1\) (with 1 neighboring frequency bin), and \(I=2\) (with two neighboring frequency bins) to train the frequency bin-wise model. Fig. 2 shows an example of SPP estimation results. A noisy utterance of length 20 seconds and input SNR of 11 dB taken from the DNS dataset, is used for testing by the typical DNN-based SPP estimation model and the frequency bin-wise model.
From Fig. 2, we can observe that the typical DNN-based method and the proposed frequency bin-wise method are able to estimate the SPP with similar accuracy. In addition, we also investigate the impact of the training data volume on SPP estimation accuracy for the typical DNN-based SPP estimation model. From Fig. 3, we can find that when we increase the training data from 1.1 hours to 5.5 hours and then to 16.6 hours for the typical DNN-based model, there is a gradual increase in AUC, but it still falls short of our proposed method in terms of \(P_{\mathrm{d}}\).
### Numerical Results
To evaluate the performance of the proposed method, the speech detection probability and the AUC are calculated from the ROC curves to represent the speech detection accuracy and the effectiveness of the SPP estimation method, respectively. In addition, we also investigate the effect of model complexity on SPP estimation accuracy. Inspired by [31] and [32], we compare our method with the state-of-the-art self-attention model and, in this work, 3 self-attention heads and 2 encoder layers are used to estimate the SPP. The self-attention model is trained in a typical way where all the frequency bins are treated as input features. During training, the frequency bin-wise SPP estimation model and the self-attention-based SPP estimation model are trained with 1.1 hours of training data pairs. The typical DNN-based model is trained with 1.1 and 16.6 hours of training data pairs, respectively. All training data pairs come from the DNS
Figure 3: ROC curves comparison of the typical DNN-based model and the frequency bin-wise model with an increase in training data for the typical DNN-based model. The vertical dotted line indicates the false-alarm probability \(P_{\mathrm{h}}=0.05\). Input SNR = 11 dB.
Figure 2: ROC curves comparison of the typical DNN-based model and the frequency bin-wise model. Both models are trained with the same amount of training data (1.1 hours). The vertical dotted line indicates the false-alarm probability \(P_{\mathrm{h}}=0.05\). Input SNR = 11 dB.
dataset.
In Table 1, we show how the proposed model compares to other conventional methods and a few DNN-based methods using \(P_{\text{d}}\) and AUC as metrics. The results in Table 1 are obtained from testing using the TIMIT dataset (1 hour).
With 1.1 hours of training data, we can observe that although the frequency bin-wise model AUC (0.7986) is lower than that of the typical DNN-based model and the self-attention-based model, it is still higher than IMCRA [4] (0.6504), Unbiased MMSE [5] (0.7348) and the General SPP estimator [7] (0.6229). In particular, when we set \(I=1\) and \(I=2\), the frequency bin-wise model achieved higher AUCs of 0.8011 and 0.7988, respectively. In terms of speech detection accuracy, all the frequency bin-wise models achieved higher accuracy than the other methods, and when we take one neighboring frequency bin (\(I=1\)) into account, the speech detection probability reaches 0.5038.
According to the results, we can confirm that an increase in model complexity can improve the performance of DNN-based applications, and in this work, the SPP estimation accuracy is also improved, which is consistent with the experimental results shown in [33]. The reason is that the complex model can extract more global information than the simple model to estimate the SPP from all frequency bins. Additionally, a remarkable improvement in speech detection accuracy appears when we employ our proposed method to estimate the SPP; in particular, when we set \(I=1\), both the model performance and \(P_{\text{d}}\) are improved. The reason for the improved performance could be that the DNNs can extract specific contextual information for each frequency bin, which is not possible when \(I=0\) due to the lack of inclusion of its neighbors.
Finally, by comparing the AUC of different SPP estimation methods, we can observe that all DNN-based models can achieve higher performance of SPP estimation than the conventional methods. For DNN-based SPP estimation models, although all the presented models demonstrate similar performance, the speech detection accuracy is different. Therefore, it can be observed that more details can be detected by the bin-wise model leading to better detection accuracy.
### Computational Complexity
To evaluate the complexity of the proposed model relative to its counterparts, we use the number of parameters and floating point operations (FLOPs) as the metrics. For our proposed frequency bin-wise model, the total parameters and FLOPs of all the per-bin models are used to represent the computational complexity. We use the _ptflops_2 python library to calculate the total parameters and FLOPs for our method and the reference DNN-based methods. Table 2 shows that our proposed method has fewer parameters and FLOPs than the other methods. The reason is that although we use multiple DNNs to estimate the SPP, each DNN has a smaller input size than the typical DNN-based model. Furthermore, although we introduced the neighboring frequency bins to estimate the SPP in Section 4.2, from Table 2 we can also observe that the increase in computational complexity is minimal even with the inclusion of additional neighboring frequency bins.
Footnote 2: [https://pypi.org/project/ptflops/](https://pypi.org/project/ptflops/)
From the above experimental results, we can confirm that although increasing the training data and using complex models can contribute to the improvement of the performance of the typical DNN-based SPP model, high computational complexity is inevitable. However, it can be observed that the proposed frequency bin-wise model not only shows an improvement in \(P_{\text{d}}\) while maintaining similar performance in terms of the AUC but also reduces the computational complexity while using the same amount of training data.
## 5 Conclusion
In this work, we proposed an effective frequency bin-wise SPP estimation method that shows good performance with a limited amount of training data while also maintaining low model complexity. Experimental results show that in addition to reducing the model complexity, the frequency bin-wise model also shows better performance even in comparison with the typical DNN-based model that is trained with increasing amounts of training data. The experimental observations involving the inclusion of neighboring frequency bins show that there is an increase in speech detection accuracy as well as the AUC (compared to its counterpart that does not include any neighboring frequency bins) due to being exposed to local contextual information. Since multiple DNNs are employed to estimate the SPP in the STFT domain, the frequency bin-wise model's computational complexity is much lower than its DNN-based counterparts.
|
2308.07378 | The Devil in the Details: Simple and Effective Optical Flow Synthetic
Data Generation | Recent work on dense optical flow has shown significant progress, primarily
in a supervised learning manner requiring a large amount of labeled data. Due
to the expensiveness of obtaining large scale real-world data, computer
graphics are typically leveraged for constructing datasets. However, there is a
common belief that synthetic-to-real domain gaps limit generalization to real
scenes. In this paper, we show that the required characteristics in an optical
flow dataset are rather simple and present a simpler synthetic data generation
method that achieves a certain level of realism with compositions of elementary
operations. With 2D motion-based datasets, we systematically analyze the
simplest yet critical factors for generating synthetic datasets. Furthermore,
we propose a novel method of utilizing occlusion masks in a supervised method
and observe that suppressing gradients on occluded regions serves as a powerful
initial state in the curriculum learning sense. The RAFT network initially
trained on our dataset outperforms the original RAFT on the two most
challenging online benchmarks, MPI Sintel and KITTI 2015. | Kwon Byung-Ki, Kim Sung-Bin, Tae-Hyun Oh | 2023-08-14T18:01:45Z | http://arxiv.org/abs/2308.07378v1 | # The Devil in the Details: Simple and Effective Optical Flow Synthetic Data Generation
###### Abstract
Recent work on dense optical flow has shown significant progress, primarily in a supervised learning manner requiring a large amount of labeled data. Due to the expensiveness of obtaining large scale real-world data, computer graphics are typically leveraged for constructing datasets. However, there is a common belief that synthetic-to-real domain gaps limit generalization to real scenes. In this paper, we show that the required characteristics in an optical flow dataset are rather simple and present a simpler synthetic data generation method that achieves a certain level of realism with compositions of elementary operations. With 2D motion-based datasets, we systematically analyze the simplest yet critical factors for generating synthetic datasets. Furthermore, we propose a novel method of utilizing occlusion masks in a supervised method and observe that suppressing gradients on occluded regions serves as a powerful initial state in the curriculum learning sense. The RAFT network initially trained on our dataset outperforms the original RAFT on the two most challenging online benchmarks, MPI Sintel and KITTI 2015.
## I Introduction
Optical flow provides the clues of motion between subsequent frames, which can be utilized for other computer vision tasks such as object tracking, action recognition, 3D reconstruction, and video enhancement, _etc_. Recently, deep neural networks have shown great progress in optical flow estimation [1, 2, 3, 4, 5]. The progress has been made primarily in a supervised learning manner requiring a large amount of labeled data. Despite the effectiveness of the learning-based approaches, obtaining labeled real-world data is prohibitively expensive at a large scale. Therefore, synthetic computer graphics data [6, 7, 8, 9] are typically leveraged.
A common belief of using synthetic data is that the data rendered by graphics engines limit generalization to real scenes due to synthetic-to-real domain gaps in quality. Those gaps involve real-world effects such as noise, 3D motion, non-rigidity, motion blur, occlusions, large displacements, and texture diversity. Thus, synthetic datasets [6, 7, 8, 9] for optical flow have been developed by considering these effects to some extent, _i.e_., mimicking the real-world effects.
In this paradigm, we pose the question, "Which factor of the synthetic dataset is essential for the generalization ability to the real domain?" In this work, we found that the required characteristics of an optical flow dataset are simple; achieving only a certain level of realism is enough for training highly generalizable and accurate optical flow models. We empirically observe that training on a simple 2D motion-based dataset often yields performance that is comparable to, and often much higher than, training on earlier synthetic datasets [7, 10] that are rendered with complex 3D objects, motions, and rich textures. Furthermore, we found that using occlusion masks to give the network incomplete information is effective for a powerful initial state of curriculum learning.
We design easily controllable synthetic dataset generation recipes using a cut-and-paste method with segmented 2D object textures. As shown in Fig. 1, our generated data appears to be far from the real-world one, but training on those shows promising results both on generalization and fine-tuning regimes, outperforming the networks trained on the competing datasets. We also utilize occlusion masks to stop gradients on occluded regions, and the RAFT network initially trained with occlusion masks outperforms the original RAFT on the two most challenging online benchmarks, MPI Sintel [9] and KITTI 2015 [11]. Our key contributions are summarized as follows:
* We present simple synthetic data generation recipes with compositions of simple elementary operations and show comparable performance against competing methods.
* We propose a novel method of utilizing occlusion masks in a supervised method and show that suppressing gradients on occluded regions in a supervised optical flow serves as a powerful initial state in the curriculum learning protocol.
* We systematically analyze our dataset and the effects according to different factors of motion type, motion distribution, data size, texture diversity, and occlusion masks.
## II Related Work
We briefly review our target task, _i.e_., optical flow estimation, and the training datasets that have been used for training learning-based optical flow estimation methods.
**Optical Flow.** Fundamentally, optical flow estimation for each pixel is an ill-posed problem. Traditional approaches [12, 13, 14, 15] addressed the ill-posedness by imposing smoothness priors to regularize the problem within an optimization framework. With the advance of deep learning, the ill-posedness has instead been tackled by learning, yielding superior performance. Starting with the success of FlowNet [6, 1], recent optical flow estimation methods have been developed through supervised learning [16, 17, 2, 5, 18]. However, these approaches rely strongly on training datasets, and real supervised data for optical flow is extremely difficult to obtain [10].
**Datasets.** Supervised learning-based methods for optical flow estimation require exact and pixel-accurate ground truth.
While obtaining true real motion is extremely difficult without the support of additional information, several real-world optical flow datasets [19, 11, 20, 21] have been proposed. However, these datasets are relatively small scale and biased to limited scenarios; thus, those are not sufficient for training a deep model but more suitable for benchmark test sets.
To address persistent data scarcity, studies for generating large-scale synthetic datasets have been attempted. Dosovitskiy _et al_. [6] propose a synthetic dataset of moving 3D chairs superimposed on the images from Flickr. Similarly, Mayer _et al_. [8] present datasets where not only chairs but various objects are scattered in the background. Aleotti _et al_. [7] leverage an off-the-shelf monocular depth network to synthesize a novel view from a single image and compute an accurate flow map.
Mayer _et al_. [10] present critical factors of the synthetic dataset, _i.e_., the object shape, motion types and distributions, textures, real-world effects, data augmentation, and learning schedules. Sun _et al_. [22] generate a learning-based synthetic dataset for training accurate optical flow networks, but it is still challenging to distinguish the key factors for synthetic data intuitionally. We build upon the observations of Mayer _et al_. [10] and design easily controllable synthetic dataset generation recipes and identify additional key factors such as _balanced motion distribution, amount of data, texture combination, and learning schedules with occlusion masks_.
## III Data Generation Pipeline
In this section, we present a simple method to generate an effective optical flow dataset. Unlike the prior arts using 3D motions and objects with computer graphics, our generation scheme remains simple by using 2D image segment datasets and 2D affine motion group. The proposed simple dataset enables analyzing the effect of each factor of the synthetic dataset.
**Overall Pipeline.** The overall data generation pipeline is illustrated in Fig. 2. As shown, we use a simple cut-and-paste method where foreground objects are pasted on an arbitrary background image. Inspired by Oh _et al_. [23], the segmented foreground objects and random background images are obtained from two independent datasets to encourage combinatorial diversity while avoiding texture overlaps. In this work, we use PASCAL VOC [24] and MS COCO [25] as suggested by Oh _et al_. [23]. The foreground objects are first superimposed randomly, and its consecutive frame is composed of randomly moving both the foreground objects and the background image by simple affine motions. This allows us to express diverse motions, easily control the motion distribution, and compute occlusion masks.
**Background Processing.** We first sample an image from an image dataset for background and resize them to \(712\times 584\). We regard this frame as the target frame (Frame B in Fig. 2). Then, we generate a flow map using random affine coefficients, including translation, rotation, and scaling (zooming), and inverse-warp the target frame to obtain the reference frame (Frame A in Fig. 2). We sample the translation coefficient of background from the range \([-20,20]\) pixels for each direction, and with a \(30\%\) chance, the translation coefficient is reset to zero. The rotation and scale coefficients are sampled from \([-\frac{\pi}{100},\frac{\pi}{100}]\) and \([0.85,1.15]\), respectively. From the sampled affine matrix, we obtain a ground-truth flow map by subtracting the coordinates of two background image pairs as \(\mathbf{f}=\mathbf{A}\mathbf{x}-\mathbf{x}\), where \(\mathbf{f}\) denotes each flow vector of a pixel at the reference frame, \(\mathbf{A}\) the affine transform, and \(\mathbf{x}\) a homogeneous coordinate \([x,y,1]\) of each pixel on the reference frame. We sample 7,849 background images from MS COCO [25].
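A minimal sketch of this background flow construction, following \(\mathbf{f}=\mathbf{A}\mathbf{x}-\mathbf{x}\) and the coefficient ranges stated above, is given below; the helper names are illustrative, not the paper's code.

```python
import numpy as np

def random_background_affine(rng):
    tx, ty = rng.uniform(-20, 20, size=2)
    if rng.random() < 0.3:                      # 30% chance of zero translation
        tx, ty = 0.0, 0.0
    theta = rng.uniform(-np.pi / 100, np.pi / 100)
    s = rng.uniform(0.85, 1.15)
    c, si = np.cos(theta), np.sin(theta)
    # scale * rotation followed by translation, as a 3x3 homogeneous matrix
    return np.array([[s * c, -s * si, tx],
                     [s * si,  s * c,  ty],
                     [0.0,     0.0,    1.0]])

def affine_flow(A, height, width):
    ys, xs = np.mgrid[0:height, 0:width]
    ones = np.ones_like(xs)
    coords = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous coords
    warped = A @ coords
    flow = (warped[:2] - coords[:2]).T.reshape(height, width, 2)  # f = A x - x
    return flow.astype(np.float32)

rng = np.random.default_rng(0)
flow = affine_flow(random_background_affine(rng), height=584, width=712)
```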
**Foreground Processing.** For synthesizing foreground objects' motion, we use segmented objects from a semantic image segmentation dataset. For the target frame, we first sample the number of foreground objects to be composited in \(\{7,8,\cdots,14,15\}\). Then, we randomly place these objects on the target one and apply inverse-warping to obtain the warped objects on the reference frame using optical flow maps obtained from random affine transformations. The sampling ranges of rotation and scale coefficients are the same as those of the background case. The distribution of the translation coefficient is designed to follow the exponential distribution as \(\frac{1}{Z}\exp(-f/T)\), where the temperature \(T\) is empirically set to \(20\), and \(Z\) the normalization term. The distribution is inspired by natural statistics of optical flow [26], where
Fig. 1: **The prior arts of synthetic data and our proposed dataset.** Sampled frames and its corresponding flow maps are visualized. While being diverse in motion, (a,b) include many thin object parts and unrealistically simple reflectance. (c) includes semantically coherent flow map but the diversity of the motion is limited by a global camera motion. Our method, in contrast, includes both controllable and diverse motion characteristics with semantically coherent object shapes and rich texture.
Fig. 2: **Schematic overview of our data generation pipeline and occlusion mask estimation.** (a) Given a background image and foreground objects, we sample affine flow coefficients and generate a consecutive frame. These coefficients can be used to extract exact ground-truth optical flow map. (b) We describe the process of estimating the occlusion mask (\(\text{M}_{r,i}\)) for the first layer (\(i=0\)), which is the background. This process is recursively conducted in ascending order until the end of the layers.
the statistics of motions tend to follow a Laplacian distribution. We limit the distribution range to \([0,150]\) by resampling if the magnitude is over \(150\) pixels. The translation direction of foregrounds is sampled uniformly at random. We use \(2913\) images from PASCAL VOC [24], and from this set, we extract \(5543\) preprocessed segments as foreground objects.
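The foreground translation sampling described above can be sketched as follows, assuming NumPy; the resampling bound and temperature follow the text, while the function name is illustrative.

```python
import numpy as np

def sample_foreground_translation(rng, temperature=20.0, max_mag=150.0):
    """Magnitude ~ (1/Z) exp(-f / T) with T = 20, resampled if above 150 px; uniform direction."""
    mag = rng.exponential(scale=temperature)
    while mag > max_mag:                 # resample instead of clipping
        mag = rng.exponential(scale=temperature)
    angle = rng.uniform(0.0, 2.0 * np.pi)
    return mag * np.cos(angle), mag * np.sin(angle)

rng = np.random.default_rng(0)
tx, ty = sample_foreground_translation(rng)
```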
**Composition.** We sequentially paste foregrounds on the background to generate a single pair of consecutive frames. We take particular care of the regions near object boundaries when compositing optical flow maps. Directly alpha-blending a stack of foreground flow maps with a background flow map yields inconsistent flows near object boundaries. To deal with this, we paste the flow map of each foreground only where the alpha channel value is at least \(0.4\). After composition, we apply a center crop to the composited images to obtain outputs of size \(512\times 384\), the same as FlyingChairs [6]. Our data generation speed is faster than AutoFlow [22], which generates a learning-based dataset for given target data, and about 500 times faster than dCOCO [7], as shown in Table I. Our fast data generation speed is beneficial for analyzing the characteristics required to train accurate optical flow networks.
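A minimal sketch of this flow-map composition and cropping step is shown below; the array layout (H x W x 2 flow maps, H x W alpha maps) and the helper names are assumptions made for illustration.

```python
import numpy as np

def composite_flows(bg_flow, fg_flows, fg_alphas, alpha_thresh=0.4):
    """Paste each foreground's flow only where its alpha is at least the threshold."""
    flow = bg_flow.copy()                       # H x W x 2
    for fg_flow, alpha in zip(fg_flows, fg_alphas):
        mask = alpha >= alpha_thresh            # H x W boolean paste mask
        flow[mask] = fg_flow[mask]
    return flow

def center_crop(arr, out_w=512, out_h=384):
    h, w = arr.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return arr[top:top + out_h, left:left + out_w]
```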
**Occlusion Mask.** Similar to the prior arts [8, 9, 19, 11], our data generation method exports occlusion masks as well. Predicting motions of regions being occluded is an intractable problem and requires uncertain forecasting, which can act as detrimental outliers during training. Thus, prior arts [27, 28] estimate occlusion masks as well to encourage reliable optical flow estimation. Unlike prior arts, we utilize occlusion masks in a supervised method by suppressing gradients on occluded regions in a supervised optical flow. The gradient suppression with occlusion masks serves as a powerful initial state in the curriculum learning protocol, which will be discussed in the experimental section. To obtain occlusion masks, given the alpha maps of each layer including foregrounds (\(i\geq 1\)) and background (\(i=0\)) in order, we binarize the alpha map by thresholding with \(0.4\), denoting \(\alpha_{\{r,t\},i}\) for the \(i\)-th object layer in the reference and target frames, respectively. The non-visible regions \(\mathrm{V}_{\{r,t\},i}\) of the \(i\)-th layer in each frame are computed by \(\mathrm{V}_{\{r,t\},i}=\alpha_{\{r,t\},i}\cap(\cup_{k=i+1}^{L}\alpha_{\{r,t\},k})\). Using the \(i\)-th layer flow map \(\mathbf{f}_{i}\), we inverse-warp the \(\mathrm{V}_{t,i}\) to the reference frame as \(\mathrm{V}_{t\to r,i}=\mathbf{f}_{i}\circ\mathrm{V}_{t,i}\) and binarize it by \(0.4\) again, where \(\circ\) denotes the warping operation. Then, because the occluded regions are only visible in the reference frame, we can find such an occlusion mask of each layer by \(\mathrm{M}_{r,i}=\max(\mathrm{V}_{t\to r,i}-\mathrm{V}_{r,i},0)\). The compromised occlusion mask \(\mathrm{M}_{r}\) is obtained by \(\mathrm{M}_{r}=\cup_{i=0}^{L}\mathrm{M}_{r,i}\).
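The sketch below mirrors these equations for binarized alpha maps. The `inverse_warp` helper (backward warping a map from the target to the reference frame with the layer flow) is assumed to be provided, and the array layout is an assumption made for illustration.

```python
import numpy as np

def occlusion_mask(alphas_ref, alphas_tgt, layer_flows, inverse_warp, thresh=0.4):
    """alphas_*: lists of H x W alpha maps, layer 0 = background; layer_flows: per-layer flow maps."""
    num_layers = len(alphas_ref)
    bin_ref = [a >= thresh for a in alphas_ref]
    bin_tgt = [a >= thresh for a in alphas_tgt]
    mask = np.zeros_like(bin_ref[0], dtype=bool)
    for i in range(num_layers):
        if i + 1 < num_layers:
            above_ref = np.any(np.stack(bin_ref[i + 1:]), axis=0)
            above_tgt = np.any(np.stack(bin_tgt[i + 1:]), axis=0)
        else:
            above_ref = above_tgt = np.zeros_like(bin_ref[i])
        v_ref = bin_ref[i] & above_ref                      # V_{r,i}: covered in the reference frame
        v_tgt = bin_tgt[i] & above_tgt                      # V_{t,i}: covered in the target frame
        v_tgt_to_ref = inverse_warp(v_tgt.astype(np.float32), layer_flows[i]) >= thresh
        mask |= v_tgt_to_ref & ~v_ref                       # M_{r,i} = max(V_{t->r,i} - V_{r,i}, 0)
    return mask                                             # M_r = union of M_{r,i} over layers
```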
## IV Experiments
In this section, we compare the performance of respective optical flow networks by training on our datasets with/without the occlusion mask and competing datasets. Utilizing the simple data generation recipe, we also analyze the effects of characteristics in optical flow datasets.
**Optical Flow Network.** We use RAFT [5] as a reference model to evaluate the benefits of our synthetic dataset in generalization and fine-tuning setups. RAFT is a representative supervised model that is widely used to estimate the effectiveness of optical flow datasets [22, 7]. We follow the same hyper-parameters suggested by the implementation of Teed _et al._[5], and the experiment setup by Aleotti _et al._[7] that shows one-/multi-stage training results. For our synthetic datasets, in the initial training stage, we train RAFT for \(100\)k iterations with the batch size1 of \(10\), image crops of size \(496\times 368\), the learning rate \(4\times 10^{-4}\), and the weight decay of \(1\times 10^{-4}\).
Footnote 1: The authors of [5, 7] use the batch size of \(12\) and \(6\) for training FlyingChairs and dCOCO, respectively.
For multi-stage training with FlyingThings3D [8], from the RAFT networks pre-trained on our datasets, we further train with the frames_cleanpass split of FlyingThings3D that includes 40k consecutive frame pairs. We train the model for \(100\)k iterations with a batch size of \(6\), image crops of size \(720\times 400\), the learning rate of \(1.25\times 10^{-4}\), and the weight decay of \(1\times 10^{-4}\). These hyper-parameters are the same with the _Things training stage_ reported in [5].
**Competing Datasets for Training.** We choose FlyingChairs (Ch) [6] and dCOCO [7] as the competing datasets, and leverage the RAFT networks pre-trained on each dataset provided by the authors and dCOCO. For multi-stage training models, from the networks pre-trained on ours, we further train with FlyingThings3D (Th) [8] in sequence to compare with the RAFT model trained with FlyingChairs followed by FlyingThings3D (Ch\(\rightarrow\)Th).
**Test Datasets.** We evaluate on Sintel [9] and KITTI 2015 [11]. These datasets contain crucial real-world effects, such as occlusions, illumination changes, motion blur, and camera noise, making them challenging and widely used standard benchmarks for evaluating optical flow models. We report the performance of the model trained with the base datasets without fine-tuning on Sintel or KITTI, called _generalization_ and that of the model fine-tuned on the training set of Sintel or KITTI, called _fine-tuning_.
**Evaluation.** Following convention, we report the average End-Point Error (EPE) and the percentage of flow errors that exceed \(3\) pixels and \(5\)% of the true flow magnitude (Fl). We further evaluate the percentage of pixels with an absolute error smaller than or equal to \(1\) (\(\leq\)1). Bold is used to highlight the best result among the methods.
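For concreteness, a minimal NumPy sketch of these two metrics is shown below; the optional validity mask is an assumption for sparse ground truth such as KITTI and is not taken from the paper's code.

```python
import numpy as np

def epe_and_fl(pred, gt, valid=None):
    """pred, gt: H x W x 2 flow maps; returns average EPE and Fl outlier percentage."""
    err = np.linalg.norm(pred - gt, axis=-1)           # per-pixel end-point error
    mag = np.linalg.norm(gt, axis=-1)                  # ground-truth flow magnitude
    if valid is None:
        valid = np.ones_like(err, dtype=bool)
    epe = err[valid].mean()
    outlier = (err > 3.0) & (err > 0.05 * mag)         # > 3 px and > 5% of true magnitude
    fl = 100.0 * outlier[valid].mean()
    return epe, fl
```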
| | Dataset | Number of foregrounds | Generation time for 100 pairs |
| --- | --- | --- | --- |
| (A) | AutoFlow [22] | - | 336 days |
| (B) | dCOCO [7] | - | 5593.2 s |
| (C) | Ours | 2 | 6.86 s |
| (D) | Ours | 7 | 9.49 s |
| (E) | Ours | 15 | 12.98 s |

TABLE I: **Data generation speed.** We evaluate the speed for generating 100 pairs of synthetic data with a single NVIDIA Titan RTX GPU: (A) AutoFlow, (B) dCOCO, and (C, D, E) ours with different numbers of foregrounds. In our final dataset, the number of foregrounds is sampled between 7 and 15.
### _Comparison with Other Synthetic Datasets_
We compare the generalization and fine-tuning performance of the networks trained on our dataset and other competing datasets [6, 8, 7]. For fair comparisons, we train the network on our dataset (denoted as Ours) with \(20\)k image pairs that include translation, rotation, and zooming. We also evaluate our dataset with occlusion masks \(\langle\mathrm{O}\rangle\) (denoted as Ours+O).
**Generalization.** The left part of Table II summarizes the generalization test. Among the models trained on a single dataset, our datasets (C, D) show the best performance on Sintel. However, dCOCO (B) shows better performance on the KITTIs. We further evaluate the performance on two other benchmarks as shown in Table III, and observe that dCOCO achieves better performance on Virtual KITTI [29], which is a synthetic dataset. On the other hand, ours achieves more accurate optical flow estimation on a real dataset, _i.e._, HD1K [21]. From these results, we assume that dCOCO, which uses a depth-aware data generation approach with real images, is effective in autonomous driving scenarios, and that similar motion distributions and textures between the synthetic and target datasets are key factors for generalization. We also pre-train the network on 2D motion datasets, such as FlyingChairs [6] and our datasets, and sequentially train on FlyingThings3D [8]. Compared to (E), which uses FlyingChairs at the initial stage, (F, G) show better generalization performance on the KITTIs and the Sintel Clean pass. This shows that the choice of the initial training stage significantly affects the final performance.
**Fine-tuning.** We fine-tune the networks of the left part of Table II on Sintel or KITTIs, and the results are reported in the right part of the table. Overall, our datasets show favorable performance. Compared to (E) first pre-trained on FlyingChairs, (F, G) show better performance. (G) especially achieves the lowest Fl and noticeable performance improvement in KITTI 2015. These results suggest that utilizing occlusion masks as a gradient suppression tool is effective in fine-tuning real-world datasets, _i.e._, KITTI 2012 and KITTI 2015. We observe a consistent tendency with the online benchmark results as follows.
**Online Benchmarks.** We follow the training procedure described in RAFT [5] to fine-tune the model pre-trained by our dataset and test on the public benchmarks of Sintel and KITTI15. As summarized in Table IV, using our dataset for the initial curriculum outperforms the original RAFT on both public benchmarks. On the KITTI15 test set, the network pre-trained on our synthetic dataset with occlusion masks shows better performance compared to RAFT. In the Sintel test dataset, we observe that the performance improvement in Sintel Clean and Final passes with our dataset. With and without the _warm-start_ initialization, the network trained with our training schedule also achieves better results in both passes. From these results, we assume that learning the simplest characteristics for estimating optical flow at the initial learning schedule without occlusion estimation helps the
| | Model | Dataset | Sintel C. EPE | Sintel F. EPE | KITTI12 EPE | KITTI12 Fl | KITTI15 EPE | KITTI15 Fl |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (A) | FlowNetC | Ch | 5.17 | 6.43 | 11.82 | 57.67 | 20.65 | 62.91 |
| (B) | FlowNetC | Ours | **4.48** | **6.07** | **10.64** | **52.72** | **18.53** | **58.15** |
| (C) | PWC-Net | Ch | 3.25 | 4.36 | 6.27 | **27.18** | 14.22 | 40.38 |
| (D) | PWC-Net | Ours | **2.94** | **4.29** | **5.26** | 27.28 | **10.61** | **38.63** |
| (E) | RAFT | Ours | 1.98 | 3.85 | 3.63 | 20.00 | 7.17 | 29.24 |

TABLE V: **Generalization results on other backbone networks.** We evaluate the generalization performance of FlowNetC and PWC-Net trained on different datasets: (A, C) FlyingChairs, and (B, D) our dataset. (B, D) achieve better performance compared to (A, C). (E) is RAFT trained on our dataset as a reference.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{Generalization test} & \multicolumn{6}{c}{Finetuning test} \\ \cline{3-14} & & & \multicolumn{3}{c}{Sintel C.} & \multicolumn{3}{c}{Sintel F.} & \multicolumn{3}{c}{KITTI12} & \multicolumn{3}{c}{KITTI15} \\ \cline{3-14} & Dataset & Motions & EPE & \(\leq\)1 & EPE & \(\leq\)1 & EPE & Fl & EPE & Fl & EPE & \(\leq\)1 & EPE & \(\leq\)1 & EPE & Fl & EPE & Fl \\ \hline (A) & Ch & 2D & 2.28 & 0.79 & 4.51 & 0.72 & 4.66 & 30.54 & 9.85 & 37.56 & 0.89 & 0.93 & 1.49 & 0.89 & 1.39 & 4.69 & 2.36 & 8.43 \\ (B) & dCOCO & 3D & 2.62 & 0.45 & 3.90 & 0.39 & **1.82** & **6.62** & **3.81** & **12.43** & 1.08 & 0.92 & 1.84 & 0.88 & 1.37 & 4.76 & 2.76 & 9.15 \\ \hline (C) & Ours & 2D & **1.98** & **0.86** & 3.85 & **0.82** & 3.63 & 20.00 & 7.17 & 29.24 & **0.85** & **0.94** & 1.40 & **0.89** & **1.33** & 4.37 & 2.70 & 8.19 \\ (D) & Ours+O & 2D & 2.02 & **0.86** & **3.67** & **0.82** & 3.66 & 19.37 & 7.88 & 28.41 & 0.89 & 0.93 & **1.39** & **0.89** & 1.35 & **4.36** & **2.15** & **7.60** \\ \hline (E) & Ch \(\rightarrow\) Th & 2D+3D & 1.47 & 0.90 & **2.79** & 0.85 & 2.15 & 9.30 & 5.00 & 17.44 & 0.84 & 0.93 & 1.31 & 0.89 & 1.31 & 4.25 & 2.28 & 7.96 \\ \hline (F) & Ours\(\rightarrow\) Th & 2D+3D & **1.29** & **0.91** & 2.81 & 0.85 & 20.49 & **0.47** & **10.72** & **0.83** & **0.94** & **1.29** & **0.90** & **1.32** & 4.24 & 2.10 & 7.52 \\ (G) & Ours+O\(\rightarrow\) Th & 2D+3D & **1.29** & **0.91** & 2.86 & **0.86** & **2.03** & **0.84** & 4.84 & **16.38** & 0.86 & **0.94** & 1.31 & **0.90** & **1.28** & **4.11** & **2.02** & **7.34** \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Comparison with other datasets.** We evaluate the generalization and fine-tuning test of the RAFT networks trained on training datasets: (A) FlyingChairs, (B) dCOCO, (C) ours, (D) ours with occlusion mask, (E) FlyingThings3D, (F) ours and FlyingThings3D, and (G) ours with occlusion mask and FlyingThings3D. (B\(\uparrow\)) is obtained from the original paper of [7].
| | Training method | Sintel C. EPE (w/ warm-start) | Sintel F. EPE (w/ warm-start) | Sintel C. EPE (w/o warm-start) | Sintel F. EPE (w/o warm-start) | KITTI15 Fl |
| --- | --- | --- | --- | --- | --- | --- |
| (A) | RAFT | 1.61 | 2.86 | 1.94 | 3.18 | 5.10 |
| (B) | RAFT-Ours+O | **1.59** | **2.83** | **1.81** | **3.10** | **4.91** |

TABLE IV: **Test results on Sintel and KITTI 2015.** We evaluate the test performance of RAFT and RAFT-Ours+O. Using our synthetic dataset with occlusion masks as the initial learning schedule achieves higher performance on the Sintel and KITTI 2015 test sets.
network perform better.
**Other Backbone Networks.** To evaluate the effectiveness of our dataset beyond RAFT, we selected two more optical flow models: FlowNet [6] and PWC-Net [2]. We use the re-implementation of FlowNet 2 and PWC-Net 3. Table V shows that each network trained on our dataset outperforms the one trained on FlyingChairs [6]. We also include the previous experiment with RAFT in (E) as a reference. These results prove that the simple properties of our dataset are effective not only for RAFT [5], but also for general optical flow networks.
Footnote 2: [https://github.com/ClementPinard/FlowNetPytorch](https://github.com/ClementPinard/FlowNetPytorch)
Footnote 3: [https://github.com/visinf/irr](https://github.com/visinf/irr)
### _Ablation Study_
By virtue of the fast generation speed from the simple recipes and the controllability of our dataset, we can conduct a series of ablation studies to determine the critical factors of our dataset which affect the network performance the most.
**Foreground Translation Distributions.** We evaluate the effect of the translational motion distribution of foregrounds with 20k image pairs. We use three different distributions to sample the magnitudes of translation. Figure 3 shows the histograms of each dataset distribution and summarizes the generalization results achieved by the RAFT network. (A) is a uniform distribution, (B) is the Gaussian distribution suggested by FlowNet [6], and (C) is the proposed distribution that follows natural statistics [30].
As shown in the histograms, the peaks are near zero (by a factor of \(10^{9}\)) due to the background translation. Thus, we focus on the tails of the distributions, which typically arise from foregrounds. (A) includes excessively large motions, which are unrealistic in real-world scenarios and eventually degrade performance. Compared with (B), (C) outperforms it on the overall benchmark metrics. The main difference between the two is the density of the focused region in the histogram, where (C) decays faster than (B). From this, we observe that slight differences in the tails of the translation distributions affect the performance of the model significantly; thus, we take special care in designing a balanced motion distribution. We choose (C) as the distribution of translation for the following experiments.
**Motion Complexity.** We measure the effect of each motion type in training. Starting from the dataset having translation \(\langle\mathrm{T}\rangle\) only, we sequentially apply rotation \(\langle\mathrm{R}\rangle\) and zooming \(\langle\mathrm{Z}\rangle\). As shown in (B) of Table VI, adding rotation transformations to (A) lowers the EPE on Sintel while increasing it on KITTI. We note that KITTI consists of driving scenes, where rotation motions are rare. On the other hand, Sintel is a cinematic dataset including rotation motions caused by objects and cameras. This implies that adding rotation might confuse the network on test datasets that contain few rotation motions. Interestingly, both (A) and (B) show performance comparable to the network trained on FlyingChairs (E), which contains three motion types, T+R+Z. We attribute these results to the different translation distributions and abundant textures. Finally, by adding zooming (C), the generalization performance outperforms (A), (B), and (E) in all cases. We
observe that zooming mimics backward and forward object or camera motions, which frequently happen in both benchmarks. Hence, this may hint that the effect of adding rotation motion depends on the characteristics of the test datasets, while zooming acts as a more important factor for generalization.
The networks trained on our datasets have not seen any 3D motion during training; thus, in practice, we can further fine-tune on another dataset that includes 3D motions. To assess the ability of our datasets as pre-training datasets, we further fine-tune the aforementioned networks on the benchmarks, KITTI 2015 or Sintel. We follow the same fine-tuning protocol suggested by Aleotti _et al._[7] on the KITTI datasets. The fine-tuning results in the right part of Table VI show a consistent tendency with the above generalization study. While the improvement is marginal due to the high-accuracy regime, the best performance is achieved when zooming is included in pre-training. This suggests that zooming motion is challenging for the network to learn during fine-tuning unless it is seen in pre-training, in the curriculum learning sense [31]. We conclude that zooming is the most crucial factor among the motion types of the synthetic dataset, improving performance both in generalization and fine-tuning.
**Effects of Occlusion Mask.** The prior works [32, 33, 34] show the effectiveness of occlusion masks \(\langle\text{O}\rangle\). Unlike these prior arts, we propose an intuitive and effective method that utilizes the easily obtainable occlusion masks by suppressing the gradients at the regions to be occluded in a supervised manner. In the left part of Table VI, the generalization results with the occlusion mask (D) show EPE comparable to (C) on the benchmarks but lower Fl on the KITTI datasets. To further evaluate, we fine-tune the network (D) from the left part of Table VI on the benchmarks and show its results in the right part of the table. The results also show lower Fl on the KITTI dataset. Besides, (D) outperforms (C) on both metrics in fine-tuning on KITTI 2015, which contains the most complicated real-world scenes. This shows that focusing on the areas that can be clearly learned from synthetic data helps networks learn complex effects, e.g., occlusion handling, real-world effects, and 3D motion in complicated scenes. We observe a consistent tendency in the results of multi-stage training and the public benchmarks, as shown in Table IV. This phenomenon can be regarded as curriculum learning, where gradually learning from simple concepts to complex ones helps the network perform better. Applying the occlusion mask is an intuitive method for curriculum learning, and we demonstrated its high effectiveness in improving the final performance.
**Abundant Textures.** We analyze the effect of the abundant textures of foregrounds in training. Considering that the average number of foregrounds in the FlyingChairs [6] is 5, we compared the case when the number of foregrounds is 4 and 8. We also apply a Gaussian filter whose kernel size is 5 to the foregrounds for simulating the lack of high-frequency textures of chairs used in FlyingChairs. Table VII shows that more foregrounds with high-frequency textures lead to overall improvement. These results hint that abundant textures are another important factor in generating synthetic data.
## V Conclusion
We propose an easily controllable synthetic dataset recipe by cut-and-paste, which enables conducting comprehensive studies. Through the experiments, we reveal the simple yet crucial factors for generating synthetic datasets and learning curriculums. We introduce a supervised occlusion mask method, which stops the gradient at the regions to be occluded. Combining these findings, we observe that the networks trained on our datasets achieve favorable generalization performance, and our datasets with occlusion masks serve as a powerful initial curriculum, which achieves superior performance in fine-tuning and online benchmarks.
|
2305.16289 | Diversify Your Vision Datasets with Automatic Diffusion-Based
Augmentation | Many fine-grained classification tasks, like rare animal identification, have
limited training data and consequently classifiers trained on these datasets
often fail to generalize to variations in the domain like changes in weather or
location. As such, we explore how natural language descriptions of the domains
seen in training data can be used with large vision models trained on diverse
pretraining datasets to generate useful variations of the training data. We
introduce ALIA (Automated Language-guided Image Augmentation), a method which
utilizes large vision and language models to automatically generate natural
language descriptions of a dataset's domains and augment the training data via
language-guided image editing. To maintain data integrity, a model trained on
the original dataset filters out minimal image edits and those which corrupt
class-relevant information. The resulting dataset is visually consistent with
the original training data and offers significantly enhanced diversity. We show
that ALIA is able to surpasses traditional data augmentation and text-to-image
generated data on fine-grained classification tasks, including cases of domain
generalization and contextual bias. Code is available at
https://github.com/lisadunlap/ALIA. | Lisa Dunlap, Alyssa Umino, Han Zhang, Jiezhi Yang, Joseph E. Gonzalez, Trevor Darrell | 2023-05-25T17:43:05Z | http://arxiv.org/abs/2305.16289v2 | # Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation
###### Abstract
Many fine-grained classification tasks, like rare animal identification, have limited training data and consequently classifiers trained on these datasets often fail to generalize to variations in the domain like changes in weather or location. As such, we explore how natural language descriptions of the domains seen in training data can be used with large vision models trained on diverse pretraining datasets to generate useful variations of the training data. We introduce ALIA (Automated Language-guided Image Augmentation), a method which utilizes large vision and language models to automatically generate natural language descriptions of a dataset's domains and augment the training data via language-guided image editing. To maintain data integrity, a model trained on the original dataset filters out minimal image edits and those which corrupt class-relevant information. The resulting dataset is visually consistent with the original training data and offers significantly enhanced diversity. On fine-grained and cluttered datasets for classification and detection, ALIA surpasses traditional data augmentation and text-to-image generated data by up to 15%, often even outperforming equivalent additions of real data. Code is available at [https://github.com/lisadunlap/ALIA](https://github.com/lisadunlap/ALIA).
## 1 Introduction
While modern pretraining data are incredibly diverse, datasets for specialized tasks such as fine-grained animal identification are often much less so, resulting in trained classifiers that fail when encountering new domains such as a change in weather or location. An effective method to address this is to add more training data from the test domain [37], but obtaining and labeling this additional data is often costly and it can be challenging to determine which domains to gather data from.
To address this, recent works have utilized image generation models trained on large pretraining datasets to supplement incomplete training sets by generating diverse examples from generative models fine-tuned on the training set [8; 14; 1]. Furthermore, previous work in language-guided data augmentation with vision and language models relies on user-supplied domain descriptions [7] or descriptions generated from word-to-sentence models [9].
Although these methods can be effective in increasing image diversity, they either require finetuning the image generation model, which can be prohibitively expensive, or generating images which have no visual grounding in the original training data. While the latter may be suitable for common benchmarks, such as ImageNet [6], it proves to be much less effective when we move to a specialized setting where generative models like Stable Diffusion [31] cannot recreate images which resemble the training data from text alone, as shown in Figure 1. In our work, we focus on how to utilize pretrained vision and language models for image captioning and generation as a _translation_ layer between task-specific image data and task-agnostic natural language descriptions of domains. Since these high-level domain descriptions are well-represented by image generation models like Stable Diffusion, we can use them to perform _language-guided image editing_ of the specialized training data.
This produces images which are visually consistent with the training data, vary the task-agnostic domains, and preserve the task-relevant information present in the original image.
Specifically, our method ALIA (Automated Language-guided Image Augmentation) first generates captions for each image, summarizes the captions into a short list of domain descriptions with a large language model (LLM), and then uses these descriptions to generate edits of the training data with Stable Diffusion. To ensure data quality, we use a classifier trained on our original training set to remove images which (1) do not change the domain of the original image as desired or (2) corrupt task-relevant information. After filtration, we are left with an edited dataset visually consistent with the original data and representing all domains seen in testing (Figure 1). ALIA does not require finetuning the image captioning or image generation model, nor does it require user-supplied prompts.
We evaluate on fine-grained bird classification (CUB [41]), domain generalization (iWildCam [17]), and contextual bias (Planes [22]) datasets. We show that the addition of ALIA generated data outperforms traditional data augmentation techniques and text-to-image generated data by up to 15%, even beating the performance of adding in real data on iWildCam. Furthermore, we investigate how our domain descriptions produce more useful edits than user-provided prompts, and examine the effect of filtering, and the choice of image editing techniques (Section 5.4).
## 2 Related Works
**Supplementing Training Data with Generative Models.** Using generative models for data augmentation is a well-explored area of research, specifically in medicine [8; 33], domain adaptation [12] and bias mitigation [34]. These methods train or use a pretrained GAN to generate images from the desired distribution. Furthermore, GANs have been used to supervise dense visual alignment [29] and generate pixel-level annotations from a small amount of labels [45; 18; 39]. Recently, several works [9; 35; 38] have shown diffusion models' ability to generate training data in zero or few shot settings as well as generate hard training examples [13]. While these works do show the promise of diffusion-generated data, models trained on diffusion-generated data obtain significantly worse accuracy than models trained on real datasets unless finetuned for that specific task [1; 38]. In contrast, we use diffusion models to do _image editing_ with text rather than generating images from text alone, resulting in augmentations that closely resemble the training data without finetuning.
Figure 1: **Overview.** Example augmentations using text-to-images generation, traditional data augmentation methods, and our method, Automated Language-guided Image Editing (ALIA) on CUB [41]. Images generated by ALIA retain task-relevant information while providing more domain diversity as specified by the prompts. Within the prompts, \(\{\ \}\) indicates the specific class name.
**Traditional Data Augmentation.** Traditionally, data augmentation techniques involve random flipping, cropping, color shifting, etc. to manipulate the original images and create new versions of these images [36]. More recently proposed mixup-based data augmentations aim to make the augmented data more diverse by cutting and pasting patches of two input images [42], applying convex combinations [43], and adaptively learning a sample mixing policy [10; 20; 15; 5]. These methods often result in images that look unnatural as shown in Figure 1, while our method aims to create augmentations which are visually consistent with the original training data.
**Language-guided Data Augmentation for Classification.** Recent works [7; 9; 30] have explored how natural language descriptions of the domains and classes seen during testing can be used to improve performance and generalization in a few-shot or domain adaptation setting. These methods typically employ text-to-image generation with prompts sourced from language models [9], or augment training images using user-provided descriptions of both training and unseen test domains in a shared vision and language space [7]. Similar to our work, these techniques use the high-level domain knowledge of large models to diversify image data with language, but crucially they either rely on user-provided domain descriptions or generate descriptions and augmentations that are not grounded in any real image data. In contrast, the foundation of our work is to ground both the domain descriptions and the generated training data in the given training set.
## 3 ALIA: Automated Language-guided Image Editing
Given a labeled training dataset, we aim to augment the dataset with images that are edited to improve the representation of various _domains_. We define _domain_ to be any aspect of an image that is not intended to be used for classification (e.g. location, weather, time of day). The key insight of our method is to utilize image captioning and image generation models trained on large amounts of pretraining data to summarize the domains in the training set and use those descriptions to augment training data using text-conditioned image editing.
Our method consists of 3 stages: generating domain descriptions, generating useful image edits, and filtering out poor edits. An overview of our method is shown in Figure 2.
### Generating Domain Descriptions
A key idea of our method is to use captioning and language models trained on large amounts of pretraining data to summarize the potential domains. We assume knowledge of the superclass that
Figure 2: **ALIA. Given a specialized dataset, we caption all images in our dataset using a pretrained captioning model, and feed these captions into a large language model to summarize them into a small (<10) set of natural language descriptions. Utilizing these descriptions, we perform text-guided image augmentation via a pretrained text-to-image model, thereby generating training images that align with the described settings. Finally, we apply two filtering techniques: a CLIP-based semantic filter to eliminate obvious edit failures, and a confidence-based filter that removes more subtle failures by filtering edits confidently predicted by a classifier trained on the original dataset.**
encapsulates all classes, such as "bird" for CUB or "animal" for iWildCam and a prefix to guide the format of the desired caption form, for example "a photo of a bird...". Once these prompts are generated, we found that we often achieve higher quality edits by adding the class name into the prompts after the fact (e.g. "a photo of a Scott Oriole bird...").
To generate a concise list of domain descriptions, we first caption each image in the dataset using a pretrained captioning model. This produces a comprehensive set of captions, which may highlight potential domains seen at test time. Note that these captions do not need to accurately describe the task-specific information, such as the species of the bird, as their purpose is to provide a broad overview of the context, such as the environment the bird is in or the actions of the bird. Additional image data of possible domains seen in testing but that don't contain any task-relevant information can also be used in this step. For example, when performing animal classification, one can add in background images of different locations that may be seen in deployment.
We assume that the amount of training data is not small (i.e., not fewer than 100 samples). Therefore, we use an LLM to summarize these captions into a list of domains which are agnostic of the class. Due to constraints on context length, we randomly sample 200 unique captions and use the following prompt:
"I have a set of image captions that I want to summarize into objective descriptions that describe the scenes, actions, camera pose, zoom, and other image qualities present. My captions are [CAPTIONS]. I want the output to be a handful of captions that describe a unique setting, of the form [PREFIX]"
We then ask a refinement question to ensure each caption is of only one setting and agnostic of the class with the following prompt:
"Can you modify your response so each caption is agnostic of the type of [SUPERCLASS]. Please output less than 10 captions which cover the largest breadth of concepts."
The end result is a list of less than 10 domain descriptions that often include domains seen in the test data (e.g. "A camera trap photo of an animal near a lake."). These descriptions serve as the foundation for the subsequent text-conditioned image editing stage of our method. We use BLIP [19] captioning model and GPT-4 [26] for summarizing the captions. A complete list of prompts for each dataset is given in Section 5.
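A compact sketch of this stage is shown below, assuming the standard HuggingFace BLIP checkpoint; the prefix shown is the CUB example from the text, and the call to the summarizing LLM is left abstract since any chat-completion API can be substituted.

```python
import random
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption_image(path, prefix="a photo of a bird"):
    """Caption one training image, with the prefix guiding the desired caption form."""
    image = Image.open(path).convert("RGB")
    inputs = processor(image, text=prefix, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=40)
    return processor.decode(out[0], skip_special_tokens=True)

def summarization_prompt(captions, prefix="a photo of a bird"):
    """Build the caption-summarization prompt for the LLM from a 200-caption sample."""
    sample = random.sample(captions, min(200, len(captions)))   # respect context-length limits
    return (
        "I have a set of image captions that I want to summarize into objective "
        "descriptions that describe the scenes, actions, camera pose, zoom, and other "
        f"image qualities present. My captions are {sample}. I want the output to be a "
        f"handful of captions that describe a unique setting, of the form {prefix}"
    )
```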
### Editing Images with Language Guidance
Once the domain descriptions are generated, they are used to condition the edits of original images in the training set. In our experiments, we employ two editing techniques based on Stable Diffusion; however, ALIA can be used with any text-conditioned image editing method. The two techniques deployed for our experiments are as follows:
_Image to Image with Text Guidance (Img2Img) [31; 2; 23]_: This technique first uses an image encoder to translate a provided image into a latent representation. Then, leveraging a diffusion model, this latent representation is progressively modified through a series of transformations, conditioned on the user-provided text. Finally, the modified latent representation is decoded to generate an augmented image that incorporates the desired modifications specified in the prompt.
_Instruct Pix2Pix [3]_: Given an edit instruction (e.g. "put the animals in the forest") and an image, this method generates a modified image adhering to the specified instruction. This is accomplished by training a conditional diffusion model on a dataset composed of paired images and edit instructions.
Among the many existing image editing techniques [24; 11; 4; 44; 27], we selected the two above for their ease of use and quality of outputs. Notably, the methods highlighted here share a common goal: to generate coherent, visually-grounded augmentations conditioned on the natural language descriptions extracted from the dataset.
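The sketch below shows how an original training image could be edited with either technique via the diffusers library; the checkpoints, file name, and strength/guidance values are plausible placeholders rather than the exact per-dataset settings, which are chosen by the parameter sweep described in the implementation details.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInstructPix2PixPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
source = Image.open("scott_oriole.jpg").convert("RGB").resize((512, 512))  # placeholder file

# (1) Img2Img: condition the denoising of the encoded image on a domain description.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
edit_a = img2img(
    prompt="a photo of a Scott Oriole bird perched on a fence.",
    image=source, strength=0.5, guidance_scale=7.5,
).images[0]

# (2) InstructPix2Pix: follow an edit instruction instead of a full description.
ip2p = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix").to(device)
edit_b = ip2p(
    prompt="put the bird by the waters edge",
    image=source, image_guidance_scale=1.5, guidance_scale=7.5,
).images[0]
```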
### Filtering Failed Edits Using Semantic and Visual Features
As depicted in Figure 3, there are three distinct failure cases for the text-conditioned augmentation: (1) total failure, where the edited image is vastly different from the original, (2) identity failure, where
the edited image is nearly identical to the original, and (3) class corruption failure, where the edit significantly changed task-specific features. While previous work [9] utilized CLIP-based filtering to remove low-quality images, it only removes instances of total failure in settings where CLIP does not have a good representation of the classes. As such, we also employ a confidence-based filtering technique which uses a classifier trained on the original training set to determine instances of identity and class corruption failures.
_Semantic Filtering._ We use CLIP to predict whether the generated image is related to the task or not. For example, in the CUB dataset, we provide the text prompt "a photo of a bird" as well as the filtering prompts "a photo of an object", "a photo of a scene", "a photo of geometric shapes", "a photo", and "an image". All images that are not classified as "a photo of a bird" are removed.
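A minimal sketch of this semantic filter is given below, using the CUB prompts quoted above and a standard CLIP checkpoint; keeping an image only when the task prompt is the top-scoring one is our reading of the filter, not the paper's exact code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
prompts = ["a photo of a bird", "a photo of an object", "a photo of a scene",
           "a photo of geometric shapes", "a photo", "an image"]

def passes_semantic_filter(image: Image.Image) -> bool:
    inputs = clip_proc(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image[0]   # similarity of the image to each prompt
    return int(logits.argmax()) == 0                  # keep only if "a photo of a bird" wins
```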
_Confidence-based Filtering._ We take inspiration from previous work in identifying mislabeled examples using model confidence [25]. After training a model \(f\) on the original dataset, we calculate a confidence threshold \(t_{y}\) for each class \(y\) by averaging the softmax score of the correct label for each image in the training set. Specifically, given edited image \(x^{\prime}\) with label \(y\) and prediction \(\hat{y}\), it is filtered out if \(\text{confidence}(\hat{f}(x^{\prime}),\hat{y})\geq t_{\hat{y}}\). In the case that the image was correctly predicted (\(y=\hat{y}\)), since the model is already confident in its prediction, the edited image is likely to contain roughly the same information as the original, and thus should be filtered out. In cases where \(y\neq\hat{y}\), high confidence suggests that the edit has corrupted the class to a point that it more highly resembles another class.
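The confidence-based filter can be sketched as follows; tensor shapes, the dataloader, and the classifier interface are assumptions, while the thresholding rule follows the definition above.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_thresholds(model, train_loader, num_classes, device="cuda"):
    """t_y = mean softmax score of the correct label over the original training set."""
    sums = torch.zeros(num_classes, device=device)
    counts = torch.zeros(num_classes, device=device)
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        probs = F.softmax(model(x), dim=1)
        correct_prob = probs[torch.arange(len(y), device=device), y]
        sums.index_add_(0, y, correct_prob)
        counts.index_add_(0, y, torch.ones_like(correct_prob))
    return sums / counts.clamp(min=1)

@torch.no_grad()
def keep_edit(model, x_edit, thresholds, device="cuda"):
    """Discard an edit when the classifier's confidence in its prediction meets t_{y_hat}."""
    probs = F.softmax(model(x_edit.unsqueeze(0).to(device)), dim=1)[0]
    conf, pred = probs.max(dim=0)
    # High confidence => near-identical edit (pred == y) or class corruption (pred != y).
    return bool(conf < thresholds[pred])
```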
In our experiments, we randomly sample a portion of the filtered augmentations to incorporate back into the training set, leading to a dataset size expansion between 20-100%. Given that the prompts and the image edits are grounded in the original training set, ALIA is able to preserve visual consistency and encompass a broader array of domains.
## 4 Experimental setup
### Implementation.
We fine-tune a ResNet50 for the CUB [41] and iWildCam [17] datasets and a ResNet18 for the Planes [22] dataset. We use the PyTorch pretrained models [28] on ImageNet with an Adam optimizer [16] and cosine learning rate scheduler. For each method, we do a hyperparameter sweep across learning rate and weight decay and choose the parameters with the highest validation performance. We train on 10 GeForce RTX 2080 Ti GPUs.
For all the diffusion-based editing methods, we use Stable Diffusion version 1.5 [32] from HuggingFace [40] with the default hyperparameters aside from the edit strength (how much to deviate from the original image) and the text guidance (how closely the generated image should align with the text prompt). For these parameters, we search over 5 different values each for edit strength and text guidance, visualizing the resulting generations for a random sample (10 images) across 4 random seeds (40 generated images in total). We pick the parameters which generate the most diverse images that both retain the task-relevant information and remain faithful to the edit instruction. More details on this selection process including visualizations as well as results on training on data generated with different edit strengths and text guidance are in the Appendix. Results are averaged over 3 random seeds and further details on the hyperparameter search space and final choice of hyperparameters are listed in the Appendix.
Figure 3: **Filtering.** Semantic filtering eliminates instances of total failure (left), but often misses instances of minimal edits (center) or edits which corrupt class information (right). Meanwhile, our confidence-based filtering mechanism is able to remove these failures by leveraging the prediction confidence of classifier trained on the original dataset applied to the image edits.
### Baselines.
The _Baseline_ model is trained on the original training dataset, without any data augmentation. We additionally compare adding in the generated data with our method to the original training set (+_ALIA_), adding in real data from the test distribution (+_Real_), adding in diffusion generated data from text alone (+_Txt2Img_), and two traditional augmentation baselines (+_CutMix_, +_RandAug_). In order to perform a fair comparison, we keep the number of images added per class to the training set consistent across methods.
For the +_ALIA_ and +_Txt2Img_ results, we generate twice as much data as the original training set for each language prompt to ensure enough data is available after filtering. For +_ALIA_, this entails generating 2 edits for every image. We then randomly sample from these datasets to match the class distribution of the held-out real data used for the +_Real_ baseline. For the +_Txt2Img_ baseline, we generate images with a hand-crafted prompt for each dataset, and apply our semantic filtering to the generated images to remove low-quality samples. We provide the amount of data added and the prompts used for the +_Txt2Img_ baseline in the subsequent sections.
Lastly, we compare to two data augmentation baselines taken from recent literature: (1) _CutMix_[42], which generates mixed samples by randomly cutting and pasting patches between training images to encourage the model to learn more localized and discriminative features and (2) _RandAugment_[5], an automated data augmentation technique that reduces the search space of the type and magnitude of augmentations to find the best image transformations for a given task. For these baselines, we use the implementations in the PyTorch Torchvision library [21].
## 5 Experiments
We evaluate on 3 specialized tasks: domain generalization via camera trap animal classification (iWildCam), fine-grained bird classification (CUB), and fine-grained aircraft classification with contextual bias (Airbus VS Boeing). Details of each dataset are listed in subsequent sections, with more statistics in the Appendix.
### Domain Generalization [iWildCam [17]]
The iWildCam dataset is a large-scale collection of images captured from camera traps placed in various locations around the world. We subsample the dataset to create a 7-way classification task (background, cattle, elephant, impala, zebra, giraffe, dik-dik), with 2 test locations that are not in the training or validation set. The training set has 6,000 images, with some classes having as few as 50 examples, and our +_Real_ data baseline contains roughly 2,000 images from locations not seen during training. We generate our domain descriptions from the background images from the test domain, but do not use these images in training. Since many images in the original training set contain watermarks or timestamps while the text-to-image generated data usually does not, we crop the top and bottom of each image in preprocessing.

Figure 4: **iWildCam Subset.** Left plot visualizes the original training images and the corresponding data generated from ALIA based on the generated prompts. Right plot shows macro F1-scores for adding in generated data from ALIA and baselines as in Section 4.2. ALIA significantly outperforms all the baselines including adding in real data from the test distribution, while +Txt2Img and +CutMix see smaller improvements and +RandAug results in lower performance than the baseline.
The prompts generated by ALIA are _"a camera trap photo of a { }..."_:
(1) _"in a grassy field with trees and bushes."_, (2) _"in a forest in the dark."_, (3) _"near a large body of water in the middle of a field."_, (4) _"walking on a dirt trail with wigs and branches."_.
We use _"a camera trap photo of a { } in the wild."_ as the prompt for the _+Tx2Img_ baseline.
As shown in Figure 4, ALIA not only outperforms all baselines, with a 15% improvement in F1-score over training on the original data, but even exceeds the performance of adding in the same amount of real data. Note that the improved performance over real data can be partly attributed to the fact that our prompts are derived from empty images from the test domain rather than the training set.
### Fine-grained Classification [CUB [41]]
CUB is a fine-grained bird classification dataset comprised of photos taken from Flickr. We remove 5 of the 30 images from each of the 200 classes from the training set for the _+Real_ comparison.
The prompts generated by ALIA are _"a photo of a { } bird."_:
(1) _"interacting with flowers."_, (2) _"standing by the waters edge."_, (3) _"perched on a fence."_, (4) _"standing on a rock."_, (5) _"perched on a branch."_, (6) _"flying near a tree, sky as the backdrop."_, (7) _"perched on a birdfeeder."_
We use _"an iNaturalist photo of a { } bird in nature."_ as the prompt for the _+Tx2Img_ baseline.
As shown in Figure 5, ALIA outperforms all the baselines aside from adding in real data, while adding text-to-image generated data results in similar performance to the baseline. Significantly, these results show that ALIA provides performance improvements greater than existing data augmentation techniques even in conditions devoid of domain shifts, thereby broadening its utility.
### Contextual Bias [Airbus VS Boeing [22]]
In order to test our method on a setting of contextual bias for fine-grained classification, we create a custom split of the FGVC-Aircraft Benchmark [22] containing 2 visually similar classes: Boeing-767 and Airbus-322. We then manually label each image as 'sky', 'grass', or 'road' based on the background and filter out ambiguous examples. To construct a biased split, we train on 400 samples where Airbus airplanes are only seen on road backgrounds and Boeing airplanes are only seen on grassy backgrounds, with both appearing in sky backgrounds. We evaluate on a test set where each class appears on all backgrounds. Similarly, our _+Real_ baseline adds 400 samples with a background distribution similar to the test set. Exact split breakdowns are in the Appendix.
Interestingly, we found that the edits generated by the Img2Img editing method were almost exclusively total failures (visualizations in the Appendix). Thus, we use InstructPix2Pix, prompting the LLM with the prefix "a photo of an airplane" and editing the final prompts to fit the instructional format of the editing method.

Figure 5: **Bird Classification (CUB).** Left plot visualizes the original training images and the corresponding data generated from ALIA based on the generated prompts. Right plot shows class-balanced accuracy for ALIA and baselines. ALIA outperforms all the baselines aside from adding in real data, while adding text-to-image generated data results in similar performance to the baseline.
The prompts generated by ALIA are _"a photo of a [ ] airplane..."_:
(1) _"on the airport tarmac, surrounded by buildings and other infrastructure."_, (2) _"parked on the rummy, grass and trees in the backdrop."_, (3) _"parked on a runway, with a vast desert expanding in the background."_, (4) _"with red and white colors, landing gear down, against a backdrop of a bustling cityscape."_, (5) _"in mid-flight, landing gear deployed against a clear sky."_
We use _"a photo of a [ ] airplane."_ as the prompt for the _+Tx2Img_ baseline.
The prompts generated by ALIA are not only capable of identifying the backgrounds used to create this biased split, but as illustrated in Figure 7, our confidence-based filtering technique can effectively remove many instances of the overrepresented class-domain pairings (Airbus on road and Boeing on grass) from the augmented dataset. These instances are well-represented in the training data, and consequently, the classifier can predict them with high confidence. This results in a higher proportion of the edited data consisting of the underrepresented classes (Airbus on grass and Boeing on road). As a result of these two factors, ALIA outperforms all other augmentation baselines on a test set where each class is seen in the sky, on the grass, and on the road (Figure 6). We believe that these results point to a promising future direction in how to modify ALIA to identify and amplify specific class-domain pairings which are underrepresented in the training set.
Table 1: **Effect of ALIA Generated Prompts and Filtering.**

| Dataset | Metric | User Prompt | ALIA Prompts | ALIA Prompts + Filtering |
| --- | --- | --- | --- | --- |
| iWildCam | Macro F1-score | 68.87\(\pm\)1.84 | 70.65\(\pm\)1.50 | 72.34\(\pm\)1.00 |
| CUB | Balanced Accuracy | 71.02\(\pm\)0.47 | 71.25\(\pm\)0.86 | 72.70\(\pm\)0.10 |
| Planes | Balanced Accuracy | 62.453\(\pm\)1.03 | 67.03\(\pm\)0.65 | 68.84\(\pm\)0.89 |
Figure 6: **Airbus VS Boeing Classification. Left plot visualizes the original training images and the corresponding data generated from ALIA based on the generated prompts. Right plot shows class-balanced accuracy for ALIA and baselines. ALIA outperforms all the baselines aside from adding in real data.**
Figure 7: **Examples of filtered edits of AirBus VS Boeing with confidence-based filtering. Our confidence-based filtering effectively prunes overrepresented class-domain pairings (e.g., Airbus on road, Boeing on grass) and boosts the prevalence of underrepresented ones (e.g., Airbus on grass, Boeing on road), thereby enhancing the diversity and balance in the augmented data.**
### Ablations
**Effect of Prompt Quality and Filtering.** We ablate the quality of ALIA generated prompts by comparing the edit prompts generated by ALIA against those provided by users. For the user-supplied prompts, we draw on the engineered prompts that were used for the _+Txt2Img_ baseline. As shown in Table 1, descriptions generated by ALIA outperform user-provided prompts, especially in the contextual bias setting, indicating that our generated prompts are able to accurately describe key domain-specific features and variations, resulting in more effective data augmentation for mitigating biases and improving model generalization. Moreover, Table 1 also demonstrates how our semantic and confidence-based filtering further improves accuracy.
**Choice of Image Editing Method.** As mentioned in Section 5.3, the suitability of image editing methods can vary based on the specific dataset. As such, we explore the impact of the editing method on performance by modifying the iWildCam subset from Section 5.1 with InstructPix2Pix. We employ the same prompts as in the Img2Img case, reformatted to the instructional syntax required by InstructPix2Pix (e.g. "a camera trap photo of a {} in a grassy field with trees and bushes" becomes "put the {} in a grassy field with trees and bushes"). Figure 8 presents examples of edits generated using InstructPix2Pix. Although InstructPix2Pix can produce edits with more significant color variations than Img2Img, it frequently applies an unrealistic airbrush-like effect or erases most of the information present in the original image. This outcome can be attributed to the fact that InstructPix2Pix is fine-tuned on a dataset of paired images and edits, which primarily consist of stylistic changes to artistic images, making the iWildCam images fall outside its domain. When comparing the performance of InstructPix2Pix, we find that it achieves a macro F1-score of \(67.11\pm 1.96\), compared to \(73.34\pm 1.0\) obtained using Img2Img edits. These results indicate that the choice of image editing method has significant impact on the final performance of ALIA, and thus it is crucial to profile the quality of image edits from various techniques on one's specific dataset.
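The prompt rewrite mentioned above amounts to a simple string substitution; a minimal, hypothetical helper is sketched below.

```python
# Small helper illustrating the rewrite described above: descriptive Img2Img
# prompts become the instruction-style prompts expected by InstructPix2Pix.
# The fixed prefix is an assumption made for this example.
def to_instruction(prompt, prefix="a camera trap photo of a {} "):
    assert prompt.startswith(prefix)
    return "put the {} " + prompt[len(prefix):]

print(to_instruction("a camera trap photo of a {} in a grassy field with trees and bushes"))
# -> "put the {} in a grassy field with trees and bushes"
```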
## 6 Limitations
While ALIA is able to generate impressive augmentations, there are limitations. As ALIA depends on large pretrained vision and language models to translate between task-specific image data and task-agnostic natural language descriptions of domains, performance of the method is bottlenecked by the quality of the captioning model, LLM, and image editing method (see Section 5.4). Furthermore, as we assume that the task-specific information in the training data cannot be easily generated via text alone, attempts to change aspects like the pose of a bird in CUB is likely to result in a failed edit. Finally, determining the optimal quantity of augmented data to reincorporate into the training set remains an unresolved question, and is an area we look forward to addressing in future research.
## 7 Conclusion
We present ALIA, a novel approach to data augmentation that leverages the high-level domain knowledge of large language models and text-conditioned image editing methods. By grounding both the domain descriptions and the augmented training data in the provided training set, our method has demonstrated impressive capabilities in several challenging settings, including domain adaptation, bias mitigation, and even scenarios without domain shift. As the capabilities of captioning, LLMs, and image editing methods grow, we expect the efficacy and scope of our approach to increase.
**Acknowledgements.** We thank Suzie Petryk for her invaluable feedback on the manuscript. This work was supported in part by the NSF CISE Expeditions Award (CCF-1730628), DARPA's SemaFor, PTG and/or LwLL programs, and BAIR's industrial alliance programs.
Figure 8: **InstructPix2Pix edits on iWildCam. Despite offering more color variation, the method often imparts an artificial airbrush-like effect or removes substantial original image details.** |
2310.12259 | Comparing first-principles density functionals plus corrections for the
lattice dynamics of YBa$_2$Cu$_3$O$_6$ | The enigmatic mechanism underlying unconventional high-temperature
superconductivity, especially the role of lattice dynamics, has remained a
subject of debate. Theoretical insights have long been hindered due to the lack
of an accurate first-principles description of the lattice dynamics of
cuprates. Recently, using the r2SCAN meta-GGA functional, we were able to
achieve accurate phonon spectra of an insulating cuprate YBa$_2$Cu$_3$O$_6$,
and discover significant magnetoelastic coupling in experimentally interesting
Cu-O bond stretching optical modes [Ning et al., Phys. Rev. B 107, 045126
(2023)]. We extend this work by comparing PBE and r2SCAN performances with
corrections from the on-site Hubbard U and the D4 van der Waals (vdW) methods,
aiming at further understanding on both the materials science side and the
density functional side. We demonstrate the importance of vdW and
self-interaction corrections for accurate first-principles YBa2 Cu3 O6 lattice
dynamics. Since r2SCAN by itself partially accounts for these effects, the good
performance of r2SCAN is now more fully explained. In addition, the
performances of the Tao-Mo series of meta-GGAs, which are constructed in a
different way from SCAN/r2SCAN, are also compared and discussed. | Jinliang Ning, Christopher Lane, Bernardo Barbiellini, Robert S. Markiewicz, Arun Bansil, Adrienn Ruzsinszky, John P. Perdew, Jianwei Sun | 2023-10-18T18:46:52Z | http://arxiv.org/abs/2310.12259v2 | Comparing first-principles density functionals plus corrections for the lattice dynamics of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\)
###### Abstract
The enigmatic mechanism underlying unconventional high-temperature superconductivity, especially the role of lattice dynamics, has remained a subject of debate. Theoretical insights have long been hindered due to the lack of an accurate first-principles description of the lattice dynamics of cuprates. Recently, using the r2SCAN meta-GGA functional, we were able to achieve accurate phonon spectra of an insulating cuprate YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\), and discover significant magnetoelastic coupling in experimentally interesting Cu-O bond stretching optical modes [Phys. Rev. B 107, 045126 (2023)]. We extend this work by comparing PBE and r2SCAN performances with corrections from the on-site Hubbard U and the D4 van der Waals (vdW) methods, aiming at further understanding on both the materials science side and the density functional side. We demonstrate the importance of vdW and self-interaction corrections for accurate first-principles YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) lattice dynamics. Since r2SCAN by itself partially accounts for these effects, the good performance of r2SCAN is now more fully explained. In addition, the performances of the Tao-Mo series of meta-GGAs, which are constructed in a different way from SCAN/r2SCAN, are also compared and discussed.
+
Footnote †: preprint: APS/123-QED
## I Introduction
Despite the decades of vigorous efforts devoted to the understanding of unconventional high-temperature superconductivity in the cuprates, a consensus on the underlying mechanism has yet to be reached [1; 2; 3; 4; 5; 6]. Early theoretical works [7; 8; 9] suggested that the conventional BCS theory (electron-phonon coupling mechanism) [10; 11; 12] could not account for such high critical temperatures in cuprate superconductors. However, a more intricate and intriguing picture has been suggested by recent experimental findings [13; 14; 15; 16; 17; 18; 19]. Strong anomalies in Cu-O bond-stretching modes are found near optimal doping; these are associated with charge inhomogeneity and lie beyond previous pictures and understanding [14]. Optical spectroscopy results indicate that the electron-phonon coupling contributes at least 10% of the bosonic pairing glue, although antiferromagnetic spin fluctuations are deemed the main mediators [20]. Moreover, the electronic interactions and the electron-phonon coupling are found to reinforce each other in a positive-feedback loop, which in turn enhances superconductivity, as suggested by recent ARPES observations [19].
Part of the reason why the role of phonons was dismissed by the theoretical community was that, previous density functional theory (DFT) calculations at the local density approximation (LDA) and generalized gradient approximation (GGA) levels failed to find strong electron-phonon coupling in related cuprates [8]. This issue is related to and compounded by the fact that these density functional approximations (DFAs) cannot stabilize the correct electronic and magnetic ground state in the parent phase, let alone its evolution with doping [21; 22]. While corrections such as the Hubbard U [23; 24; 25; 26] method can stabilize the antiferromagnetic (AFM) ground state [27], their structural predictions can be unexpected and uncontrollable [28]. Obviously, an _ab initio_ treatment is required to capture simultaneously the electronic, magnetic and lattice degrees of freedom.
Recently, utilizing the r2SCAN meta-GGA functional [29], some of us [30] were able to stabilize the AFM state of the pristine oxide YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\), and faithfully reproduce the experimental phonon dispersions. We further found significant magnetoelastic coupling in numerous high-energy Cu-O bond stretching optical branches, where the AFM results improve over the soft nonmagnetic phonon bands [30]. Moreover, these phonons correspond to breathing modes within the CuO\({}_{2}\) plane, suggesting a sensitive dependence on magnetoelastic coupling, which may facilitate a positive-feedback loop between electronic, magnetic, and lattice degrees of freedom. The r2SCAN functional is a modified and improved version of the strongly-constrained and appropriately-normed (SCAN) meta-GGA functional [31; 32], which satisfies 17 exact constraints and has demonstrated excellent performance across a diverse range of bonding environments. For cuprates,
SCAN accurately predicts the correct half-filled AFM ground state and the observed insulator-metal transition upon doping [21; 22]. Moreover, SCAN provides improved estimates of lattice constants, across correlated and transition metal compounds [21; 22; 31; 32; 33; 34; 35; 36; 37; 38]. Thus, SCAN is promising in accurate descriptions of lattice dynamics of cuprates and associated electron-phonon couplings, by virtue of its ability to capture the electronic and magnetic ground states. Unfortunately, efficiently obtaining reliable phonon spectra from SCAN calculations can be challenging due to numerical instability problems. By design, r2SCAN [29] solves the numerical instability problem and delivers accurate, transferable, and reliable lattice dynamics for various systems with different bonding characteristics [39]. We thus chose r2SCAN instead of SCAN for the study of lattice dynamics of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) and achieved remarkable success.
Despite this success, there still exists a notable residual softening trend in the Cu-O bond-stretching optical phonon branches from r2SCAN, especially in the full-breathing modes, for which we achieved further improvements when a Hubbard U correction is applied to r2SCAN [30]. Furthermore, it is not fully understood why r2SCAN/SCAN perform so well on cuprates. Although in general we can attribute it to the power of satisfying exact constraints by design in SCAN/r2SCAN [29; 31; 40; 41], more specific and physical knowledge will be helpful and highly required. Previous studies suggest that vdW corrections are important for first-principles prediction of lattice constants and cohesive energy of ionic solids and heavy metals [42]. In addition, combining vdW correction and self-interaction correction (SIC) is of critical importance for ground state electronic, structural and energetic properties of transition metal monoxides [36]. Therefore, it is expected that the vdW correction and its combination with SIC are crucial for obtaining accurate phonon dispersions of cuprates based on DFT.
To confirm this, in this work we extend our lattice dynamics study of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) by comparing the PBE and r2SCAN performances with corrections from the Hubbard U (applied to the \(d\) orbitals of Cu) and the D4 van der Waals (vdW) correction methods [43; 44; 45; 46], aiming at further understanding both the physics of cuprate lattice dynamics and density functionals. We demonstrate the importance of vdW interactions and SIC for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) lattice dynamics. Since r2SCAN by itself provides a partial account of these effects to a greater degree than PBE, the better performance of r2SCAN is more fully explained. In addition, the performances of the Tao-Mo family of meta-GGAs (TMs) [47; 48; 49] are also compared and discussed. The original Tao-Mo meta-GGA (TM) [47] is constructed based on a density matrix expansion of the exchange hole model, while revTM [48] and rregTM [49] are two successors with modifications. The revTM includes a correlation correction obtained from the full high-density second-order gradient expansion, while rregTM includes a regulation to the order-of-limit problem [50], paired with a one-electron self-interaction-free correlation energy functional. In comparison, SCAN and r2SCAN are constructed by satisfying exact-constraints on the exchange-correlation energy [29; 31; 40; 41]. Due to the inherently different way the Tao-Mo meta-GGAs are constructed compared to SCAN/r2SCAN, a comparison of their performances will be interesting and is expected to shed light on both the materials science side in cuprate lattice dynamics and the DFT side. Due to the absence of D4 parametrizations matched to TM functionals, the effects of vdW corrections to TMs are not considered in this work.
The synergy of long-range vdW corrections and +U SIC that we find here for cuprate lattice dynamics has also been found [36] for structural properties and structural phase transitions in MnO, FeO, CoO, and NiO.
## II Methods
First-principles calculations were performed using the pseudopotential projector-augmented wave method [51; 52] with the Vienna _ab initio_ simulation package (VASP) [53; 54], with an energy cutoff of 600 eV for the plane-wave basis set. Several exchange-correlation functionals were used, including PBE at the GGA level and r2SCAN [29; 39], TM [47], revTM [48], and rregTM [49] at the meta-GGA level. For the D4 vdW correction, we use the literature parametrizations fitted separately for PBE (\(s_{6}\)=1.0000, \(s_{8}\)=0.9595, \(a_{1}\)=0.3857, \(a_{2}\)=4.8069) [45] and for r2SCAN (\(s_{6}\)=1.0000, \(s_{8}\)=0.6019, \(a_{1}\)=0.5156, \(a_{2}\)=5.7734) [46]. A Gamma-centered 8\(\times\)8\(\times\)4 mesh for the **k**-space sampling is used for the relaxation of the unit cell with a G-type AFM structure, while a 2\(\times\)2\(\times\)2 mesh is used for the 2\(\times\)2\(\times\)1 supercells for force constant calculations of the phonon dispersion. All atomic sites in the unit cell along with the cell dimensions were relaxed using a conjugate gradient algorithm to minimize the energy, with an atomic force tolerance of 0.001 eV/Å and a total energy tolerance of 10\({}^{-7}\) eV. The harmonic force constants were extracted from VASP using the finite displacement method (0.015 Å) as implemented in the Phonopy code [55].
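To make the finite-displacement step concrete, the toy sketch below mimics its bookkeeping with NumPy: atoms are displaced by ±0.015 Å, force constants are assembled from central differences of the forces, and frequencies follow from the mass-weighted dynamical matrix. The harmonic force model merely stands in for the actual VASP force evaluations, so the masses, spring constant, and resulting numbers are purely illustrative.

```python
import numpy as np

def toy_forces(positions, k=40.0, r0=1.0):
    """Forces on a 1D two-atom 'molecule' with a single harmonic bond (toy model)."""
    f = -k * ((positions[1] - positions[0]) - r0)   # force on atom 1 along +x
    return np.array([-f, f])

masses = np.array([15.999, 63.546])   # e.g. O and Cu masses in amu (illustrative)
pos0 = np.array([0.0, 1.0])           # equilibrium positions (Angstrom)
u = 0.015                             # displacement amplitude used in the text

n = len(pos0)
fc = np.zeros((n, n))                 # force constants  Phi_ij = -dF_j/du_i
for i in range(n):
    for sign in (+1.0, -1.0):
        disp = pos0.copy()
        disp[i] += sign * u
        fc[:, i] += -sign * toy_forces(disp) / (2 * u)   # central difference

# Mass-weighted dynamical matrix (Gamma point) and its eigenfrequencies
dyn = fc / np.sqrt(np.outer(masses, masses))
w2 = np.linalg.eigvalsh(dyn)          # eigenvalues ~ omega^2 (model units)
print(np.sign(w2) * np.sqrt(np.abs(w2)))  # one acoustic (~0) and one stretching mode
```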
## III Results
Table 1 compares various properties calculated with the various methods to available experimental values for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\). The bare DFA results are compared with those with Hubbard U and/or vdW corrections. The bare DFAs considered here all overestimate the lattice constants and underestimate the magnetic moments, to different degrees. In particular, PBE overestimates the lattice constants the most and at the same time is not able to stabilize the correct G-type AFM ground state, while r2SCAN and the TMs at the meta-GGA level can stabilize the G-type AFM ground state and improve over PBE for structural properties. In addition, notable differences can be observed in the performances of these different
meta-GGAs. TM gives the closest structural properties but notably underestimates the magnetic moments. The revTM functional performs similarly to TM in structural properties but worse in magnetic moments. The rregTM functional improves in magnetic moments but worsens in structural properties. The r2SCAN functional in general predicts a good combination of structural and magnetic properties. What's more, TM and revTM predict different magnetic moments for the G-type AFM unit cell and the 2\(\times\)2\(\times\)1 supercells used for interatomic force constant calculations. This suggests that they could stabilize some spin-density-wave states rather than the simple G-type AFM ground state.
Applying a Hubbard U correction to these DFAs will reduce the delocalization error and increase the predicted magnetic moments. At the same time it also improves the structural properties, which will not be true for LSDA+U as we found previously since LSDA already underestimates the lattice constants [30]. In general, due to self-interaction reduction, the meta-GGAs require smaller U corrections than PBE [58]. The structural properties can be further improved with additional vdW corrections, as demonstrated by the PBE+U+D4 and r2SCAN+U+D4 results. Similarly to SCAN, r2SCAN captures intermediate range vdW interactions [32] while PBE captures little [59]. Therefore, more vdW correction is needed for PBE than r2SCAN, and correspondingly the structural improvements from the vdW correction are greater when applied to PBE than to r2SCAN. As a comparison and cross-check, almost the same structural and magnetic results are achieved with r2SCAN+rVV10\({}^{60}\) and r2SCAN+D4\({}^{46}\). This finding is consistent with previous reports that vdW corrections are important for structural and energetic properties of ionic solids [42], and that the combination of vdW and Hubbard U corrections are important for the ground state electronic, structural and energetic properties of transition metal monoxides [36]. It is also consistent with the underestimation of lattice constants in LSDA, since LSDA tends to overbind weak bonds. Generally speaking, the improvements of meta-GGAs over GGAs can be attributed to the power of satisfying more exact constraints [40; 41], but additionally the self-interaction reduction and better capture of vdW interactions at least contribute to a major part of their improvements. In addition, for PBE and r2SCAN, although applying larger Hubbard U alone can yield structural properties closer to experiment, it could over-localize \(d\) electrons and predict too large magnetic moments, highlighting the different physics of the two corrections and the necessity of appropriately applying both together. Since the phonon dispersion is determined by the inter-atomic forces, which depend sensitively on the ground state electronic structure and equilibrium atomic positions, the improvements for structural and electronic/magnetic properties from vdW and Hubbard U corrections bode well for more accurate predictions of the lattice dynamics.
The phonon dispersion results for the most challenging and also experimentally most interesting _ac_-plane and three-dimensional (3D) full-breathing branches, as shown in Fig. 2 and Fig. 3, confirm our expectations. Both the _ac_-plane and 3D full-breathing modes involve the Cu-O bond-stretching vibrations within the Cu-O plane, and simultaneously the vibration of the apical oxygen in the \(c\) direction. The difference is that the \(ab\)-plane Cu-O bond-stretching vibrations happen along both the \(a\) and \(b\) directions for the 3D full-breathing modes, while only along either the \(a\) or \(b\) direction for the \(ac\)-plane (or \(bc\)-plane) full-breathing modes, as shown in Fig. 1c.
Table 1: Calculated lattice constants, volume, Cu magnetic moment, Cu-O plane buckling angle \(\angle\)O-Cu-O, and Cu-O bond lengths for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) in the G-type AFM phase, along with the available experimental data. The Cu-O bond length for the two adjacent Cu-O planes in the \(ab\) plane, where Cu shows local magnetic moment \(m\), is denoted by \(d_{\rm Cu-O}\), while \(z_{\rm Cu-O_{ap}}\) (nonmagnetic Cu-apical O bond) and \(z^{\prime}_{\rm Cu-O_{ap}}\) (magnetic Cu-apical O bond) denote the Cu-O bond lengths along the \(c\) direction. For PBE and r2SCAN, the Hubbard U and vdW (D4 and rVV10) corrections are considered. The choice of Hubbard U values is guided by both experimental lattice constants and Cu magnetic moment.

| DFA | U | vdW | \(a\) (Å) | \(c\) (Å) | \(V\) (Å\({}^{3}\)) | \(m\) (\(\mu_{B}\)) | \(d_{\rm Cu-O}\) (Å) | \(\angle\)O-Cu-O (\({}^{\circ}\)) | \(z_{\rm Cu-O_{ap}}\) (Å) | \(z^{\prime}_{\rm Cu-O_{ap}}\) (Å) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PBE | 0 | – | 3.8819 | 12.1905 | 183.70 | 0.00 | 1.948 | 170.32 | 1.816 | 2.672 |
| | 6 | – | 3.8750 | 12.0379 | 180.76 | 0.59 | 1.950 | 167.06 | 1.811 | 2.551 |
| | 6 | D4 | 3.8525 | 11.8702 | 176.17 | 0.59 | 1.938 | 167.25 | 1.803 | 2.492 |
| r2SCAN | 0 | – | 3.8570 | 11.9417 | 177.65 | 0.45 | 1.937 | 169.28 | 1.805 | 2.554 |
| | 5 | – | 3.8562 | 11.8321 | 175.95 | 0.66 | 1.941 | 167.00 | 1.795 | 2.472 |
| | 4 | D4 | 3.8485 | 11.8032 | 174.82 | 0.62 | 1.936 | 167.41 | 1.794 | 2.469 |
| | 4 | rVV10 | 3.8482 | 11.7638 | 174.21 | 0.62 | 1.936 | 167.41 | 1.793 | 2.451 |
| TM | 0 | – | 3.8651 | 11.8224 | 176.61 | 0.15 (0.3) | 1.939 | 170.31 | 1.812 | 2.502 |
| | 5 | – | 3.8626 | 11.7186 | 174.84 | 0.62 | 1.943 | 167.56 | 1.803 | 2.421 |
| revTM | 0 | – | 3.8649 | 11.8952 | 177.68 | 0.09 (0.3) | 1.940 | 170.09 | 1.810 | 2.535 |
| | 5 | – | 3.8621 | 11.7919 | 175.89 | 0.61 | 1.943 | 167.52 | 1.802 | 2.451 |
| rregTM | 0 | – | 3.8982 | 11.9688 | 181.88 | 0.39 | 1.956 | 170.17 | 1.832 | 2.548 |
| | 5 | – | 3.8941 | 11.8648 | 179.92 | 0.63 | 1.960 | 166.74 | 1.824 | 2.445 |
| Expt. | | | 3.8544\({}^{a}\) | 11.8175\({}^{a}\) | 175.57\({}^{a}\) | 0.55\({}^{b}\) | 1.940 | 166.78 | 1.786 | 2.471 |

\({}^{a}\) Powder neutron diffraction at a temperature of 5 K [56].

\({}^{b}\) Single crystal neutron scattering [57].
Figure 2 compares the bare PBE and r2SCAN results and those from Hubbard U and D4 vdW corrections. Both bare PBE and r2SCAN results are too soft for the two challenging branches, and the PBE results are even softer, similar to the previous nonmagnetic results from r2SCAN [30]. This is consistent with the fact that bare PBE cannot stabilize the AFM ground state and overestimates lattice constants. With U and vdW corrections, notable improvements are achieved for both PBE and r2SCAN, and the improvement is more significant for PBE than for r2SCAN, although the r2SCAN+U+D4 results remain closest to experiment. This is consistent with the observations and reasons discussed above for the structural and magnetic properties. To summarize, due to self-interaction reduction and capture of more intermediate range vdW interactions, r2SCAN performs much better than PBE in magnetic, structural and lattice dynamics properties of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\), and for that reason the improvements from vdW and Hubbard U corrections are more significant for PBE than r2SCAN.
Figure 3 includes the bare TMs and Hubbard U corrected results, compared with r2SCAN+U+D4 and experimental results.
Figure 1: (a) \(\sqrt{2}\times\sqrt{2}\times 1\) crystal structure of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) where the related G-type AFM structure is highlighted by coloring the corner-sharing Cu-O pyramids blue(orange) for spin up(down). (b) A schematic of the nonmagnetic (black dashed line) and G-type AFM (blue dashed line) Brillouin zones and high-symmetry **k**-points. (c) Schematic of the typical \(ac\)-plane and 3D full-breathing modes.
Figure 3: Same as Fig. 2, but calculated results are from TM, TM+U (5 eV), revTM, revTM+U (5 eV), rregTM, rregTM+U (5 eV), and r2SCAN+D4+U (4 eV) methods. Due to the complicated band crossing, the bare TM results are not shown for the \(ac\)-plane full-breathing branch.
Figure 2: Comparison of the \(ac\)-plane full-breathing branch (highest branch of \(\Delta 1\)) and the 3D full-breathing branch (highest branch of \(\Sigma 1\)) for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\), calculated from PBE, PBE+U (6 eV), PBE+D4+U (6 eV), r2SCAN, r2SCAN+U (5 eV) and r2SCAN+D4+U (4 eV) methods, with the experimental data (open circles). The Brillouin zone and high-symmetry **k**-points are shown in Fig. 1. The phonon results are obtained with supercell interatomic forces calculated with the same DFA (+U+D4) methods as for geometry relaxations, as detailed in Table 1.
All the bare TM results are too soft compared to those from r2SCAN, consistent with the fact that r2SCAN gives the best bare DFA predictions in basic magnetic and structural properties which is critical for accurate lattice dynamics predictions. With Hubbard U correction, the improvement is significant and the results are close to but still softer than the r2SCAN+U+D4 results. Therefore r2SCAN generally has better performance than TMs for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) in structural, magnetic and lattice dynamics properties. This could be attributed to the more nonlocal nature of r2SCAN compared to TMs, since nonlocality is important for descriptions of semiconductors and insulators. Although both are at the same meta-GGA level, r2SCAN is more \(\alpha\)-dependent and thus displays more nonlocality, while the TMs are more density gradient-dependent and less nonlocal.
## IV Conclusions
In summary, we have extended our previous work to a first-principles comparative study of the most challenging and experimentally important full-breathing modes of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\). We achieve further understanding of both the lattice dynamics side and the density functional side. By applying both Hubbard U and the D4 vdW corrections to PBE and r2SCAN, notable improvements are obtained for structural, electronic, magnetic, and phonon dispersion predictions of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\). The improvements from the combined corrections are more significant for PBE than for r2SCAN. With the improvements, PBE+U+D4 gives much better full-breathing phonon frequencies, closer but still softer compared to those from r2SCAN+U+D4 and experimental observations. Considering the general self-interaction reduction and capturing more intermediate range vdW inherent in r2SCAN over PBE, we demonstrate the importance of vdW interactions and SIC in accurate YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) lattice dynamics from first-principles, which in turn contributes to the major reason for the superior overall performance of r2SCAN over PBE.
In addition, for the family of Tao-Mo meta-GGAs, all the bare DFA results are too soft compared to those from r2SCAN. With similar Hubbard U corrections, improvements are notable but still not enough to be as good as r2SCAN with corrections. Therefore r2SCAN generally has better performance than TMs for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6}\) in structural, magnetic and lattice dynamics properties, which we attribute to the more nonlocal nature of r2SCAN compared to TMs. Nevertheless, the TMs could perform better [61] for doped or gapless systems such as YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7}\)[21; 62; 62], as implied by their good performances in surface, vacancy, and magnetic properties for metals [48; 63], which could be further studied in the future.
Note that, even with the best results we can achieve from r2SCAN+U+D4, there still exists noticeable softening for the tested full-breathing branches, especially for the peak at \(\mathbf{k}\sim(0.2,0,0)\) along the experimental \(ac\)-plane full-breathing branch. These residual discrepancies could imply extra physics or effects we have not included yet. Recent research efforts have renewed interest in the role of electron-phonon coupling in the mechanism of high-temperature superconductivity in cuprates [64]. The findings in the current work provide insights for future first-principles investigations on cuprates, including phonon anomalies [7; 8; 9], charge inhomogeneity, cavity-phonon-magnon quasiparticle interactions [65], and phase competition, which in turn contribute to a better understanding of cuprate high temperature superconducting materials.
###### Acknowledgements.
J.N. and J.S. acknowledge the support of the U.S. Office of Naval Research (ONR) Grant No. N00014-22-1-2673, with which they designed the project. J.N., A.R. and J.P.P acknowledge support from Tulane University's startup funds (computations). J.P.P. acknowledges support from the National Science Foundation under Grant. No. DMR-1939528. Computational work done at Tulane University was supported by the Cypress Computational Cluster at Tulane and the National Energy Research Scientific Computing Center. The work at Northeastern University was supported by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences Grant No. DE-SC0022216 (modeling complex states in materials). The work at Los Alamos National Laboratory was supported by the U.S. DOE NNSA under Contract No. 89233218CNA000001 and by the Center for Integrated Nanotechnologies, a DOE BES user facility, in partnership with the LANL Institutional Computing Program for computational resources. Additional support was provided by DOE Office of Basic Energy Sciences Program E3B5. B.B. was supported by the Ministry of Education and Culture (Finland) and by the LUT University INERCOM platform.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2307.00292 | Dumbbell dynamics: a didactical approach | In this paper we propose a simplified model to describe the dissipative
effects of tides. We assume a spherical Earth with a dissipative coupling with
a mechanical dumbbell. The latter has a mass much smaller than the Earth's, and
it models the presence of the tidal bulges. Using properly the scale analysis,
we will show that some of the consequences of tidal dissipation are the
circularization and the enlargement of orbit of the Moon and the slowing down
of the Earth's rotation. We will also see that tidal dissipation plays a
fundamental role for the establishment of a regime of spin-orbit resonance in
the celestial systems. The mathematical tools used make our treatment
appropriate for senior high school students or college students. | Benedetto Scoppola, Matteo Veglianti | 2023-07-01T10:19:17Z | http://arxiv.org/abs/2307.00292v1 | # Dumbbell dynamics: a Didactical approach
###### Abstract.
In this paper we propose a simplified model to describe the dissipative effects of tides. We assume a spherical Earth with a dissipative coupling with a mechanical dumbbell. The latter has a mass much smaller than the Earth's, and it models the presence of the tidal bulges. Using properly the scale analysis, we will show that some of the consequences of tidal dissipation are the circularization and the enlargement of orbit of the Moon and the slowing down of the Earth's rotation. We will also see that tidal dissipation plays a fundamental role for the establishment of a regime of spin-orbit resonance in the celestial systems. The mathematical tools used make our treatment appropriate for senior high school students or college students.
## 1. Introduction
All textbooks in introductory astronomy and many in physics and mechanics mention the existence of oceanic tides as an interesting manifestation of universal gravitation: indeed many teachers are interested in this topic. As argued in [1], the most important aspects of the origin and properties of tides are often treated inaccurately or even erroneously. Much of the confusion over generating tides is related to the roles of the orbital motion of the Moon and earth about their common center of mass and of the Earth's axial rotation. In discussing the physics behind this phenomenon, authors usually explain (more or less successfully) why two tidal swells appear on the opposite sides of the globe. However, it is difficult to find a plausible explanation of the physical mechanism responsible for the phase shift between the zenith of the moon and the moment of high tide, which at some places approaches \(90^{\circ}\). Misunderstandings also occur in discussions about the role of tidal friction in the retardation of axial rotations and in the evolution of orbital motions of the gravitationally coupled celestial bodies.
While the conservative aspects of the tides are masterfully treated using elementary tools in [1], [2] and [3], the dissipative ones are only qualitatively described. The scientific, non-pedagogical, works on tidal dissipation are divided into two large areas according to the desired target: rheological aspects (see [4] and [5]) or dynamic aspects (see [6] and [9]). Our aim is therefore to propose a way to quantitatively deal with the dissipative aspects of the tides and their consequences using high school mathematics. To this end, we will use the "dumbbell model" developed in [7], that consists in describing the planet in terms of a point \(P\) of mass \(M-\mu\) and a mechanical dumbbell centered in \(P\), i.e., a system of two points, each of which has mass \(\mu/2\), constrained to be at fixed mutual distance \(2r\), having \(P\) as center of mass.
The idea of a dumbbell model is not original, in fact is developed in [9] and in many works (see, for instance [8]) where however it is used for other purposes.
This model is useful to compute, using an elementary force's approach, the torque acting on the Earth's ocean bulges due to the Moon and the torque acting on the Moon due to the Earth's ocean bulges. We perform the detailed computation in section 2. In section 3 we present a way to describe the evolution of the system imagining that the variation of the parameters does not occur in a continuous way, but rather discretely. This allows us to avoid a treatment through differential equations and replace them with finite difference equations.
In this way, it will be possible to show how the tidal dissipation is responsible for the circularization and the enlargement of the lunar orbit and the slowing down of the Earth's rotation. We will see that the first two events occur on very different time scales: the circularization of the lunar orbit is much faster than its enlargement. Thus the orbit becomes circular in shorter times than those of enlargement, and this is what we see in many planet-satellite systems of our galaxy, particularly in the Earth-Moon system. Finally, we will also see that tidal dissipation plays a fundamental role for the establishment of a regime of spin-orbit resonance in celestial systems.
The mathematical tools used make our discussion appropriate for senior high school students or college students. We believe that an appropriate and non-sterile use of mathematics is useful to understand its functionality and to make children passionate about studying this discipline. Furthermore, this subject is very suitable for teaching and learning the scale analysis, that is a very powerful tool to simplify complex equations by neglecting the suitable small terms.
## 2. Tidal torque
In this section we want to show how the dumbbell model is useful to compute the torque acting on the Earth's ocean bulges due to the Moon. The ocean bulges are modeled by the aforementioned dumbbell: a pair of massive points, each of mass \(\mu/2\), placed at the ends of a segment of length \(2r\), which we can assume, for simplicity, to be equal to the diameter \(2R\) of the Earth.1
In figure 1 we imagine the Earth as a sphere with radius \(R\) centered at the origin \(O\) of the reference frame and the Moon as a massive point, indicated with \(S\), with mass \(m\), at a distance \(a\) from the center of the Earth. Let the \(x\)-axis be the line joining \(O\) with \(S\) and the \(y\)-axis perpendicular to it. Moreover, for simplicity, we consider the rotation of the Earth perpendicular to the Moon's orbital plane.
On the Earth's surface there are the two massive points \(C_{1}\) and \(C_{2}\), which represent the centers of gravity of the two ocean bulges. Finally, in general, we imagine that the dumbbell is inclined by an angle \(\varepsilon\) with respect to the \(x\)-axis.
Let \(\vec{F_{1}}\) and \(\vec{F_{2}}\) be the attractive forces acting on \(C_{1}\) and \(C_{2}\) due to the Moon and let \(\vec{F^{\prime}_{1}}\) and \(\vec{F^{\prime}_{2}}\) the attractive forces acting on \(S\) due to \(C_{1}\) and \(C_{2}\) respectively. Clearly \(\vec{F^{\prime}_{1}}=-\vec{F_{1}}\) and \(\vec{F^{\prime}_{2}}=-\vec{F}_{2}\).
Moreover, on \(S\) also acts the attractive force \(\vec{F}\), lying on the \(x\)-axis, due to the rest of the Earth, deprived of the two masses of water.
Let \(G\) be the universal gravitational constant and let \(\beta_{1}\) and \(\beta_{2}\) be the two angles \(O\hat{S}C_{1}\) and \(O\hat{S}C_{2}\) respectively; we have:
\[F_{1}=-F^{\prime}_{1}=-\frac{Gm\frac{\mu}{2}}{SC_{1}^{2}}(-\hat{x}\cos\beta_{ 1}+\hat{y}\sin\beta_{1}), \tag{1}\]
\[F_{2}=-F^{\prime}_{2}=-\frac{Gm\frac{\mu}{2}}{SC_{2}^{2}}(-\hat{x}\cos\beta_{ 2}-\hat{y}\sin\beta_{2}), \tag{2}\]
\[F=-\frac{Gm(M-\mu)}{a^{2}}\hat{x}. \tag{3}\]
Moreover, from the geometry of the system we have that \(\frac{r}{a}\) is a dimensionless small parameter, so we can expand the forces up to the second order in \(\frac{r}{a}\).
To this end, we need the following geometric relation:
\[\begin{array}{l}SC_{1}^{2}=(a-r\cos\varepsilon)^{2}+(r\sin\varepsilon)^{2}=a^{ 2}\biggl{(}1-2\frac{r}{a}\cos\varepsilon+\frac{r^{2}}{a^{2}}\biggr{)}\\ SC_{2}^{2}=(a+r\cos\varepsilon)^{2}+(r\sin\varepsilon)^{2}=a^{2}\biggl{(}1+2 \frac{r}{a}\cos\varepsilon+\frac{r^{2}}{a^{2}}\biggr{)}\\ \beta_{1}=\frac{r\sin\varepsilon}{a-r\cos\varepsilon}=\frac{r}{a}\sin \varepsilon\biggl{(}1+\frac{r}{a}\cos\varepsilon\biggr{)}\\ \beta_{2}=\frac{r\sin\varepsilon}{a+r\cos\varepsilon}=\frac{r}{a}\sin \varepsilon\biggl{(}1-\frac{r}{a}\cos\varepsilon\biggr{)}\end{array} \tag{4}\]
From (1) we have:
\[\begin{array}{l}F_{1}=-F_{1}^{\prime}=-\frac{Gm\frac{\mu}{2}}{a^{2}}\biggl{(} 1+2\frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}+4\frac{r^{2}}{a^{2}}\cos^{2 }\varepsilon\biggr{)}\biggl{[}-\hat{x}\biggl{(}1-\frac{r^{2}}{2a^{2}}\sin^{2} \varepsilon\biggr{)}+\hat{y}\biggl{(}\frac{r}{a}\sin\varepsilon\biggl{(}1+ \frac{r}{a}\cos\varepsilon\biggr{)}\biggr{)}\biggr{]}=\\ \qquad\qquad=-\frac{Gm\frac{\mu}{2}}{a^{2}}\biggl{[}-\hat{x}\biggl{(}1+2 \frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}\biggl{(}\frac{3}{2}-\frac{9}{2} \cos^{2}\varepsilon\biggr{)}\biggr{)}+\hat{y}\biggl{(}\frac{r}{a}\sin \varepsilon\biggl{(}1+3\frac{r}{a}\cos\varepsilon\biggr{)}\biggr{)}\biggr{]}. \end{array} \tag{5}\]
Similarly, from (2) we have:
\[F_{2}=-F_{2}^{\prime}=-\frac{Gm\frac{\mu}{2}}{a^{2}}\biggl{[}-\hat{x}\biggl{(} 1-2\frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}\biggl{(}\frac{3}{2}-\frac{9} {2}\cos^{2}\varepsilon\biggr{)}\biggr{)}-\hat{y}\biggl{(}\frac{r}{a}\sin \varepsilon\biggl{(}1-3\frac{r}{a}\cos\varepsilon\biggr{)}\biggr{)}\biggr{]}. \tag{6}\]
We are now ready to calculate the torque acting on the dumbbell due to the Moon and the torque acting on the Moon due to the dumbbell. We will show that the two torques are exactly opposite, as it is expected from the conservation of angular momentum for isolated systems.
For simplicity we impose that the Moon moves in a circular orbit around the point G, the center of gravity of the Earth-Moon system. This is equivalent to stating that the sum of the components along the \(x\)-axis of the forces acting on the Moon is the centripetal force.

Figure 1. Geometry of the system.

Let \(\omega\) be the angular velocity of the revolution of the Moon and \(r_{S}=SG\) the radius of the circular orbit around the point \(G\), so we have:
\[m\omega^{2}r_{S}=F_{x}+F^{\prime}_{1x}+F^{\prime}_{2x}=\]
\[=\frac{Gm(M-\mu)}{a^{2}}+\frac{Gm\frac{\mu}{2}}{a^{2}}\biggl{(}1+2\frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}\biggl{(}\frac{3}{2}-\frac{9}{2}\cos^{2}\varepsilon\biggr{)}\biggr{)}+\frac{Gm\frac{\mu}{2}}{a^{2}}\biggl{(}1-2\frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}\biggl{(}\frac{3}{2}-\frac{9}{2}\cos^{2}\varepsilon\biggr{)}\biggr{)}=\]
\[=\frac{GmM}{a^{2}}-\frac{Gm\mu}{a^{2}}\frac{3}{2}\frac{r^{2}}{a^{2}}(1-3\cos^ {2}\varepsilon)\]
Hence:
\[r_{S}=\frac{GM}{\omega^{2}a^{2}}\biggl{(}1+\frac{3}{2}\frac{\mu}{M}\frac{r^{2} }{a^{2}}(3\cos^{2}\varepsilon-1)\biggr{)}. \tag{7}\]
Notice that, in the case of two point masses \(m\) and \(M\) placed at a distance \(a\), it turns out that \(m\) makes a circular orbit around the common center of gravity with a radius \(r_{S}=\frac{GM}{\omega^{2}a^{2}}\).
Since in (7) both \(\frac{\mu}{M}\) and \(\frac{r^{2}}{a^{2}}\) are small quantities, the correction to the Moon's orbital radius due to the dumbbell is completely negligible.
We can now compute both the torque acting on the dumbbell due to the Moon and the torque acting on the Moon due to the dumbbell up to the smallest order in \(\frac{r}{a}\).
Let's start with the latter. The magnitude of the torque acting on the Moon is \(\Gamma_{M}=a|\vec{F_{M}}|\), where \(\vec{F_{M}}\) is the sum of the forces acting on the Moon. Thanks to (7), \(\vec{F_{M}}\) is parallel to the \(y\)-axis, and the magnitude of the torque is:
\[\Gamma_{M}=a(F^{\prime}_{1y}-F^{\prime}_{2y})=\frac{Gm\frac{\mu}{2}}{a^{2}} \biggl{[}\frac{r}{a}\sin\varepsilon\biggl{(}1+3\frac{r}{a}\cos\varepsilon \biggr{)}-\frac{r}{a}\sin\varepsilon\biggl{(}1-3\frac{r}{a}\cos\varepsilon \biggr{)}\biggr{]}=\]
\[=3\frac{r^{2}}{a^{3}}Gm\mu\sin\varepsilon\cos\varepsilon. \tag{8}\]
On the other hand, the magnitude of the torque acting on the dumbbell due to the Moon is:
\[\Gamma_{D}=r|\vec{F_{1}}|\sin(\varepsilon+\beta_{1})-r|\vec{F_{2}}|\sin( \varepsilon-\beta_{2})=\]
\[=\frac{rGm\frac{\mu}{2}}{a^{2}}\biggl{(}1+2\frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}+4\frac{r^{2}}{a^{2}}\cos^{2}\varepsilon\biggr{)}\sin(\varepsilon+\beta_{1})-\frac{rGm\frac{\mu}{2}}{a^{2}}\biggl{(}1-2\frac{r}{a}\cos\varepsilon-\frac{r^{2}}{a^{2}}+4\frac{r^{2}}{a^{2}}\cos^{2}\varepsilon\biggr{)}\sin(\varepsilon-\beta_{2})=\]
\[=\frac{rGm\frac{\mu}{2}}{a^{2}}\biggl{(}1+2\frac{r}{a}\cos\varepsilon\biggr{)} (\sin\varepsilon\cos\beta_{1}+\cos\varepsilon\sin\beta_{1})-\frac{rGm\frac{ \mu}{2}}{a^{2}}\biggl{(}1-2\frac{r}{a}\cos\varepsilon\biggr{)}(\sin\varepsilon \cos\beta_{2}-\cos\varepsilon\sin\beta_{2})=\]
\[=\frac{rGm\frac{\mu}{2}}{a^{2}}\biggl{(}1+2\frac{r}{a}\cos\varepsilon\biggr{)}(\sin\varepsilon+\frac{r}{a}\cos\varepsilon\sin\varepsilon)-\frac{rGm\frac{\mu}{2}}{a^{2}}\biggl{(}1-2\frac{r}{a}\cos\varepsilon\biggr{)}(\sin\varepsilon-\frac{r}{a}\cos\varepsilon\sin\varepsilon)=\]
\[=3\frac{r^{2}}{a^{3}}Gm\mu\sin\varepsilon\cos\varepsilon. \tag{9}\]
As previously outlined, up to the smallest order in \(\frac{r}{a}\) the two torques are equal (and obviously in the opposite direction). This implies that the angular momentum of the system is conserved, as we expected.
Moreover, \(\mu\) represents the mass of the ocean bulge, that is, the mass of water whose shape is that of an ellipsoid (of semi-axes \(R,R,R+h\)) from which a concentric sphere of radius \(R\) is subtracted, where \(h\) represents the tidal height of the ocean:
\[h=\frac{3}{2}\frac{m}{M}\biggl{(}\frac{R}{a}\biggr{)}^{3}R. \tag{10}\]
For a detailed computation of \(h\) based on elementary mathematical tools see [3]. So:
\[\mu=\rho_{w}\frac{4}{3}\pi R^{2}h=\rho_{w}\frac{4}{3}\pi R^{2}\frac{3}{2}\frac{m} {\rho_{E}\frac{4}{3}\pi R^{3}}\bigg{(}\frac{R}{a}\bigg{)}^{3}R=\frac{\rho_{w}}{ \rho_{E}}\frac{3}{2}\bigg{(}\frac{R}{a}\bigg{)}^{3}m. \tag{11}\]
Finally, remembering that, according to our assumption, \(r=R\) and using (11), the torque can be written as:
\[\Gamma=3\frac{R^{2}}{a^{3}}Gm\mu\sin\varepsilon\cos\varepsilon=\bigg{(}\frac{ 9}{2}\frac{\rho_{w}}{\rho_{E}}\bigg{)}G\frac{R^{5}}{a^{6}}m^{2}\sin(2 \varepsilon)=kG\frac{R^{5}}{a^{6}}m^{2}\sin(2\varepsilon), \tag{12}\]
where \(k=\frac{9}{2}\frac{\rho_{w}}{\rho_{E}}\) is a dimensionless constant.
The formula (12) is known in literature as "MacDonald formula for body-tide torques". We note that our dumbbell model allows us to derive this formula in a simple way starting from reasonable physical considerations.
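To get a feeling for the magnitudes involved, equations (10)-(12) can be evaluated with present-day Earth-Moon data. The short script below does this in SI units; the lag angle \(\varepsilon\) is not fixed by the model, so the \(3^{\circ}\) used here is only an illustrative assumption.

```python
import math

# Present-day Earth-Moon values (SI units)
G = 6.674e-11                  # gravitational constant
M, m = 5.972e24, 7.342e22      # Earth and Moon masses (kg)
R, a = 6.371e6, 3.844e8        # Earth radius and Earth-Moon distance (m)
rho_w, rho_E = 1.0e3, 5.51e3   # water and mean Earth densities (kg/m^3)
eps = math.radians(3.0)        # lag angle: illustrative assumption

h = 1.5 * (m / M) * (R / a) ** 3 * R                  # eq. (10)
mu = (rho_w / rho_E) * 1.5 * (R / a) ** 3 * m         # eq. (11)
k = 4.5 * rho_w / rho_E                               # dimensionless constant in (12)
Gamma = k * G * R ** 5 / a ** 6 * m ** 2 * math.sin(2 * eps)   # eq. (12)

print(f"h = {h:.2f} m, mu = {mu:.2e} kg, Gamma = {Gamma:.2e} N m")
```

With these values the script gives \(h\simeq 0.5\) m and a torque of order \(10^{17}\) N m for a lag of a few degrees, within an order of magnitude of commonly quoted estimates of the lunar tidal torque; the simplifying choice \(r=R\) tends to overestimate it somewhat.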
## 3. Evolution of the system
In this section we want to study the evolution of the system avoiding advanced mathematical tools. In order to do this, we imagine that the variation of the parameters is not continuous but discrete in time. This can be done because the parameters vary on very large time scales, and therefore at each revolution they vary by very small quantities. For this reason the difference between a discrete and a continuous evolution is irrelevant.
Using this approach, we start by proving the circularization of the orbit.
Since the results of the previous section hold in the case of a circular orbit, we can imagine that the real elliptical orbit of the Moon is the superposition of two virtual semicircular orbits centered on the Earth and tangent to the real trajectory at the perigee and at the apogee respectively, as shown in figure 2. In this way we can apply the results obtained in the previous section, but unfortunately we have introduced a discontinuity in the trajectory of the Moon, which is very difficult to digest. However, imagining that the evolution of the parameters occurs in a discrete way at each semi-revolution, the discontinuity in the virtual trajectory is irrelevant.
So: \(r_{a}\) represents the Earth-Moon distance at the apogee and \(r_{p}\) represents the Earth-Moon distance at the perigee. Clearly: \(r_{a}-r_{p}>0\).
Moreover:
\(a=\frac{1}{2}(r_{a}+r_{p})\) represents the semi-major axis of the orbit;
\(c=\frac{1}{2}(r_{a}-r_{p})\) represents the half focal distance (half the distance between the two foci);
\(e=\frac{c}{a}\) represents the eccentricity of the orbit.
To determine the evolution of the system, the torque plays a crucial role. As we calculated in the previous section, the torque acting on the Moon in the perigee is:
\[\Gamma_{M_{p}}=kG\frac{R^{5}}{r_{p}^{6}}m^{2}\sin(2\varepsilon). \tag{13}\]
At the same time, the torque acting on the dumbbell is:
\[\Gamma_{D_{p}}=-kG\frac{R^{5}}{r_{p}^{6}}m^{2}\sin(2\varepsilon). \tag{14}\]
Obviously the signs \(\pm\) are arbitrary: in any case the two torques have opposite signs.
The torques in the apogee (\(\Gamma_{M_{a}}\) and \(\Gamma_{D_{a}}\)) are similar: just replace \(r_{p}\) with \(r_{a}\).
\(\Gamma_{M}\) determines the variation of the orbital parameters (semi-major axis, focal distance, eccentricity);
\(\Gamma_{D}\) determines the variation of \(\Omega\), the sidereal angular velocity of the Earth.
To determine the evolution of the parameters, we compare their values during the \(n\)-th revolution \((a_{n};c_{n};e_{n};\Omega_{n})\) with their values during the \((n+1)\)-th revolution \((a_{n+1};c_{n+1};e_{n+1};\Omega_{n+1})\).
As we argued before, we imagine that the change of the parameters at the end of the \(n\)-th revolution consists of two contributions, one at the end of each virtual semi-circumference.
Moreover: \(r_{a,n}\) represents the Earth-Moon distance at the apogee during the \(n\)-th revolution;
\(r_{p,n}\) represents the Earth-Moon distance at the perigee during the \(n\)-th revolution;
\(a_{n}=\frac{1}{2}(r_{a,n}+r_{p,n})\) represents the semi-major axis during the \(n\)-th revolution;
\(c_{n}=\frac{1}{2}(r_{a,n}-r_{p,n})\) represents the half focal distance during the \(n\)-th revolution;
\(e_{n}=\frac{c_{n}}{a_{n}}\) represents the eccentricity during the \(n\)-th revolution.
The same parameters during the \((n+1)\)-th revolution are indicated with the same notation, replacing the subscript \(n\) with \((n+1)\).
Finally, the changes of the parameters between two successive revolutions are indicated with \(\Delta\):
\(\Delta r_{p}=r_{p,n+1}-r_{p,n}\); \(\Delta r_{a}=r_{a,n+1}-r_{a,n}\); \(\Delta a=a_{n+1}-a_{n}\); \(\Delta c=c_{n+1}-c_{n}\); \(\Delta e=e_{n+1}-e_{n}\).
We can start by considering the evolution of the lunar orbital parameters. From now on, we indicate generically with \(r_{i}\) the Earth-Moon distance in the position \(i\): \(i\) can be either \(p\) (the perigee) or \(a\) (the apogee).

Figure 2. The real trajectory of the Moon around the Earth is the ellipse in black; the point \(P\) represents the focus occupied by the Earth. The virtual trajectory is the dashed line in red, composed of the semi-circumference \(ABC\) centered in \(P\) and tangent to the ellipse at the perigee \(B\), and the semi-circumference \(DEF\) centered in \(P\) and tangent to the ellipse at the apogee \(E\).
Let \(\Gamma_{Mn}=kG\frac{R^{5}}{r_{i}^{6}}m^{2}\sin(2\varepsilon)\) be the torque acting on the Moon in the \(n\)-th revolution. Then the variation of the Moon's angular momentum \(L=m\omega r_{i}^{2}\) between the \(n\)-th and the \((n+1)\)-th revolution, \(\Delta L=L_{n+1}-L_{n}\), is:
\[\Delta L=\Gamma_{Mn}T=\Gamma_{Mn}\frac{2\pi}{\omega}. \tag{15}\]
From this equation we can derive the evolution of \(a\), the semi-major axis of the orbit. We remember that \(\omega\) depends on \(r_{i}\); indeed, from Kepler's third law, \(\omega^{2}r_{i}^{3}=GM\).2
Footnote 2: This result holds in the case of a circular orbit. Thanks to the assumptions made above, this is our case.
\[\omega=\frac{\sqrt{GM}}{r_{i}^{3/2}} \tag{16}\]
Hence, from (15) and (16), we have:
\[\Delta\left(m\sqrt{GMr_{i}}\right)=kG\frac{R^{5}}{r_{i}^{6}}m^{2}\sin(2 \varepsilon)\frac{2\pi r_{i}^{3/2}}{\sqrt{GM}}, \tag{17}\]
that we can rewrite as:
\[\Delta\left(\sqrt{r_{i}}\right)=2\pi k\frac{m}{M}\frac{R^{5}}{r_{i}^{9/2}} \sin(2\varepsilon). \tag{18}\]
We can rewrite the l.h.s. of the previous equation as:
\[\begin{split}&\Delta\left(\sqrt{r_{i}}\right)=\sqrt{r_{i,n+1}}- \sqrt{r_{i,n}}=\sqrt{r_{i,n}+\Delta r_{i}}-\sqrt{r_{i,n}}=\sqrt{r_{i,n}}\left( \sqrt{1+\frac{\Delta r_{i}}{r_{i,n}}}-1\right)\\ &\simeq\sqrt{r_{i,n}}\left(1+\frac{1}{2}\frac{\Delta r_{i}}{r_{i,n}}-1\right)=\frac{\Delta r_{i}}{2\sqrt{r_{i,n}}}=\frac{\Delta r_{i}}{2\sqrt {r_{i}}},\end{split} \tag{19}\]
where we have used the approximation: \(\sqrt{1+x}\simeq 1+\frac{1}{2}x\) (see appendix), with \(x=\frac{\Delta r_{i}}{r_{i,n}}<<1\).
Hence equation (18) becomes:
\[\Delta r_{i}=4\pi k\frac{m}{M}\frac{R^{5}}{r_{i}^{4}}\sin(2\varepsilon)=\frac {K}{r_{i}^{4}}, \tag{20}\]
with \(K=4\pi k\frac{m}{M}R^{5}\sin(2\varepsilon)>0\) a constant independent of \(r_{i}\).
Actually \(K\) depends on \(\varepsilon\), which could in turn depend on \(r_{i}\): indeed \(\varepsilon\) is the difference between the sidereal angular position of the Earth (with respect to a certain reference axis), \(\Omega t\), and the angular position of the Moon, \(\omega t\):
\[\varepsilon=\Omega t-\omega t=\Omega t(1-\frac{\omega}{\Omega}). \tag{21}\]
But if we suppose that \(\omega<<\Omega\) (this assumption is currently true for the Earth-Moon system, indeed \(\frac{\omega}{\Omega}\simeq\frac{1\,\mathrm{day}}{1\,\mathrm{month}}\simeq \frac{1}{30}\simeq 0.03\)), then \(\varepsilon\simeq\Omega t\) is independent of \(r_{i}\).
Let us now suppose, as we argued before, that the variation consists of two contributions: the variation of the perigee distance when the Moon is at perigee (\(\Delta r_{p}\)) and the variation of the apogee distance when the Moon is at apogee (\(\Delta r_{a}\)):
\[\begin{cases}\Delta r_{p}=\frac{K}{r_{p}^{4}}\\ \Delta r_{a}=\frac{K}{r_{a}^{4}}\end{cases} \tag{22}\]
But \(r_{a}>r_{p}\), then:
\[\Delta r_{p}>\Delta r_{a}. \tag{23}\]
From equation (23) we can derive two important results:
First, \(\Delta a=\frac{1}{2}(\Delta r_{a}+\Delta r_{p})>0\); this implies
\[a_{n+1}>a_{n}, \tag{24}\]
then the semi-major axis of the orbit increases.
Second, \(\Delta c=c_{n+1}-c_{n}=\frac{1}{2}\left(r_{a,n+1}-r_{p,n+1}\right)-\frac{1}{2} \left(r_{a,n}-r_{p,n}\right)=\frac{1}{2}\left(r_{a,n+1}-r_{a,n}\right)-\frac{1} {2}\left(r_{p,n+1}-r_{p,n}\right)=\frac{1}{2}(\Delta r_{a}-\Delta r_{p})<0\), this implies
\[c_{n+1}<c_{n}, \tag{25}\]
then the focal distance of the orbit decreases.
Finally, the two previous results imply that the eccentricity decreases and so the orbit becomes circular. Indeed:
\[\Delta e =e_{n+1}-e_{n}=\frac{c_{n+1}}{a_{n+1}}-\frac{c_{n}}{a_{n}}=\frac{ c_{n+1}a_{n}-c_{n}a_{n+1}}{a_{n+1}a_{n}}=\frac{c_{n+1}a_{n}-c_{n}a_{n}+c_{n}a_{n}-c_ {n}a_{n+1}}{a_{n+1}a_{n}}\] \[=\frac{a_{n}(c_{n+1}-c_{n})-c_{n}(a_{n+1}-a_{n})}{a_{n+1}a_{n}}= \frac{a_{n}\Delta c_{n}-c_{n}\Delta a_{n}}{a_{n+1}a_{n}}<0,\]
the last inequality follows from the fact that all the terms are positive, except for \(\Delta c\). This implies
\[e_{n+1}<e_{n}, \tag{26}\]
then the eccentricity of the orbit decreases.
Moreover, we can determine the rate of decrease of \(c_{n}\). Indeed:
\[\Delta c =\frac{1}{2}\Delta(r_{a}-r_{p})=\frac{1}{2}(\Delta r_{a}-\Delta r_{p})=\frac{1}{2}\left(\frac{K}{r_{a}^{4}}-\frac{K}{r_{p}^{4}}\right)=\frac{K(r_{p}^{4}-r_{a}^{4})}{2r_{a}^{4}r_{p}^{4}}\] \[=\frac{K(r_{p}^{2}+r_{a}^{2})(r_{p}+r_{a})(r_{p}-r_{a})}{2r_{a}^{4}r_{p}^{4}}=-c\,\frac{2Ka(r_{p}^{2}+r_{a}^{2})}{r_{a}^{4}r_{p}^{4}}\simeq-\frac{4K}{a^{5}}\,c=-\lambda c,\]
with \(\lambda=\frac{4K}{a^{5}}>0\).
Therefore \(c_{n}\) decreases by an amount directly proportional to \(c_{n}\) itself: this kind of decrease is called an "exponential decrease", where \(\lambda\) is the rate of decrease. Since \(\lambda\) is positive, \(c_{n}\) decreases until it becomes smaller than any prefixed positive quantity. When \(c_{n}\) approaches \(0\), then \(r_{a,n}\simeq r_{p,n}\) and hence, from (22), \(\Delta r_{a}\simeq\Delta r_{p}\). This implies \(\Delta c\simeq 0\). Thus, when the focal distance "becomes" zero, it no longer varies.
But \(c_{n}\simeq 0\) and \(\Delta c\simeq 0\) imply that \(e_{n}\simeq 0\) and \(\Delta e\simeq 0\). Thus the eccentricity of the orbit also decreases until it becomes smaller than any prefixed positive quantity and, "at the end", it no longer varies. So the orbit becomes circular.
On the other hand, the semi-major axis \(a\) increases indefinitely: even when the orbit becomes circular, \(a\) (that is its radius) continues to increase.
But the growth of \(a\) and the decrease of \(e\) occur on different time scales and in different ways: while the variation of \(e\) is exponential, i.e. very fast, that of \(a\) is polynomial.3
Footnote 3: Indeed,
\[\Delta a=\frac{1}{2}\Delta(r_{a}+r_{p})=\frac{1}{2}(\Delta r_{a}+\Delta r_{p}) =\frac{1}{2}\left(\frac{K}{r_{a}^{4}}+\frac{K}{r_{p}^{4}}\right)=\frac{K(r_{p} ^{4}+r_{a}^{4})}{2r_{a}^{4}r_{p}^{4}}\simeq\frac{K}{2a^{4}}.\]
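To make these different behaviours concrete, the recurrence (22) can be iterated numerically. The following minimal sketch (the value of \(K\) and the initial distances are arbitrary illustrative numbers, not the physical Earth-Moon values) shows the semi-major axis growing while the half focal distance and the eccentricity decay towards zero.

```
# Minimal numerical iteration of the recurrence (22):
#   Delta r_p = K / r_p^4,   Delta r_a = K / r_a^4
# K and the initial distances are illustrative values in arbitrary units.
K = 1.0e-3
r_p, r_a = 0.8, 1.2   # initial perigee and apogee distances

for n in range(20000):
    r_p += K / r_p**4
    r_a += K / r_a**4

a = 0.5 * (r_a + r_p)   # semi-major axis
c = 0.5 * (r_a - r_p)   # half focal distance
e = c / a               # eccentricity
print(f"a = {a:.4f}, c = {c:.2e}, e = {e:.2e}")
# a has grown from 1.0 to about 2.5, while c and e have shrunk by more than
# an order of magnitude: the orbit expands and circularizes, as derived above.
```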
Let us now study the evolution of \(\Omega\), the sidereal angular velocity of the Earth: its variation is due to \(\Gamma_{D}\), the torque acting on the dumbbell. Indeed the dumbbell is pulled back by the Moon and, thanks to the friction between it and the underlying planet, it slows down the rotation of the Earth. The angular momentum of the Earth is \(I_{E}\Omega\), where \(I_{E}\) is its moment of inertia. As we argued before, we suppose that it varies in each lunar revolution. Thus, at the end of the \(n\)-th revolution we have:
\[\Delta\left(I_{E}\Omega\right)=\Gamma_{D,n}T=\Gamma_{D,n}\frac{2\pi}{\omega}, \tag{27}\]
and hence:
\[\Delta\Omega=\frac{\Gamma_{D,n}}{I_{E}}\frac{2\pi}{\omega}=-2\pi k\sqrt{\frac{G}{M}}\frac{R^{5}}{r_{i}^{9/2}}\frac{m^{2}}{I_{E}}\sin(2\varepsilon)<0. \tag{28}\]
So, as we expected, \(\Omega\) decreases.
But if \(\Omega\) decreases, \(\varepsilon\) also decreases; indeed, as argued before (see equation (21)), \(\varepsilon\simeq\Omega t\), so:
\[\Delta\varepsilon=\varepsilon_{n+1}-\varepsilon_{n}=\Omega_{n+1}t-\Omega_{n} t=(\Omega_{n+1}-\Omega_{n})t=\Delta\Omega t<0, \tag{29}\]
this implies
\[\varepsilon_{n+1}<\varepsilon_{n}, \tag{30}\]
then \(\varepsilon\) decreases.
The stationary situation of this dynamics is the one in which \(\varepsilon=0\). Indeed, when \(\varepsilon\) approaches \(0\), \(\Delta\Omega\) becomes \(0\) (see equation (28)), and then \(\Omega\) no longer varies. But, from equation (21), when \(\varepsilon=0\) then \(\Omega-\omega=0\), which implies \(\Omega=\omega\).
So the stationary situation is the spin-orbit resonance regime: a dynamics in which the Earth completes one rotation over a period of time equal to one lunar revolution. In such a situation the dumbbell (which rotates with velocity \(\omega\)) moves on the Earth at the same speed as the underlying layer (which rotates with velocity \(\Omega\)), and thus the friction between the dumbbell and the underlying Earth ceases: in such a situation tides will no longer be observed on Earth and no more energy will be dissipated by this mechanism.
This type of dynamics is often observed in the oldest planet-satellite systems of the Solar System, indicating the fact that the equilibrium situation predicted by our model is indeed attractive.
## Appendix A Special approximation
In this section we want to show two ways to prove the special approximation we used in equation (19):
\[\sqrt{1+x}\simeq 1+\frac{1}{2}x, \tag{31}\]
valid if \(x<<1\).
The first way uses the linearization formula: given a function \(f(x)\), if at a point \(x_{0}\) the function \(f\) and its derivative \(f^{\prime}\) are both continuous, it is possible to approximate the function by its tangent line at \(x_{0}\):
\[f(x_{0}+\delta)=f(x_{0})+f^{\prime}(x_{0})\delta+R(\delta), \tag{32}\]
with \(R(\delta)=o(\delta)\), i.e. negligible with respect to \(\delta\).
So equation (31) follows from (32) with \(f(x)=\sqrt{1+x}\), \(x_{0}=0\) (so that \(f(x_{0})=1\) and \(f^{\prime}(x_{0})=\frac{1}{2}\)) and \(\delta=x<<1\).
A second way to prove (31) does not use derivatives, but only techniques of Cartesian geometry. The issue consists in determining the tangent line to the function \(y=\sqrt{1+x}\) at its point \(x_{0}=0\).
In the Cartesian plane, \(y=\sqrt{1+x}\) represents the equation of a semi-parabola. The tangent point is \((0,1)\), so the equation of the tangent line is \(y=mx+1\), where \(m\) can be determined by imposing the tangency condition.
We can rewrite the equation of our semi-parabola as \(y^{2}=1+x\) and we can substitute \(y\) with its expression \(mx+1\), obtaining:
\[m^{2}x^{2}+(2m-1)x=0. \tag{33}\]
Finally we have to impose the tangency condition (uniqueness of the solution): the discriminant of (33) must be null. This implies \(m=1/2\).
Then, the equation of tangent line is \(y=\frac{1}{2}x+1\).
So, the semi-parabola \(y=\sqrt{1+x}\) can be approximated near the point \(x_{0}=0\) with its tangent line \(y=\frac{1}{2}x+1\).
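As a quick numerical check of (31), the following small sketch (with arbitrarily chosen sample values of \(x\)) compares the exact square root with the tangent-line approximation:

```
import math

# Compare sqrt(1+x) with its tangent-line approximation 1 + x/2 for small x.
for x in (0.1, 0.01, 0.001):
    exact = math.sqrt(1 + x)
    approx = 1 + 0.5 * x
    print(f"x = {x}: sqrt(1+x) = {exact:.8f}, 1 + x/2 = {approx:.8f}, "
          f"difference = {exact - approx:.2e}")
# The difference shrinks roughly like x^2/8, so the approximation improves
# rapidly as x becomes small.
```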
The authors have no conflicts to disclose.
|
2308.00417 | The use of the invariant's properties in the primality test and prime
search | The purpose of this article is to delve into the properties of invariants.
The properties, explained in [2], reveal new ways to develop algorithms that
allow us to test the primality of a number. In this article, some of these are
shown, indicating the advantages and disadvantages of these new algorithms. The
information provided by these algorithms also gives additional information
regarding the factorization of a compound number. | Juan Hernandez-Toro | 2023-08-01T09:52:50Z | http://arxiv.org/abs/2308.00417v1 | # The use of the invariant's properties in the
###### Abstract
The purpose of this article is to delve into the properties of invariants. The properties, explained in [2], reveal new ways to develop algorithms that allow us to test the primality of a number. In this article, some of these are shown, indicating the advantages and disadvantages of these new algorithms. The information provided by these algorithms also gives additional information regarding the factorization of a compound number.
**keywords:**Prime Numbers, Compound Numbers, Primality test, Factorization.
## 1 Introduction
The concept of invariant can be used to explain the behavior of compound numbers. Using the invariant, it is possible to infer additional properties of compound numbers, such as non-trivial quadratic remainders and non-trivial triangular remainders. If one of these properties is found, then the number is automatically known not to be prime. Additionally, these remainders provide information regarding its factorization. In the conclusion of this article, some algorithms are shown to test primality using non-trivial invariants, non-trivial quadratic remainders, and non-trivial triangular remainders.
## 2 Some definitions regarding the quadratic number's remainder
### Definition of quadratic symmetric numbers
The numbers whose squares have equivalent remainders are called quadratic symmetric numbers. For \(p<m\), \(s<m\) and \(p\neq s\), \(p\) is the symmetric number of \(s\) regarding \(m\) if \(s^{2}\equiv p^{2}\pmod{m}\). This always happens if \(m-s=p\).
#### 2.1.1 Demonstration:
\[p^{2}=m^{2}-2\cdot m\cdot s+s^{2}\equiv s^{2}\pmod{m} \tag{1}\]
#### 2.1.2 Example:
For \(s=4\) and \(m=15\) the symmetric number of \(s\) is \(15-4=11\). As a result, \(4^{2}\equiv 11^{2}\equiv 1\pmod{15}\).
### Definition of numbers with trivial quadratic remainder
A number \(q\) with trivial quadratic remainder regarding \(m\) is defined by \(q^{2}\equiv t^{2}\pmod{m}\) with \(q=t\); in other words, \(q^{2}\) is its own remainder modulo \(m\).
#### 2.2.1 Conclusion:
All the numbers \(m>1\) have trivial quadratic remainder numbers. If \(q^{2}<m\), then \(q\) is a trivial quadratic remainder.
#### 2.2.2 Example:
The numbers with trivial quadratic remainder regarding 15 are (0,1,2,3).
### Definition of symmetric numbers with trivial quadratic remainder
If \(t\) is a number with trivial quadratic remainder regarding \(m\), then \(s=m-t\) is a symmetric number with trivial quadratic remainder.
#### 2.3.1 Conclusion:
There is the same quantity of symmetric numbers with trivial quadratic remainder as of numbers with trivial quadratic remainder.
#### 2.3.2 Example:
The symmetric numbers with trivial quadratic remainder regarding 15 are (15,14,13,12).
### Definition of numbers with non trivial quadratic remainder
A number \(q\) with non-trivial quadratic remainder regarding \(m\) is defined by \(q^{2}\equiv d^{2}\pmod{m}\), where \(d^{2}<m\), \(q\neq d\) and \(m-q\neq d\).
#### 2.4.1 Example:
The numbers with non-trivial quadratic remainder regarding 15 are (4,7,8,11).
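These classes can also be enumerated directly. The short sketch below (the function name is ours, chosen only for illustration) reproduces the lists given in the examples above for \(m=15\):

```
def quadratic_classes(m):
    """Classify numbers regarding m into trivial, symmetric-trivial and
    non-trivial quadratic remainders, following the definitions above."""
    trivial = [q for q in range(m) if q * q < m]
    symmetric = [m - t for t in trivial]
    non_trivial = [q for q in range(m)
                   if any(q * q % m == d * d and d * d < m
                          and q != d and m - q != d
                          for d in range(m))]
    return trivial, symmetric, non_trivial

print(quadratic_classes(15))
# -> ([0, 1, 2, 3], [15, 14, 13, 12], [4, 7, 8, 11])
```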
## 3 Some definitions regarding the triangular number's remainders
### Definition of symmetric triangular numbers
The odd numbers whose triangular numbers have equivalent remainders are called triangular symmetric numbers. For \(p<m\), \(s<m\), \(m\) an odd number and \(p\neq s\), \(p\) is the symmetric triangular number of \(s\) regarding \(m\) if \(T(s)\equiv T(p)\pmod{m}\).
#### 3.1.1 Demonstration:
This always happens if \(m-s-1=p\).1
Footnote 1: For even numbers the solution is slightly different
\[\begin{split}\frac{p^{2}+p}{2}&=\frac{m^{2}-2ms-2m+ s^{2}+2s+1+m-s-1}{2}\\ &\equiv\frac{s^{2}+2s+1-s-1}{2}=\frac{s^{2}+s}{2}\pmod{m}\end{split} \tag{2}\]
#### 3.1.2 Example:
For \(s=6\) and \(m=15\) the symmetric number of \(s\) is \(15-6-1=8\). As a result, \(T(6)=21\equiv T(8)=36\equiv 6\pmod{15}\).
### Definition of numbers with trivial triangular remainder
A number \(q\) with trivial triangular remainder regarding \(m\) is defined by \(T(q)=\frac{q^{2}+q}{2}\equiv T(t)=\frac{t^{2}+t}{2}\pmod{m}\) with \(q=t\); in other words, \(T(q)\) is its own remainder modulo \(m\).
#### 3.2.1 Conclusion:
All the numbers \(m>1\) have trivial triangular remainder numbers. If \(T(q)=\frac{q^{2}+q}{2}<m\), then \(q\) is a number with trivial triangular remainder.
#### 3.2.2 Example:
The numbers with trivial triangular remainder regarding 15 are (0,1,2,3,4).
### Definition of symmetric numbers with trivial triangular remainder
If \(t\) is a number with trivial triangular remainder regarding \(m\) then \(s\) is a symmetric number with trivial triangular remainder if \(s=m-t-1\).
#### 3.3.1 Conclusion:
There is the same quantity of symmetric numbers with trivial triangular remainder as of numbers with trivial triangular remainder.
#### 3.3.2 Example:
The symmetric numbers with trivial triangular remainder regarding 15 are (14,13,12,11,10).
### Definition of numbers with non trivial triangular remainder
A number \(q\) with non-trivial triangular remainder regarding a number \(m\) is defined by \(T(q)=\frac{q^{2}+q}{2}\equiv T(d)=\frac{d^{2}+d}{2}\pmod{m}\), where \(\frac{d^{2}+d}{2}<m\), \(q\neq d\) and \(m-q-1\neq d\).
#### 3.4.1 Example:
The numbers with non trivial triangular remainder regarding 15 are (5,6,8,9).
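The triangular analogue can be enumerated in the same way (again a small sketch with an illustrative function name):

```
def T(n):
    # n-th triangular number
    return n * (n + 1) // 2

def non_trivial_triangular(m):
    """Numbers q with a non-trivial triangular remainder regarding m."""
    return [q for q in range(m)
            if any(T(q) % m == T(d) and T(d) < m
                   and q != d and m - q - 1 != d
                   for d in range(m))]

print(non_trivial_triangular(15))
# -> [5, 6, 8, 9]
```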
## 4 Invariant and anti-invariant definition
A complete description of the invariant and anti-invariant definitions can be found in [2]. Please refer to that article for more information.
## 5 Properties of invariants
### Proposition 1. Numbers and symmetric numbers with trivial and non-trivial quadratic remainders can be inferred from the anti-invariant/invariant tuples in odd numbers
#### 5.1.1 Demonstration:
The tuples of anti-invariant and invariant numbers can be expressed as \((c,d)\) where \(d=c+1\). If both numbers are multiplied by \(2\cdot s\), where \(s^{2}<m\), two numbers are obtained: \(c^{\prime}=2\cdot c\cdot s\) and \(d^{\prime}=2\cdot c\cdot s+2\cdot s\). The number equidistant from \(c^{\prime}\) and \(d^{\prime}\) can be expressed as \(e=2sc+s\) or, equivalently, as \(e=2sd-s\). Then \(e^{2}\pmod{m}\) is:
\[e^{2}=4s^{2}cd+2ds^{2}-2cs^{2}-s^{2}\equiv 2ds^{2}-2cs^{2}-s^{2}\pmod{m} \tag{3}\]
If \(d\) is substituted by \(d=c+1\), then (3) becomes:
\[e^{2}\equiv 2cs^{2}+2s^{2}-2cs^{2}-s^{2}\equiv s^{2}\pmod{m} \tag{4}\]
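As a concrete check (a small sketch), take \(m=15\) and the non-trivial tuple \((c,d)=(5,6)\), for which \(c\cdot d=30\equiv 0\) and \(6^{2}\equiv 6\pmod{15}\):

```
m = 15
c, d = 5, 6                      # anti-invariant/invariant tuple: c*d = 30, a multiple of 15
assert (c * d) % m == 0 and d == c + 1 and (d * d) % m == d

for s in range(1, 4):            # all s with s^2 < m
    e = (2 * c * s + s) % m      # e = 2cs + s, equidistant from c' and d'
    print(s, e, (e * e) % m)     # e^2 mod m equals s^2, as in the demonstration
# s = 1 gives e = 11 with 11^2 mod 15 = 1: a non-trivial square root of 1,
# so 15 is a compound number.
```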
### Proposition 2. The trivial (0,1) anti-invariant/invariant tuple yields only numbers with trivial quadratic remainder in odd numbers
This is a particular case of 5.1.
#### 5.2.1 Demonstration:
The anti-invariant is 0. Then \(e=2sc+s\) is always \(e=s\). Since \(s^{2}<m\), no numbers with non-trivial quadratic remainder are obtained, because the condition \(e\neq s\) is never met. For this reason, a number with non-trivial quadratic remainder cannot be inferred from the (0,1) trivial anti-invariant/invariant tuple.
### Proposition 3. The trivial (m-1,m) anti-invariant/invariant tuple yields only symmetric numbers with trivial quadratic remainder in odd numbers
This is a particular case of 5.1.
#### 5.3.1 Demonstration:
\((m-1,m)\equiv(-1,0)\pmod{m}\). The invariant is 0. Then \(e=2sd-s\) is always \(e=-s\). Since \(s^{2}<m\), no numbers with non-trivial quadratic remainder are obtained, because the condition \(m-e\neq s\) is never met. For this reason, a number with non-trivial quadratic remainder cannot be inferred from the (m-1,m) trivial anti-invariant/invariant tuple.
### Proposition 4. The numbers with non-trivial quadratic remainders are compound numbers
The existence of a non-trivial quadratic remainder automatically proves that the number is a compound number.
#### 5.4.1 Demonstration:
Using 5.2, 5.3 and the conclusions of [2], it is proved that numbers with non-trivial quadratic remainder can be inferred only from non-trivial anti-invariant/invariant tuples. In [2] it is demonstrated that, in that case, the number is a compound number.
### Proposition 5. The prime number and the remainder 1
If the number \(m\) is prime or a power of a prime, then \(n^{2}\equiv 1\pmod{m}\) only if \(n=1\) or \(n=m-1\).
#### 5.5.1 Demonstration:
This is a particular case of 5.4 with \(s=1\): if a non-trivial quadratic remainder equal to 1 exists, then a non-trivial invariant exists and \(m\) is not prime.
## 6 The triangular number and the invariant
### Proposition 6. The invariant and the triangular number remainder.
The invariant also exists among the triangular number remainders. An invariant \(A\) regarding \(m\) is a number such that \(T_{A}\equiv A\pmod{m}\).
#### 6.1.1 Demonstration:
The multiplication of the two members of the anti-invariant/invariant tuple \((c,d)\) produces a rhomboid number. If this number is divided by 2, the result is the triangular number \(T_{n-1}\) (with \(n=d\)), which is a multiple of \(m\); therefore \(T_{n-1}\equiv 0\pmod{m}\). The next triangular number is:
\[T_{n}=T_{n-1}+n \tag{5}\]
Then:
\[T_{n}=T_{n-1}+n\equiv n\pmod{m} \tag{6}\]
#### 6.1.2 Conclusion:
The invariant is also present among the triangular number remainders. There are tuples \((d-1,d)\) where \(d-1\) gives a zero remainder and \(d\) is an invariant. Some properties can also be extrapolated to the triangular number remainders.
### Proposition 7. The trivial and non-trivial triangular remainders can be inferred from the zero/invariant tuples in odd numbers
From the zero/invariant tuple, the trivial and non-trivial triangular remainders can be inferred. The number \(n=2cs+c+s\) produces a triangular remainder if \(\frac{(s^{2}+s)}{2}<m\), where \(c\) is the position of the zero remainder.
#### 6.2.1 Demonstration:
\[T_{n}=\frac{(2cs+c+s)^{2}+2cs+c+s}{2}=\frac{4(c^{2}+c)(s^{2}+s)+(c^{2}+c)+(s^{2 }+s)}{2} \tag{7}\]
From 6.1, \(\frac{c^{2}+c}{2}\equiv 0\pmod{m}\); then:
\[\frac{4(c^{2}+c)(s^{2}+s)+(c^{2}+c)+(s^{2}+s))}{2}\equiv\frac{(s^{2}+s)}{2} \pmod{m} \tag{8}\]
#### 6.2.2 Conclusion:
If \(\frac{(s^{2}+s)}{2}<m\), all the numbers with trivial and non-trivial triangular remainder can be inferred from the zero/invariant tuples.
### Proposition 8. The trivial (0,1) zero/invariant tuple yields only trivial triangular remainder numbers in odd numbers
This is a particular case of 6.2.
#### 6.3.1 Demonstration:
Suppose that in 6.2 \(c=0\); then \(n=s\). Since \(\frac{(s^{2}+s)}{2}<m\), \(n\) is a trivial triangular remainder.
### Proposition 9. The trivial (m-1,m) zero/invariant tuple yields only trivial symmetric triangular remainder numbers in odd numbers
This is a particular case of 6.2.
#### 6.4.1 Demonstration:
\((m-1,m)\equiv(-1,0)\pmod{m}\). In this case \(c=-1\) and \(n=-s-1\). The triangular remainder of \(n\) is:
\[\frac{(-s-1)^{2}-s-1}{2}\equiv\frac{s^{2}+s}{2}\pmod{m} \tag{9}\]
Then \(n\equiv m-s-1\pmod{m}\) with \(\frac{(s^{2}+s)}{2}<m\). In conclusion, \(n\) is a symmetric number with trivial triangular remainder.
### Proposition 10. The numbers with non-trivial triangular remainders are compound numbers
The existence of a non-trivial triangular remainder automatically proves that the number is a compound number.
#### 6.5.1 Demonstration:
By 6.3, 6.4 and the conclusions of [2], it is proved that non-trivial triangular remainders can be inferred only from non-trivial zero/invariant tuples. The existence of a non-trivial invariant (see [2]) demonstrates that the number is a compound number.
## 7 Conclusions. The invariant's properties can be used to develop new algorithms to test primality
From 5.5, 6.5 and [2] it is possible to employ different strategies in order to test primality or to obtain additional information to factorize the number. The main strategies are:
### Using the invariant to test the primality
The search for invariants can be used to test primality and even provide additional information related to the factorization. The main advantage is that it is not necessary to check whether intermediate values are squares or triangular numbers. On the other hand, it is the method with the fewest potential solutions: there are \(2^{\beta(m)}-2\) non-trivial invariants.2
In addition, the algorithms can be distributed across different machines, can run in both directions, and statistical strategies can be used.
Footnote 2: \(\beta(m)\) is the number of prime factors of \(m\).
#### 7.1.1 Algorithm 1. Algorithm to test the primality using invariant remainders
A very simple algorithm can be created to check primality, and even to identify whether the number is a power of a prime or a compound number, just by searching for invariants. This algorithm goes from bottom to top, but it could also run in the opposite direction. The procedure is as follows:
1. Create a counter C1. This counter goes from 2 to (m-1)/2 and at each iteration it is increased by 1.
2. Create a second counter C2. This counter starts at 4 and at each iteration it is increased by 2*C1-1.
3. If at one iteration C2 is bigger than m, then C2=C2-m.
4. If C2=m, then the program stops and m is a power.
5. If C2=C1, then the program stops and m is a compound number.
6. Finally, if C1=(m-1)/2 and the program did not stop before, the program stops and m is a prime number.
As an example, three different numbers are tested and the result of the algorithm is shown for each. We start with 55:
| Iteration | C1 | C2 | Result |
|---|---|---|---|
| 1 | 2 | 4 | |
| 2 | 3 | 9 | |
| 3 | 4 | 16 | |
| 4 | 5 | 25 | |
| 5 | 6 | 36 | |
| 6 | 7 | 49 | |
| 7 | 8 | 64-55=9 | |
| 8 | 9 | 26 | |
| 9 | 10 | 45 | |
| 10 | 11 | 66-55=11 | **11=11 is compound** |
If the same process is carried out for a prime number like 23:
| Iteration | C1 | C2 | Result |
|---|---|---|---|
| 1 | 2 | 4 | |
| 2 | 3 | 9 | |
| 3 | 4 | 16 | |
| 4 | 5 | 25-23=2 | |
| 5 | 6 | 13 | |
| 6 | 7 | 26-23=3 | |
| 7 | 8 | 18 | |
| 8 | 9 | 35-23=12 | |
| 9 | 10 | 31-23=8 | |
| 10 | 11 | 29-23=6 | **is prime** |
It can be observed that \(C1=\frac{m-1}{2}\) was reached while C2 was never 0 nor equal to C1, so the number is prime. Finally, if 9 is tested:
| Iteration | C1 | C2 | Result |
|---|---|---|---|
| 1 | 2 | 4 | |
| 2 | 3 | 9-9=0 | **At least one of its factors is a power.** |
**Factorization**
The result provides an anti-invariant/invariant tuple \((c,d)\). Each of them is a multiple of one factor, and for this reason factorizing \(c\) or \(d\) provides one of the factors. Another strategy is to multiply \(c\cdot d=f\); then \(f/m=g\), and the factorization of \(g\) gives all the factors independently of \(c\) and \(d\).
**Code**
The following code implements the previous algorithm. The code is not optimized, but it gives an idea of how simple the method is.

```
string = input('please insert an odd number: ')
num = int(string)

# initialize control variable
control = int((num - 1) / 2)
prime = True

# initialize the other variables: C2 tracks C1^2 modulo num
C1 = 2
C2 = 4

# control loop
while control >= C1:
    if C2 > num:
        C2 = C2 - num
    elif C2 == num:
        print(f'the number is raised to a power: {C1}')
        prime = False
        break
    if C1 == C2:
        print(f'the number is not a prime: {C1}')
        prime = False
        break
    C1 = C1 + 1
    C2 = C2 + 2 * C1 - 1

if prime:
    print('the number is a prime')
```
### Using the non-trivial quadratic remainders to test the primality
Another possibility could be the non-trivial quadratic remainders, which would also provide additional information related to the factorization. There are \(\epsilon(m)(2^{\beta(m)}-2)\) solutions.3
Footnote 3: \(\epsilon(m)\) is the quantity of numbers \(n\) with trivial quadratic remainder such that \(\gcd(n,m)=1\).
The main disadvantage is the need to check, at each step, whether a value is a perfect square. These algorithms can be distributed across different machines and run in both directions.
#### 7.2.1 Algorithm 2. Algorithm to test the primality using non-trivial quadratic remainders, from top to bottom
Here primality is tested by searching for non-trivial quadratic remainders, proceeding from top to bottom. The algorithm is:
1. Create a counter C1. This counter increases from 1 to (m-1)/2-\(\sqrt{m}\) and at each iteration it is increased by 1.
2. Create a second counter C2. This counter starts at the remainder of \(((m-1)/2)^{2}\) modulo m and at each iteration it is increased by 2*C1.
3. Create a number P2 that contains the square number nearest to C2.
4. If at one iteration C2 is bigger than m, then C2=C2-m.
5. If C2=m, then the program stops and m is a power.
6. If C2=P2, then the program stops and m is a compound number.
7. Finally, if C1=(m-1)/2-\(\sqrt{m}\) and the program did not stop before, the program stops and m is a prime number.
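The counter-based procedure can be transcribed into a few lines. The sketch below (the function name is ours, and it recomputes \(x^{2}\bmod m\) directly instead of updating the counters incrementally) reproduces the outcomes of the three worked examples that follow:

```
import math

def has_nontrivial_quadratic_remainder(m):
    """Sketch of Algorithm 2 as a direct search: scan x from (m-1)/2 down to
    ceil(sqrt(m)) and test whether x^2 mod m is itself a perfect square."""
    for x in range((m - 1) // 2, math.isqrt(m - 1), -1):
        c2 = (x * x) % m              # the counter C2 of the description
        if c2 == 0:
            return True               # a prime factor of m is repeated ("m is a power")
        r = math.isqrt(c2)
        if r * r == c2:               # C2 coincides with the nearest square P2
            return True               # non-trivial quadratic remainder: m is compound
    return False

for n in (93, 23, 9):
    print(n, "compound" if has_nontrivial_quadratic_remainder(n) else "prime")
# prints: 93 compound, 23 prime, 9 compound
```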
For example, the primality of 93 is going to be checked:
| Iteration P1 | C2 | P2 | Result |
|---|---|---|---|
| 0 | 70 | 81 | |
| 1 | 72 | 81 | |
| 2 | 76 | 81 | |
| 3 | 82 | 100 | |
| 4 | 90 | 100 | |
| 5 | 100-93=7 | 100 | |
| 6 | 19 | 25 | |
| 7 | 33 | 36 | |
| 8 | 49 | 49 | 49=49, is not prime |
If the same process is carried out for a prime number like 23:
| Iteration P1 | C2 | P2 | Result |
|---|---|---|---|
| 0 | 6 | 9 | |
| 1 | 8 | 9 | |
| 2 | 12 | 16 | |
| 3 | 18 | 25 | |
| 4 | 26-23=3 | 4 | |
| 5 | 13 | 16 | |
| 6 | 25-23=2 | 18 | end is found, is prime |
Finally if 9 is tested then:
| Iteration P1 | C2 | P2 | Result |
|---|---|---|---|
| 0 | 7 | 9 | |
| 2 | 9 | 9-9=0 | **At least one of its factors is a power** |
**Factorization**
This produces two square numbers, \(((m-1-2P1)/2)^{2}\) and \(P2\), from which the following numbers are obtained: \(c=((m-1-2P1)/2)+\sqrt{P2}\) and \(d=((m-1-2P1)/2)-\sqrt{P2}\). From the properties of the invariant it can be seen that each of them contains a factor of \(m\).
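For the example \(m=93\) above, the detection occurred at \((m-1-2P1)/2=38\) with \(P2=49\), so \(c=38+7=45\) and \(d=38-7=31\). A quick check (a sketch) confirms that each of them shares a factor with 93:

```
import math

m = 93
x, r = 38, 7                            # from the table: 38^2 mod 93 = 49 = 7^2
c, d = x + r, x - r                     # c = 45, d = 31
print(math.gcd(c, m), math.gcd(d, m))   # -> 3 31, and indeed 93 = 3 * 31
```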
Footnote 2: \(\gamma(m)\) is the quantity of numbers \(n\) with trivial triangular remainder such that \(\gcd(n,m)=1\).
#### 7.3.1 Algorithm 3. Algorithm to test the primality using non-trivial triangular remainders
Here primality is tested by searching for non-trivial triangular remainders, proceeding from top to bottom. The algorithm is as follows:
1. Create a counter C1. This counter increases from 1 to (m-1)/2-s, where s is the last number such that \(T(s)<m\), and at each iteration it is increased by 2.
2. Create a second counter C2. This counter starts at the remainder of \(T((m-1)/2)\) modulo m and at each iteration it is increased by 2*C1.
3. Create a third counter C3. This counter starts at the remainder of \(T((m-3)/2)\) modulo m and at each iteration it is increased by 2*(C1-1).
4. Create a number P2 that contains the triangular number nearest to C2.
5. Create a number P3 that contains the triangular number nearest to C3.
6. If at one iteration C2 is bigger than m, then C2=C2-m.
7. If at one iteration C3 is bigger than m, then C3=C3-m.
8. If C2=m, then the program stops and m is a power.
9. If C3=m, then the program stops and m is a power.
10. If C2=P2, then the program stops and m is a compound number.
11. If C3=P3, then the program stops and m is a compound number.
12. Finally, if C1=(m-1)/2-s and the program did not stop before, the program stops and m is a prime number.
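As with Algorithm 2, the incremental counters can be replaced by a direct search for a non-trivial triangular remainder. The sketch below (function names are ours) checks, for each \(x\) going down from \((m-1)/2\), whether \(T(x)\bmod m\) is itself a triangular number \(T(d)<m\) with a non-trivial witness \(d\):

```
def T(n):
    return n * (n + 1) // 2

def has_nontrivial_triangular_remainder(m):
    """Direct (non-incremental) version of the triangular search."""
    x = (m - 1) // 2
    while T(x) >= m:                      # stop once the trivial range is reached
        rem = T(x) % m
        d = 0
        while T(d) < rem:                 # smallest d with T(d) >= rem
            d += 1
        if T(d) == rem and d != x and d != m - x - 1:
            return True                   # non-trivial triangular remainder found
        x -= 1
    return False

for n in (15, 23, 55):
    print(n, "compound" if has_nontrivial_triangular_remainder(n) else "prime")
# prints: 15 compound, 23 prime, 55 compound
```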
| Iteration C1 | C2 | P2 | C3 | P3 | Result |
|---|---|---|---|---|---|
| 0 | 12 | 15 | 58 | 66 | |
| 1 | 16 | 21 | 60 | 66 | |
| 2 | 24 | 28 | 66 | 66 | T(45-2)=946 and 66=66, is not prime |
If the same process is carried out for a prime number like 23:
| Iteration C1 | C2 | P2 | C3 | P3 | Result |
|---|---|---|---|---|---|
| 0 | 9 | 10 | 20 | 21 | |
| 1 | 13 | 15 | 22 | 25 | |
| 2 | 21 | 21 | 5 | 6 | T(10-4)=21 and 21=21, is prime |
Finally if 15 is tested then:
| Iteration C1 | C2 | P2 | C3 | P3 | Result |
|---|---|---|---|---|---|
| 0 | 6 | 6 | 13 | 15 | T(6)=21, 6=6, is not prime |
| 1 | 10 | 10 | 15 | 15 | C3=number is a triangular number |
| 2 | 24 | 28 | 66 | 66 | 66=66, is not prime |
|
2303.02172 | Observable signatures of stellar-mass black holes in active galactic
nuclei | Stellar-mass black holes (BHs) are predicted to be embedded in the disks of
active galactic nuclei (AGN) due to gravitational drag and in-situ star
formation. However, clear evidence for AGN disk-embedded BHs is currently
lacking. Here, as possible electromagnetic signatures of these BHs, we
investigate breakout emission from shocks emerging around Blandford-Znajek jets
launched from accreting BHs in AGN disks. We assume that the majority of the
highly super-Eddington flow reaches the BH, produces a strong jet, and the jet
produces feedback that shuts off accretion and thus leads to episodic flaring.
While these assumptions are highly uncertain at present, they predict a
breakout emission characterized by luminous thermal emission in the X-ray
bands, and bright, broadband non-thermal emission from the infrared to the
gamma-ray bands. The flare duration depends on the BH's distance $r$ from the
central supermassive BH, varying between $10^3-10^6$ s for $r \sim 0.01-1$ pc.
This emission can be discovered by current and future infrared, optical, and
X-ray wide-field surveys and monitoring campaigns of nearby AGNs. | Hiromichi Tagawa, Shigeo S. Kimura, Zoltán Haiman, Rosalba Perna, Imre Bartos | 2023-03-03T19:00:02Z | http://arxiv.org/abs/2303.02172v1 | # Observable Signatures of Stellar-mass Black Holes in Active Galactic Nuclei
###### Abstract
Stellar-mass black holes (BHs) are predicted to be embedded in the disks of active galactic nuclei (AGN) due to gravitational drag and in-situ star formation. However, clear evidence for AGN disk-embedded BHs is currently lacking. Here, as possible electromagnetic signatures of these BHs, we investigate breakout emission from shocks emerging around Blandford-Znajek jets launched from accreting BHs in AGN disks. We assume that the majority of the highly super-Eddington flow reaches the BH, produces a strong jet, and the jet produces feedback that shuts off accretion and thus leads to episodic flaring. While these assumptions are highly uncertain at present, they predict a breakout emission characterized by luminous thermal emission in the X-ray bands, and bright, broadband non-thermal emission from the infrared to the gamma-ray bands. The flare duration depends on the BH's distance \(r\) from the central supermassive BH, varying between \(10^{3}-10^{6}\) s for \(r\sim 0.01-1\) pc. This emission can be discovered by current and future infrared, optical, and X-ray wide-field surveys and monitoring campaigns of nearby AGNs.
gravitational waves - stars: black holes - galaxies: active
## 1 Introduction
It is a common belief that stars and compact objects (COs), including stellar-mass black holes (BHs), are embedded in the disks of active galactic nuclei (AGNs) due to capture via dynamical interactions between the nuclear star cluster (NSC) and the AGN disk (Ostriker, 1983; Syer et al., 1991), and in-situ star formation (Levin & Beloborodov, 2003; Goodman & Tan, 2004; Thompson et al., 2005; Levin, 2007). There are several observations supporting this picture. The high metallicity of quasars is presumably related to frequent explosive phenomena of COs and stars in AGN disks (Artymowicz et al., 1993; Wang et al., 2010; Xu et al., 2018; Wang et al., 2021; Toyouchi et al., 2021). The existence of young stars (Genzel et al., 2003; Levin & Beloborodov, 2003) and clusters (Milosavljevic & Loeb, 2004) around Sgr A*, as well as the high metallicity component of NSCs (Antonini et al., 2015; Do et al., 2020; Neumayer et al., 2020; Fahrion et al., 2021) imply that stars, and hence COs, form in-situ in AGN disks. Furthermore, the spatial distribution of low-mass X-ray binaries discovered in the Galactic center (Hailey et al., 2018; Mori et al., 2021) is consistent with the evolution of COs and stars in an AGN disk (Tagawa et al., 2020).
AGN disks are plausible environments for BH-BH (e.g. Bartos et al., 2017; Stone et al., 2017; McKernan et al., 2018; Yang et al., 2019; Tagawa et al., 2020) and BH-neutron star (NS) mergers (McKernan et al., 2020; Tagawa et al., 2021; Yang et al., 2020) reported as gravitational wave (GW) events by the LIGO (Aasi et al., 2015), Virgo (Acernese et al., 2015) and KAGRA (Akutsu et al., 2021) detectors (Venumadhav et al., 2019; Abbott et al., 2020; The LIGO Scientific Collaboration et al., 2021). This pathway can explain the distributions of masses, mass ratios (Yang et al., 2020; Gayathri et al., 2021), spin vectors (Tagawa et al., 2020), and correlation between the masses and spin magnitudes (Tagawa et al., 2021) for the bulk of merging events. Furthermore, AGN disks are promising environments to explain the characteristic properties, high mass (Tagawa et al., 2021), possible high eccentricity (Samsing et al., 2020; Tagawa et al., 2021; Romero-Shaw et al., 2020; Gayathri et al., 2022), and hypothesized electromagnetic (EM) counterpart, ZTF19abanrhar (Graham et al., 2020), of the unexpected GW event GW190521 (Abbott et al., 2020). In addition, the first GW event, GW150914 (Abbott et al., 2016), might be associated with a bright gamma-ray event, GW150914-GBM (Connaughton et al., 2016, 2018, but see Greiner et al., 2016; Xiong, 2016; Savchenko et al., 2016), which may imply a merger in a gas-rich environment.
Recently, several studies have investigated emission from transients emerging from AGN disks. Zhu et al. (2021), Zhu et al. (2021), Perna et al. (2021), Yuan et al. (2021), Wang et al. (2022), and Lazzati et al. (2022) estimated the emission from gamma-ray bursts, and Perna et al. (2021) and Zhu et al. (2021) discussed the electromagnetic signatures expected from accretion induced collapse of neutron stars and white dwarfs. Yang et al. (2021) studied the properties of tidal disruption of stars by stellar-mass BHs, while Grishin et al. (2021) investigated supernova explosions, and Bartos et al. (2017) and Stone et al. (2017) estimated the electromagnetic emission produced by thermal radiation and/or outflows
from circum-BH disks in AGN disks. There are several studies which investigated possible transients from merging BHs in AGN disks, focusing on the association of the optical flare, ZTF19abanrhr, with the BH merger. McKernan et al. (2019) discussed emission from shocks caused by collision between gas bound to the merged remnant and unbound gas after recoil kicks due to anisotropic radiation of GWs. Graham et al. (2020) assessed the net luminosity and timescales for gas accretion induced by recoil kicks. de Mink and King (2017) considered flares emerging from shocks in a circum-BH disk due to recoil kicks. Kimura et al. (2021) and Wang et al. (2021, 2021), respectively, considered thermal and non-thermal emission from bubbles around BHs due to strong outflows considering continuous and episodic super-Eddington accretion, and Wang et al. (2021) further considered emission from shocks emerging due to interactions of Blandford-Znajek (BZ) jets (Blandford and Znajek, 1977) launched from accreting BHs to the broad line regions. Tagawa et al. (2022) (hereafter Paper I) estimated the structure of the cavity created by the BZ jet and dynamical evolution of gas around the BHs. Tagawa et al. (2023) (hereafter Paper II) investigated the properties of emission from shocks emerging around jets launched from a BH merger remnant.
In this paper, we apply the method developed in Paper II, and evaluate properties and observabilities of thermal and non-thermal emission from shocks emerging around jets launched by accreting solitary BHs due to the BZ effect (Fig. 1). We find that thermal emission is bright in X-ray bands, while non-thermal emission is bright in infrared to gamma-ray bands. This emission is predicted to be discoverable by current and future optical and X-ray telescopes, that is the Zwicky Transient Facility (ZTF), Vera Rubin, XMM-Newton, HiZ-GUNDAM, Einstein Probe, NuSTAR, FORCE, XRT, Chandra, JWST, and WISE.
## 2. Emission
In this section we describe the method for calculating the properties of the breakout emission produced from solitary BHs in AGN disks. More details on this computation are provided in Paper II.
### Mechanisms for breakout emission
Here we highlight the physical mechanisms responsible for producing the breakout emission from solitary BHs in AGN disks (see Fig. 1 for a schematic representation). In these disks, isolated BHs are surrounded by circum-BH disks, since the gas captured by BHs from the AGN disk has enough angular momentum to circularize around the BHs (Tanigawa et al., 2012). When the circum-BH disk is advection dominated, as expected here, a magnetically dominated state can be realized (e.g. Meier, 2001; Kimura et al., 2021) owing to the accumulation of the magnetic flux in the vicinity of the BH (Cao, 2011). Even if the magnetic flux is initially weak, the outflow from the disk converts the toroidal magnetic field generated by the shear motion into a poloidal field (Liska et al., 2020). In these cases, jets from spinning BHs can be launched through the BZ process (Blandford and Znajek, 1977). The jet power (\(L_{\rm j}\)) is proportional to the mass accretion rate onto the BH (\(\dot{m}\)),
\[L_{\rm j}=\eta_{\rm j}\dot{m}c^{2}, \tag{1}\]
where \(\eta_{\rm j}\) is the conversion efficiency from rest mass to jet power, which is approximated by \(\eta_{\rm j}\sim a_{\rm BH}^{2}\) for a magnetically dominated jet (e.g. Tchekhovskoy et al., 2010; Narayan et al., 2021), \(a_{\rm BH}\) is the dimensionless spin of the BH (see SS 2.2 for its choice), and \(c\) is the speed of light. Since the power of a shock emerging around the jet and the luminosity of radiation emitted from the shock are roughly proportional to the jet power, the accretion rate onto the BH is a key quantity to determine the observed luminosity from the system.
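As a rough numerical illustration of Eq. (1) (a sketch in cgs units; the accretion rate is the fiducial value quoted in Section 3 for a BH at \(R=1\) pc, and the Eddington rate is taken as \(L_{\rm Edd}/c^{2}\)):

```
# Order-of-magnitude evaluation of the jet power L_j = eta_j * mdot * c^2 (Eq. 1).
c     = 3.0e10                 # speed of light [cm/s]
M_sun = 1.989e33               # solar mass [g]
yr    = 3.156e7                # year [s]

eta_j = 0.1                    # fiducial jet efficiency (~ a_BH^2)
mdot  = 3e-4 * M_sun / yr      # fiducial accretion rate at R = 1 pc [g/s]

L_j   = eta_j * mdot * c**2
L_Edd = 1.26e38 * 10           # Eddington luminosity of a 10 M_sun BH [erg/s]
print(f"L_j ~ {L_j:.1e} erg/s, mdot / (L_Edd/c^2) ~ {mdot / (L_Edd / c**2):.1e}")
# -> L_j ~ 1.7e+42 erg/s and an accretion rate of order 10^4 times Eddington,
#    consistent with the values quoted in the text.
```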
The accretion rate onto a circum-BH disk in the AGN disk is often evaluated via a modified Bondi-Hoyle-Lyttleton (BHL) rate, as given by Eq. (1) of Paper I. To consider a possible reduction from the BHL rate, we parameterized the fraction of the accretion rate onto the BH (\(\dot{m}\)) over the Bondi-Hoyle-Lyttleton rate (\(\dot{m}_{\rm BHL}\)) as \(f_{\rm acc}=\dot{m}/\dot{m}_{\rm BHL}\). For example, low \(f_{\rm acc}\) may be predicted due to winds from an accretion disk with a super-Eddington rate, although recent simulations suggest that the conversion to wind is moderate (Kitaki et al., 2021) for accretion flows in which the circularization radius (where gas is circularized after being captured by a BH) is much larger than the trapping radius (within which photons are advected to a BH without escaping), as is the case for BHs embedded in an AGN disk. In addition, the accretion rate onto a BH in a cavity during the active phases is estimated to be lower by a factor of a few compared to that without a cavity (Tagawa et al., 2022). As a fiducial value, we simply adopt \(f_{\rm acc}=1\).
Once the jet collides with the AGN gas, a cocoon of shocked gas forms around the jet. Due to the high pressure of the cocoon, AGN gas around the BH, together with the outer regions of the circum-BH disk, are quickly evacuated. The BH keeps accreting and the jet remains active until the inner remnant regions of the truncated circum-BH disk are consumed by the accretion. Subsequently, the BH is quiescent and the cavity begins to fill in gradually. Finally, AGN gas is recaptured by the BH, and the cocoon reopens a cavity. We predicted in Paper I that such a cycle repeats many times until the dissipation of the AGN disk.
As the jet collides with unshocked gas in the AGN disk, strong shocks form. During the early phases, photons in the shocked medium cannot escape from the system because they are surrounded by the optically thick AGN disk. As the shock approaches the surface of the AGN disk, thermal photons in the shocks begin escaping from the system, and non-thermal electrons begin to be accelerated due to the formation of collisionless shocks, leading to luminous thermal and non-thermal emission. As non-thermal emission, we take into account synchrotron radiation, synchrotron-self Compton scattering, and second-order inverse Compton scattering. Because of the high density of AGN gas, we need to consider synchrotron self-absorption.
In Paper II we predicted the properties of the breakout emission emerging from merger remnant BHs, and the same formulae can be applied to the emission from solitary BHs. Hence here, by applying the models constructed in Paper II, we discuss the properties and the
observability of the breakout emission from solitary BHs.
### Numerical choices
In the fiducial model we adopt the same parameter values as in Paper I. More specifically: the BH mass is \(m=10\,\mathrm{M}_{\odot}\), the radial distance of the BH from the central SMBH is \(R_{\mathrm{BH}}=1\,\mathrm{pc}\), the mass of the SMBH is \(M=10^{6}\,\mathrm{M}_{\odot}\), the gas inflow rate from the outer boundary (\(R_{\mathrm{out}}=5\,\mathrm{pc}\)) of the AGN disk is \(\dot{M}_{\mathrm{in}}=1\,\,L_{\mathrm{Edd}}/c^{2}\), where \(L_{\mathrm{Edd}}\) is the Eddington luminosity of the SMBH, the angular momentum transfer parameter in the outer AGN disk is \(m_{\mathrm{AM}}=0.15\)(Thompson et al., 2005), the viscous parameter in the inner disk is \(\alpha_{\mathrm{AGN}}=0.1\)(King et al., 2007; Martin et al., 2019), and the opening angle of the injected jet is \(\theta_{0}=0.2\)(e.g. Pushkarev et al., 2009; Hada et al., 2013, 2018; Berger, 2014).
We set the jet energy conversion efficiency to \(\eta_{\mathrm{j}}=0.1\) considering that spin-up by accretion and spin-down by the BZ jet may be roughly equal at around \(a_{\mathrm{BH}}\lesssim 0.3\)1 (e.g., Fig. 10 of Narayan et al., 2021), the fraction of postshock energy carried by the post shock magnetic field and by electrons to \(\epsilon_{\mathrm{B}}=0.03\)(e.g., Panaitescu & Kumar, 2001; Uchiyama et al., 2007; Santana et al., 2014) and \(\epsilon_{\mathrm{e}}=0.1\)(e.g., Waxman & Loeb, 1999; Panaitescu & Kumar, 2001; Sironi et al., 2013; Santana et al., 2014), respectively, and the power-law slope for injected electrons accelerated by the first-order Fermi process to \(p=2.5\).
Footnote 1: The spin magnitude of the BHs in the AGN disk is assumed to be lower than that observed for typical X-ray binaries (Reynolds, 2021). This is because the former keeps powering a BZ jet while accreting, while the other does not in soft states. The fiducial choice is conservative, and the breakout emission becomes brighter if the spin magnitude is higher.
## 3 Properties of Breakout Emission
In the following we discuss the properties of the breakout emission (see Paper II for computational methods).
### Properties of breakout emission from solitary BHs
In the outer regions of \(R\gtrsim 0.1\) pc for the fiducial model, since the aspect ratio (\(H_{\mathrm{AGN}}/R\)) of the AGN disk is large due to intense star formation to stabilize the disk (Thompson et al., 2005), the accretion rates onto BHs, and accordingly the breakout luminosity \(L_{\mathrm{breakout}}\), are low (Fig.2 a). In the inner regions of \(R\lesssim 10^{-2}\) pc,
Figure 1: Schematic picture of quiescent phases (left) and the breakout emission from the head of a jet launched from a solitary BH (right) embedded in an AGN disk.
Figure 2: The luminosity and duration as a function of the distance from the SMBH (\(R\)) for emission from breakout of shocks produced around solitary BHs in the fiducial model. (a) The shock kinetic (solid black), breakout (dashed black), and non-thermal (solid orange) luminosity. (b) The duration of emission, \(t_{\mathrm{duration}}\). The BH locations adopted in the fiducial model are indicated with filled circles superposed on the black solid lines.
\(L_{\rm breakout}\) is low since gaps form in these regions, which reduce the accretion rates onto BHs. The accretion rates onto BHs at \(R=1\) and \(10^{-2}\) pc are, respectively, \(3\times 10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(3\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\), corresponding to \(\sim 10^{4}\) and \(10^{5}\) times the Eddington rate. Despite the fact that the accretion rate and the duration of accretion are, respectively, much lower and longer than for gamma-ray bursts, the range of values for the plasma parameters (\(\epsilon_{\rm e}\), \(\epsilon_{\rm B}\)) can be reasonably adopted from observations of afterglow emission of gamma-ray bursts (e.g. Panaitescu & Kumar, 2001; Santana et al., 2014). This is motivated by the consideration that the basic physics of the radiation processes in the two contexts is similar, since in both cases the emission is produced by the interaction of a relativistic jet with the surrounding medium. However, we do note that the jet production physics may be different (e.g. Bromberg & Tchekhovskoy, 2016; Liu et al., 2017), the compositions of the jets may be different (e.g. Beloborodov, 2003; Kimura et al., 2022; Chen et al., 2022), the duration and the accretion rate are different, and the contribution of synchrotron self-absorption to emission is different due to the difference in the density of the ambient material. The above differences may result in significantly different distributions of the plasma parameters between the jets from BHs in AGN disks and those in gamma-ray bursts.
Figure 3.— The spectral energy distribution for non-thermal (thick solid black) and thermal (thick solid brown) emission in the fiducial model (§ 2.2) at \(R=1\) pc. The left, middle, and right components in black lines represent synchrotron emission, synchrotron-self Compton (SSC), and second-order inverse Compton (IC) scattering, respectively. Blue and dashed blue lines represent emission from the host AGN and its variability, respectively. Dotted cyan, green, red, purple, gray, gold, pink, and orange lines indicate the sensitivities of ZTF, Vera Rubin and Roman space telescope, Chandra and XMM-Newton, HiZ-GUNDAM and Einstein Probe, BAT and XRT, NuSTAR and FORCE, WISE, and JWST, respectively. The results are also shown for models with a lower accretion rate onto the BH (\(f_{\rm acc}=0.1\), thin solid lines) or lower efficiencies of electron acceleration (\(\epsilon_{\rm e}=0.01\), thin dashed) in panel (a), and with lower magnetic field amplification (\(\epsilon_{\rm B}=10^{-5}\), thin dotted) or a higher jet efficiency (\(\eta_{\rm j}=1\), thin dashed-dotted) in panel (b).
Figure 4.— Same as Fig. 3, but for BH sources at \(R=0.01\) pc.
It is predicted that BHs tend to reside in the outer regions with \(R\gtrsim\) pc since the Type I migration timescale is long, as well as in the inner regions with \(R\lesssim 10^{-2}\) pc as annular gaps are predicted to form where migration is slow causing BHs to accumulate (e.g. Tagawa et al., 2020; Gilbaum & Stone, 2021; Perna et al., 2021). Therefore, below we consider the two cases with BHs at 1 pc and at \(10^{-2}\) pc as representative examples.
The spectral energy distributions for emission from BHs at \(R=1\) pc and \(R=0.01\) pc in the fiducial model are shown in Figs. 3 and 4, respectively. In this model, the thermal emission is computed under the assumption that the radiation energy in the shock is released once the shock becomes optically thin (e.g. Levinson & Nakar, 2020). Non-thermal emission is produced from electrons accelerated at collisionless shocks by synchrotron radiation, synchrotron-self Compton scattering, and second-order inverse Compton scattering.
The non-thermal emission is bright from the infrared to the gamma-ray bands (solid black lines). The three peaks are contributed by synchrotron radiation, synchrotron-self Compton scattering, and second-order inverse Compton scattering, while the lower cutoff in non-thermal emission is due to synchrotron self-absorption. The thermal emission is mostly bright in X-rays (solid brown lines), and it has a much higher luminosity than the non-thermal emission (due to the reduction by a factor of \(\epsilon_{\rm e}\) compared to the total energy).
In Paper I we evaluated the breakout of shocks around a jet produced from a solitary BH and found it to be episodic. Emission phases last for the consumption timescale of the circum-BH disk with \(t_{\rm cons}\sim 300\) yr, followed by quiescent phases lasting for the resupply timescale of \(t_{\rm re}\sim 10^{4}\) yr at \(R\sim 0\) pc, with the cycle repeating. At \(R\sim 10^{-2}\) pc, \(t_{\rm cons}\sim 1\) yr and \(t_{\rm re}\sim 10\) yr. By using the duration of emission (solid black line in Fig. 2 b), which is \(t_{\rm duration}\sim 2\times 10^{6}\) s at \(R\sim\) pc and \(\sim 200\) s at \(R\sim 0.01\) pc, the total duration for breakout emission to be released from one BH over the AGN lifetime time is estimated to be \(f_{\rm active}\sim t_{\rm duration}/t_{\rm re}\sim 10^{-5}\)-\(10^{-6}\). Here, note that the duration is reduced for the model with \(\eta_{\rm j}=1\) to \(7\times 10^{5}\) s at \(R=1\) pc and 300 s at \(R=0.01\) pc, and enhanced for the model with \(f_{\rm acc}=0.1\) to \(3\times 10^{6}\) s at \(R=1\) pc and \(6\times 10^{3}\) s at \(R=0.01\) pc. As discussed in Paper I, the number of AGN disk-embedded BHs is \(\sim 300\) (\(M_{\rm in}/1\)\(L_{\rm Edd}c^{2}\))\({}^{1/2}\). Using the predicted mass distribution of \(dN_{\rm BH}/dR\propto R^{\gamma_{\nu}}\) with \(-0.5\lesssim\)\(\gamma_{\nu}\lesssim\)\(0\)(Freitag et al., 2006; Hopman & Alexander, 2006; Alexander et al., 2007), we tentatively assume that \(N_{\rm BH}\sim 200\) and \(\sim 30\) BHs are embedded at \(R\sim\) pc and \(\sim 10^{-2}\) pc, respectively. Note that these numbers evolve with time and are highly uncertain. With these, the time interval between flares in one AGN is \(t_{\rm interval}\sim t_{\rm re}/N_{\rm BH}\sim 50\) yr and \(\sim 0.3\) yr at \(R\sim\) pc and \(R\sim 10^{-2}\) pc, respectively. From solitary BHs at \(R\sim\) pc, thermal emission with luminosity of \(\sim 2\times 10^{42}\) erg/s in X-ray bands (brown line in Fig. 3) and non-thermal emission with \(\sim 10^{39}\)-\(10^{41}\) erg/s in infrared to gamma-ray bands (black line in Fig. 3) are predicted with duration of \(t_{\rm duration}\sim 0.1\) yr (solid black line in Fig. 2 b) by observing one AGN for \(t_{\rm interval}\sim 50\) yr. From solitary BHs at \(R\sim 10^{-2}\) pc, thermal emission with luminosity of \(\sim 4\times 10^{43}\) erg/s in hard X-ray bands (brown line in Fig. 4) and non-thermal emission with luminosity of \(\sim 10^{41}\)-\(10^{42}\) erg/s in optical to gamma-ray bands (black line in Fig. 4) are predicted with duration of \(\lesssim 10^{3}\) s by observing an AGN for \(\sim 0.3\) yr.
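The order-of-magnitude flare statistics quoted in this paragraph can be reproduced with a few lines (a sketch; the input values are the ones adopted in the text):

```
# Reproduce the duty cycle f_active ~ t_duration / t_re and the flare interval
# t_interval ~ t_re / N_BH for the two representative radii discussed above.
yr = 3.156e7   # seconds per year
cases = {
    "R ~ 1 pc":    {"t_re_yr": 1e4,  "t_duration_s": 2e6,   "N_BH": 200},
    "R ~ 0.01 pc": {"t_re_yr": 10.0, "t_duration_s": 200.0, "N_BH": 30},
}

for label, p in cases.items():
    f_active   = p["t_duration_s"] / (p["t_re_yr"] * yr)
    t_interval = p["t_re_yr"] / p["N_BH"]
    print(f"{label}: f_active ~ {f_active:.0e}, t_interval ~ {t_interval:.2g} yr")
# prints f_active ~ 6e-06 and 6e-07 (i.e. ~1e-5 - 1e-6) and
# t_interval ~ 50 yr and 0.33 yr, as in the text.
```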
Here, the luminosity from the host AGN in the relevant energy range is
\[\nu L_{\rm AGN}(\nu)\sim 10^{42}\ {\rm erg/s}\ (M/10^{6}\ {\rm M}_{\odot})(\dot{M}c^{2}/1\ L_{\rm Edd})(f_{\rm bol}/10)^{-1}\,. \tag{2}\]
where \(f_{\rm bol}\) is the bolometric correction at the given frequency. As depicted by the blue lines in Figs. 3 and 4, we assume that \(f_{\rm bol}\sim 5\) at \(c/\nu=4400\) A and extrapolate the luminosity for \(10^{12}\ {\rm Hz}\lesssim\nu\lesssim 10^{15}\) Hz using the cyan or blue points in Fig. 7 of Ho (2008) depending on the assumed Eddington rate, and \(f_{\rm bol}\sim 10\) in \(0.1\ {\rm keV}\leq h\nu\)(Ho, 2008; Trakhtenbrot et al., 2017; Duras et al., 2020) with the upper exponential cut off at 300 keV (e.g. Ricci et al., 2018). We also assume that the fraction of the variable luminosity compared to the average luminosity (\(f_{\rm var}\)) in optical bands with \(t_{\rm duration}\lesssim 0.1\) yr is \(f_{\rm var}\lesssim 0.1\)(Kozlowski, 2016) and that in X-ray bands is \(f_{\rm var}\sim 0.3\)(Soldi et al., 2014; Maughan & Reiprich, 2019, dashed blue lines).
In the optical, X-ray, and gamma-ray bands, the luminosity for non-thermal emission at \(R=0.01\) pc exceeds the variable luminosity (solid black and dashed blue lines in Fig. 4). Additionally, the variability of AGNs is typically stronger at shorter wavelengths (Arevalo et al., 2008), while non-thermal emission for \(R=0.01\) pc is brighter at longer wavelengths in the optical bands. This unusual trend can help distinguish the breakout emission from a solitary BH from stochastic AGN variability. Also, the thermal X-ray luminosity clearly exceeds the AGN luminosity at both \(R=1\) pc and 0.01 pc (blue and brown lines in Figs. 3 and 4). Hence, non-thermal emission from BHs at \(R=0.01\) pc and thermal emission at \(R=0.01\) pc and \(R=1\) pc can be recognized as unusual variability of AGNs due to their luminosity and color.
However, we do note that the properties of emission are significantly influenced by uncertainties in the model parameters, namely \(f_{\rm acc}\), \(\epsilon_{\rm e}\), \(\epsilon_{\rm B}\), and \(\eta_{\rm j}\). Since \(\epsilon_{\rm e}\) and \(\epsilon_{\rm B}\) are respectively found to vary within the ranges \(\sim 0.01\)-\(0.3\) and \(\sim 10^{-5}\)-\(0.1\) from GRB afterglow observations (Panaitescu & Kumar, 2001; Santana et al., 2014), and \(\eta_{\rm j}\) can typically range between 0.1-1 as discussed above, in Figs. 3 and 4 we also show emission models with \(\epsilon_{\rm e}=0.01\) (thin dashed lines in panel a), \(\epsilon_{\rm B}=10^{-5}\) (thin dotted lines in panel b), and \(\eta_{\rm j}=1\) (thin dashed-dotted lines in panel b). Given that \(f_{\rm acc}\) is also highly uncertain, we additionally present a model with \(f_{\rm acc}=0.1\) (thin solid
lines in panel a) as a representative example. Figs. 3 and 4 show that non-thermal emission at \(R=0.01\) pc can be dimmer than the AGN variability if the values of the model parameters are in their lower range. On the other hand, thermal emission is relatively less affected by these parameters. If we assume that \(\eta_{\rm j}\) is well constrained by numerical simulations (e.g. Narayan et al., 2021), then thermal emission is almost solely influenced by the accretion rate onto the BHs. Thus, by observing the thermal emission, we can improve our understanding of the accretion processes in super-Eddington regimes.
As an additional point in relation to observations, we note that the estimates above suggest that we need to wait a long time to come across breakout emission by monitoring a single AGN. A more viable strategy would be that of observing many AGNs and check whether there is variability in various bands, as discussed in SS 3.3.
### Differences with respect to the emission from merging remnants
While the basic physical emission mechanisms are the same for solitary BHs and the post-merger BHs discussed in Paper II, there are some important quantitative differences between the two cases, which we highlight below. (1) Merger remnants tend to be massive compared to isolated BHs. (2) Merger remnants have higher BH spin magnitude, and hence higher conversion efficiency \(\eta_{\rm\tilde{b}}\) of mass to jet power. (3) An enhancement of the accretion rate (compared to the solitary BH case) is expected for merger remnants due to shocks emerging in circum-BH disks by GW recoil kicks. (4) The flares from merger remnants are correlated with GW events. Due to (1)-(3), the luminosity of the breakout emission is higher in the case of a post-merger BH, and due to (4) the transients are easier to discover when produced by merger remnants and the associated GW source has already been detected. The above suggests that the emission from solitary BHs is more difficult to observe compared to that from merger remnants; therefore, the search for emission from solitary BHs needs to be strategized, as discussed in the next section.
### Observability of breakout emission
Here, we consider whether emission from solitary BHs can be discovered by current and future observing facilities.
The luminosity from non-thermal emission from solitary BHs at \(R=0.01\) pc exceeds the sensitivity limit by ZTF, Vera Rubin, XRT, Chandra, XMM-Newton, WISE, and JWST at \(d_{\rm L}=30\) Mpc (solid black, dashed cyan, dashed green, dashed gray, dashed red, and dashed pink, and dashed orange lines in Fig. 4). Here, the typical variable luminosity of AGNs with duration of \(\lesssim 0.1\) yr is (Kozlowski, 2016)
\[\nu L_{\rm AGN,vari}(\nu)\sim 2\times 10^{40}\ {\rm erg/s}\ (M/10^{6}\ {\rm M}_{\odot})(\dot{M}c^{2}/0.01\ L_{\rm Edd})(f_{\rm bol}/5)^{-1}(f_{\rm var}/0.1), \tag{3}\]
which is generally lower than the luminosity of breakout emission \(L_{\rm flare}\sim f_{\rm acc}(10^{41}\)-\(10^{42})\) erg/s, unless the reduction in the accretion rate from the Bondi-Hoyle-Lyttleton rate is \(f_{\rm acc}\lesssim\)0.1. Additionally, non-thermal emission for \(R=0.01\) pc is redder around the optical bands as mentioned in SS 3.1. Thus, we can identify the breakout emission by the magnitude of its luminosity as well as by the color of the flare. Here, note that the luminosity of the breakout emission is roughly proportional to the accretion rate onto the SMBH (e.g. Paper I). Hence, it reduces the influence of both the AGN accretion rate \(\dot{M}_{\rm in}\) and mass \(M\) on the detectability of the breakout emission, since the AGN luminosity is also proportional to the accretion rate onto the SMBH. However, in pessimistic cases in which the accretion rate onto BHs is lower than assumed in the fiducial model, or the efficiencies of electron acceleration or magnetic field amplification are lower, then non-thermal emission is dimmer than the typical variability of AGNs, and hence it would be difficult to observe.
On the other hand, thermal emission from solitary BHs at \(R=0.01\) pc and \(R=1\) pc can most likely be discovered by several X-ray telescopes (Figs. 3 and 4), unless the accretion rate onto the BHs is significantly lower than assumed in our fiducial model. For emission from \(R=1\) pc, since the flare is as rare as \(t_{\rm interval}\sim 50\) yr per AGN, many (\(\sim t_{\rm interval}/t_{\rm obs}\sim 50\,(t_{\rm interval}/50\ {\rm yr})(t_{\rm obs}/{\rm yr})^{-1}\)) AGNs need to be observed simultaneously for a flare to be discovered within the observational timescale (\(t_{\rm obs}\)). This requires wide-field surveys, such as HiZ-GUNDAM/Einstein Probe (Table 1). Multi-wavelength observations are likely to be key to identifying the breakout emission, because in the solitary BH model flares occur simultaneously over a broad range of bands (infrared, optical, and X-ray), while for AGN variability the evolution is delayed depending on the frequency. The actual false-alarm probability for the detection of the breakout emission should be quantified in future work, using observed multi-band AGN light curves.
We also estimate observability when the BH spin magnitude is maximal (\(a_{\rm BH}\sim 1\)), and so is the jet energy conversion efficiency (\(\eta_{\rm j}\sim 1\), Narayan et al., 2021), as an upper limit (dotted lines in panels b of Figs. 3 and 4). In this case the luminosity of the breakout emission is enhanced by an order of magnitude compared to that in the fiducial model. Additionally, the shock velocity is higher by a factor of \(\sim 1.6\)(Bromberg et al., 2011), and the frequency of the emission is enhanced accordingly. Then, the non-thermal emission from the BH at \(R=0.01\) pc would be detectable by ZTF and the HiZ-GUNDAM/Einstein Probe. Additionally, the non-thermal emission for \(R=1\) pc can be identified by wide-field surveys in radio bands, such as the Atacama Cosmology Telescope (Naess et al., 2021) and the Large Submillimeter Telescope (Kawabe et al., 2016). Therefore, in this optimistic case, the emission could be easily discovered by several instruments at various wavelengths.
More generally, in the fiducial model non-thermal emission from solitary BHs at \(R\sim 10^{-2}\) pc can be discovered by ZTF, Vera Rubin, XRT, Chandra, XMM-Newton, JWST, and WISE as unusually intense and red-colored variability, while thermal emission from solitary BHs at \(R\sim 1\) pc can be discovered by HiZ-GUNDAM and Einstein Probe.
## 4 Conclusions
In this paper we have evaluated the properties of breakout emission from shocks emerging around jets launched
from accreting and spinning solitary BHs embedded in AGN disks, and discussed the observability of such emission. In our model, accretion, and hence jet formation, is episodic, since gas around the BHs is evacuated by the jets; once gas is resupplied, the jet is expected to collide with the gas. Due to the formation of shocks at collision, thermal emission produced by the shocked gas and non-thermal emission produced by accelerated electrons are expected. Our main results are summarized as follows:
1. Thermal and non-thermal emission are bright in X-ray bands and in infrared to gamma-ray bands, respectively.
2. Breakout emission from solitary BHs is harder to observe than that from merger remnants because (1) the non-thermal and thermal emission are not as bright, and (2) the burst is rare and there is no GW trigger. Hence, catching it requires monitoring a large number of AGNs. However, we can still identify breakout emission from solitary BHs as peculiar flares in nearby AGNs, characterized by broad-band non-thermal emission (from the \(\gamma\)-rays to the IR) with superimposed thermal emission, and a duration that depends on the distance of the BH from the central SMBH, varying between \(10^{3}\) and \(10^{6}\) s for distances \(R\sim 0.01\)-\(1\) pc.
3. Non-thermal emission from solitary BHs at \(R~{}\sim~{}0.01\) pc from the SMBH with duration of \(\sim 10^{3}\) s can be discovered by infrared, optical, and X-ray telescopes as unusually red-colored variability of less luminous AGNs. Additionally, thermal emission from solitary BHs at \(R\sim 1\) pc with duration of \(\sim 10^{6}\) s can be discovered by current and future X-ray telescopes.
We find that the observability of the breakout emission from solitary BHs in AGN disks is strongly influenced by accretion processes in super-Eddington regimes. To discover signatures from the solitary BHs, the accretion processes and plasma physics should be better understood through numerical simulations. Conversely, if the emission is discovered but its properties differ from what we predict, this would improve our understanding of the underlying accretion processes and plasma physics.
This work was financially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant Numbers JP21J00794 (HT) and 22K14028 (S.S.K.). S.S.K. was supported by the Tohoku Initiative for Fostering Global Researchers for Interdisciplinary Sciences (TI-FRIS) of MEXT's Strategic Professional Development Program for Young Researchers. Z.H. was supported by NASA grant NNX15AB19G and NSF grants AST-2006176 and AST-1715661. R.P. acknowledges support by NSF award AST-2006839. I.B. acknowledges the support of the Alfred P. Sloan Foundation and NSF grants PHY-1911796 and PHY-2110060.
## Appendix
TELESCOPES
We list the name and properties of telescopes in Table 1. |
2302.03145 | Techniques to Improve Neural Math Word Problem Solvers | Developing automatic Math Word Problem (MWP) solvers is a challenging task
that demands the ability of understanding and mathematical reasoning over the
natural language. Recent neural-based approaches mainly encode the problem text
using a language model and decode a mathematical expression over quantities and
operators iteratively. Note that the problem text of a MWP consists of a context
part and a question part; a recent work finds these neural solvers may only
perform shallow pattern matching between the context text and the golden
expression, where question text is not well used. Meanwhile, existing decoding
processes fail to enforce the mathematical laws into the design, where the
representations for mathematical equivalent expressions are different. To
address these two issues, we propose a new encoder-decoder architecture that
fully leverages the question text and preserves step-wise commutative law.
Besides generating quantity embeddings, our encoder further encodes the
question text and uses it to guide the decoding process. At each step, our
decoder uses Deep Sets to compute expression representations so that these
embeddings are invariant under any permutation of quantities. Experiments on
four established benchmarks demonstrate that our framework outperforms
state-of-the-art neural MWP solvers, showing the effectiveness of our
techniques. We also conduct a detailed analysis of the results to show the
limitations of our approach and further discuss the potential future work. Code
is available at https://github.com/sophistz/Question-Aware-Deductive-MWP. | Youyuan Zhang | 2023-02-06T22:41:51Z | http://arxiv.org/abs/2302.03145v1 | # Techniques to Improve Neural Math Word Problem Solvers
###### Abstract
Developing automatic Math Word Problem (MWP) solvers is a challenging task that demands the ability of understanding and mathematical reasoning over the natural language. Recent neural-based approaches mainly encode the problem text using a language model and decode a mathematical expression over quantities and operators iteratively. Note the problem text of a MWP consists of a context part and a question part, a recent work finds these neural solvers may only perform shallow pattern matching between the context text and the golden expression, where question text is not well used. Meanwhile, existing decoding processes fail to enforce the mathematical laws into the design, where the representations for mathematical equivalent expressions are different. To address these two issues, we propose a new encoder-decoder architecture that fully leverages the question text and preserves step-wise commutative law. Besides generating quantity embeddings, our encoder further encodes the question text and uses it to guide the decoding process. At each step, our decoder uses Deep Sets to compute expression representations so that these embeddings are invariant under any permutation of quantities. Experiments on four established benchmarks demonstrate that our framework outperforms state-of-the-art neural MWP solvers, showing the effectiveness of our techniques. We also conduct a detailed analysis of the results to show the limitations of our approach and further discuss the potential future work1.
Footnote 1: Code is available at [https://github.com/sophistz/Question-Aware-Deductive-MWP](https://github.com/sophistz/Question-Aware-Deductive-MWP).
## 1 Introduction
Math word problem (MWP) solving is the task of answering a mathematical question described in natural language (Bobrow, 1964). Figure 1 shows an example of a MWP. The input is composed of a context text and a question text, where the context part depicts several narratives with some quantities, and the question part defines the goal. The output is a numerical value as the answer. To obtain such an answer, one needs to infer a mathematical expression that specifies the operators over the quantities in the problem. To solve a MWP, the machine is required to not only understand the description and query of the problem but also perform mathematical reasoning over the text.
Over the last few years, with the presence of deep learning models in the field of NLP (Chung et al., 2014; Vaswani et al., 2017; Devlin et al., 2019), a growing number of neural-based approaches are proposed for effectively solving MWPs (Faldu et al., 2021; Lan et al., 2022; Sundaram et al., 2022). Inspired by machine translation research (Sutskever et al., 2014), a line of research (Wang et al., 2017; Amini et al., 2019) formulates the task as a sequence-to-sequence (Seq2Seq) translation problem from natural language to mathematical expression. They use an encoder to embed the problem text using a language model (e.g., GRU (Chung et al., 2014), BERT (Devlin et al., 2019)) and then employ a decoder to generate an expression sequentially. Later works (Xie and Sun, 2019; Zhang et al., 2020; Cao et al., 2021; Jie et al., 2022) develop Seq2Tree or Seq2DAG neural models, where the decoders produce operators and quantities based on a specific structure iteratively. Despite the recent progress,
Figure 1: An example of a MWP.
these neural solvers are still miles away from building robust representations and effective solutions like humans. Some work Patel et al. (2021) finds existing neural-based solvers that do not have access to the question part asked in the problem can still solve a large fraction of MWPs, indicating these solvers may only perform shallow reasoning or pattern matching rather than fully understanding and reasoning over the mathematical text. Additionally, most neural solvers do not preserve the mathematical laws (e.g., commutative law) when decoding an expression Faldu et al. (2021), where the representations for mathematical equivalent expressions are not the same. For example, the hidden representation for the expression \((1-2)+3\) is different from that of the expression \((-2+1)+3\), although they are computing the same quantity.
To mitigate the two issues mentioned above, we propose a novel neural framework that fully exploits the question part of a problem and enforces step-wise commutative law. Specifically, after feeding both the context and the question into an encoder, we obtain the representation of the question text and then regard it as a query to guide the decoding process. This mimics the problem-solving procedure like humans: we often seek the answer to a problem conditioned on our goal. When generating a new expression from quantities and operators, we apply Deep Sets Zaheer et al. (2017) for commutative operators like addition and multiplication, and MLPs for non-commutative operators like additive inverse and multiplicative inverse. Such a design preserves commutative law at each step so that it is more robust to make predictions. To evaluate our proposed approach, we experiment on four multilingual MWP benchmarks (MAWPS Koncel-Kedziorski et al. (2016), Math23k Wang et al. (2017), SVAMPS Patel et al. (2021) and MathQA Amini et al. (2019)), showing our framework outperforms state-of-the-art (SOTA) neural-based solvers. Detailed analysis of the results is also conducted to discuss the limitations and potential improvement.
Our contributions are summarized as follows:
* We propose a novel encoder-decoder framework that fully leverages the question part of a MWP and enforces step-wise commutative law to improve MWP solving.
* Our experimental results on four established benchmarks across two languages show that our model outperforms SOTA neural-based MWP solvers. We further analyze the experimental results and discuss the limitations of our techniques and potential future work.
## 2 Related Work
Research on solving MWP has a long history back to the 1960s Bobrow (1964). Most early works Fletcher (1985); Dellarosa (1986) employ a rule-based method to convert text input to an underlying pre-defined schema. While this slot-filling-like approach is robust to handle irrelevant information, the rules are hard to exhaustively capture the myriad nuances of language and thus do not generalize well across varying language styles.
With the development of machine learning, a stream of research Kushman et al. (2014); Zhou et al. (2015); Mitra and Baral (2016); Roy and Roth (2018) leverages statistical machine learning techniques into MWP solving. These techniques score several expressions within an optimization-based score framework and subsequently arrive at the correct mathematical model for the given text. However, similar to the rule-based methods, these methods discover the expression templates from the training data and do not generalize to the unseen templates during the inference time.
Recently, learning-based models have become a new trend in solving MWPs Faldu et al. (2021); Lan et al. (2022); Sundaram et al. (2022). These neural solvers mainly first encode the problem text into the latent space and then decode the expression using operators and quantities iteratively. Wang et al. (2017) is the seminal work that adopts recurrent neural networks as the encoder and decoder to generate target equations in a Seq2Seq fashion. Following works enhance the Seq2Seq model with reinforcement learning Huang et al. (2018), multi-head attention Li et al. (2019), and large language models Tan et al. (2021). However, although humans write the expression from left to right in a sequence, the mathematical expression has a structure indeed. Therefore, Xie and Sun (2019) proposes a goal-driven tree-structured model to generate the expression tree during the decoding process. This Seq2Tree approach significantly improved the performance over the traditional Seq2Seq approaches and became the majority approach of neural MWP solvers. Later works improve the Seq2Tree model by leveraging the semantic information and quantity relations Zhang
et al., 2020) or external knowledge such as syntactic dependency Lin et al. (2021) and commonsense knowledge Wu et al. (2020).
Although recent neural-based approaches show some promising results in MWP solving, the SOTA neural-based solver is not able to compete with humans, even on the elementary MWPs. Additionally, it is still unclear whether the neural solvers actually understand the problem and perform the reasoning procedure like humans Sundaram et al. (2022). Patel et al. (2021) discovers that the neural-based approaches can still generate the expression and solution even if we discard the question part of a problem. They suggest that the neural networks may only learn the shallow heuristic to generate the expression rather than perform the actual mathematical reasoning. Additionally, most of the neural solvers do not enforce the mathematical laws (i.e., the commutative, associative, and distributive laws) into the design Faldu et al. (2021), where the representation for the mathematical equivalent expressions are different. Although these equivalent expressions compute the same quantity, the neural solvers would bias toward generating a specific expression, making the prediction not robust. To address this issue, some works focus on generating diverse mathematical equivalent expressions during encoding, where they either apply multiple decoders to generate multiple expressions Zhang et al. (2020); Shen and Jin (2020) or enhance the datasets with more golden expressions Yang et al. (2022). However, the improvement of these methods is very limited, and the issue has still remained.
## 3 Problem Formulation
A math word problem can be described as a set of words in a natural language containing \(l\) words \(\mathcal{S}=\{w_{1},w_{2},...,w_{l}\}\). Given the set of quantities \(\mathcal{Q}=\{q_{1},q_{2},...,q_{n}\}\subset\mathcal{S}\), a fixed set of constants \(\mathcal{C}=\{c_{1},c_{2},...,c_{m}\}\), and a fixed set of binary operators \(\mathcal{O}=\{o_{1},o_{2},...,o_{k}\}\), the model aims to generate a \(T\)-step mathematical expression list which leads to the final answer. The expression list can be formulated as \(\mathcal{E}=\{e_{1},e_{2},...,e_{T}\}\), where \(e_{t}=(e_{i},e_{j},o_{t})\) such that \(i<t\) or \(e_{i}\in\mathcal{Q}\), \(j<t\) or \(e_{j}\in\mathcal{Q}\), and \(o_{t}\in\mathcal{O}\). \(e_{t}\) denotes the expression of applying the binary operator \(o_{t}\) over quantities or previously generated intermediate mathematical expressions. The final mathematical expression for the problem is \(e_{T}\).
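For concreteness, the expression list can be represented with a small data structure; the problem and quantities below are hypothetical and only illustrate the formalism:

```python
import operator

# Sketch: a hypothetical MWP whose answer is (3 + 5) * 2, with quantities
# Q = [3, 5, 2]. Each step e_t = (left, right, op) refers either to a quantity
# ("q0", "q1", ...) or to an earlier intermediate expression ("e1", "e2", ...).
quantities = [3, 5, 2]
expression_list = [
    ("q0", "q1", "+"),   # e1 = 3 + 5
    ("e1", "q2", "*"),   # e2 = e1 * 2, the final expression e_T
]

ops = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def evaluate(quantities, expression_list):
    values = {}
    for t, (a, b, op) in enumerate(expression_list, start=1):
        left = quantities[int(a[1:])] if a.startswith("q") else values[a]
        right = quantities[int(b[1:])] if b.startswith("q") else values[b]
        values[f"e{t}"] = ops[op](left, right)
    return values[f"e{len(expression_list)}"]

print(evaluate(quantities, expression_list))   # 16
```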
## 4 Methodology
We propose a novel encoder-decoder framework that fuses the knowledge of question part information into the decoding process and preserves step-wise invariance for mathematical-equivalent expressions. The overview of our model is shown in figure 2.
Our encoder first uses a language model to get the representation for each quantity in the problem text and the representation for the whole question text. Given both quantity and question embeddings, our decoder follows a bottom-up manner that generates the expression iteratively. At each step, the decoder generates the mathematical expression conditioned on the question embedding, using the question text to guide the decoding process. More details of our encoder and decoder are in the following subsections.
### Encoder
A language model is first applied to the raw problem text to acquire the representations of the quantities that appeared in the text. In specific, the quantities in the text are converted into the token "\(\boldsymbol{<}quant\boldsymbol{>}\)" and the converted text is tokenized and fed into the language model. In practice, we use RoBERTa Liu et al. (2019); Cui et al. (2021) as our encoder to generate the representation \(h_{q}\) for each quantity \(q\).
Besides generating the representations for the quantities in the text, we also produce the representation for the question text. Note that the question text may not be defined explicitly in some MWPs, so we define the question part \(\mathcal{S}_{qn}\subset\mathcal{S}\) as the last complete sentence separated by a full stop of the problem text. To obtain the question representation \(h_{qn}\), a mean pooling is applied to all embeddings of the question tokens.
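A minimal sketch of this encoding step is given below; it follows the description above but is not the released implementation, and the example text, the tail-pooling approximation for the question span, and the use of roberta-base are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
tokenizer.add_tokens(["<quant>"])
encoder = AutoModel.from_pretrained("roberta-base")
encoder.resize_token_embeddings(len(tokenizer))

# Hypothetical problem text with quantities already replaced by "<quant>".
text = "Tom has <quant> apples and buys <quant> more. How many apples does Tom have now?"
question = "How many apples does Tom have now?"

enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**enc).last_hidden_state[0]       # (seq_len, 768)

# One embedding h_q per "<quant>" token.
quant_id = tokenizer.convert_tokens_to_ids("<quant>")
h_q = hidden[enc["input_ids"][0] == quant_id]          # (num_quantities, 768)

# Question embedding h_qn: mean-pool the question tokens. Since the question is
# the last sentence, we approximate its span by its token count (an assumption;
# an offset-mapping lookup would be more precise).
n_question_tokens = len(tokenizer(question, add_special_tokens=False)["input_ids"])
h_qn = hidden[-1 - n_question_tokens:-1].mean(dim=0)   # skip the trailing </s>

print(h_q.shape, h_qn.shape)                           # e.g. (2, 768) and (768,)
```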
### Decoder
Our decoder consists of three components: a constructor, a scorer, and a rationalizer.
#### 4.2.1 Constructor
Given the embeddings of quantities, our constructor first enumerates all the possible combinations of the quantities and operators at each step. Each combination is a candidate expression for the current step, and we choose one of them as the output expression at this step. To compute the representation for each candidate, the constructor uses
a permutation-invariant function for commutative operators like addition and multiplication, and a nonlinear transformation for non-commutative operators like additive inverse and multiplicative inverse. In our implementation, we use Deep Sets for commutative binary operators and MLPs for non-commutative unary operators. An illustration of our constructor is shown in figure 3. To compute the representation of the expression \(e=3-5\), we first apply a MLP on the quantity embedding \(h_{q=5}\) to obtain the representation \(h_{q=-5}\) and then use Deep Sets (Zaheer et al., 2017) on the two quantity embeddings:
\[h_{q=-5}=\text{MLP}_{\text{add\_inv}}(h_{q=5}) \tag{1}\]
\[h_{e=3-5}=\text{MLP}_{\text{add}}^{2}\left(\sum_{v=3,-5}\text{ MLP}_{\text{add}}^{1}(h_{q=v})\right) \tag{2}\]
It should be stressed that the representations of candidate expressions are invariant under any permutation of the quantities (e.g., \(3-5\) and \(-5+3\), \(3\div 5\) and \(1/5\times 3\)), so our constructor guarantees the step-wise commutative law and unifies the representations of expressions using the \(+\) and \(-\) operators or the \(\times\) and \(\div\) operators.
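A minimal PyTorch sketch of Eqs. (1)-(2) follows; layer depths and hidden sizes are illustrative assumptions and do not reflect the released implementation:

```python
# Sketch of the constructor: an MLP for the unary additive inverse (Eq. (1))
# and a Deep Sets block (sum pooling between two MLPs) for addition (Eq. (2)).
import torch
import torch.nn as nn

D = 768  # embedding size produced by the encoder

def mlp():
    return nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D))

mlp_add_inv = mlp()                   # Eq. (1): h_{q=-5} = MLP_add_inv(h_{q=5})
mlp_add_1, mlp_add_2 = mlp(), mlp()   # Eq. (2): Deep Sets "phi" and "rho"

def represent_subtraction(h_a, h_b):
    """Representation of a - b, computed as the set {a, -b} under addition."""
    h_neg_b = mlp_add_inv(h_b)
    pooled = mlp_add_1(h_a) + mlp_add_1(h_neg_b)   # permutation-invariant sum
    return mlp_add_2(pooled)

h3, h5 = torch.randn(D), torch.randn(D)            # stand-ins for h_{q=3}, h_{q=5}
h_3_minus_5 = represent_subtraction(h3, h5)
h_neg5_plus_3 = mlp_add_2(mlp_add_1(mlp_add_inv(h5)) + mlp_add_1(h3))
print(torch.allclose(h_3_minus_5, h_neg5_plus_3))   # True: operand order is irrelevant
```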
#### 4.2.2 Scorer
With the representations of all candidate expressions generated from our constructor, our scorer scores each of them to decide which expression should be generated at the current step and whether it is the final expression. Figure 4 shows the architecture of our scorer.
Our scorer consists of three parts: a variable scorer, an expression scorer, and a stopping scorer. The variable scorer and expression scorer compute a continuation score, which refers to the score of choosing the expression as an intermediate expression. The stopper computes the termination score, indicating the score of choosing the expression as a final result expression. Specifically, the variable scorer takes the representations of the two quantities of the expression and computes the variable score \(s_{\text{var}}\), representing the probability that quanti
Figure 4: The architecture of our scorer.
Figure 3: The architecture of our constructor.
Figure 2: The overview of our framework. Given the quantity embeddings and question embedding from our encoder, our decoder generates the expression \(e=3+5\) at the first step.
ties \(q_{i},q_{j}\) should be used at the current step:
\[s_{\text{var}}=\text{MLP}_{\text{var}}(h_{q=q_{i}})+\text{MLP}_{\text{var}}(h_{q =q_{j}}) \tag{3}\]
The expression scorer takes the representation of the candidate expression \(h_{e}\) and computes the expression score \(s_{\text{expr}}\) using a MLP:
\[s_{\text{expr}}=\text{MLP}_{\text{epxr}}(h_{e}) \tag{4}\]
To determine whether the decoding process should be stopped, the stopping scorer takes each representation of candidate expression \(h_{e}\) and question \(h_{qn}\) as input and computes the stopping score \(s_{\text{stop}}\) using a MLP and a GRU cell Chung et al. (2014):
\[s_{\text{stop}}=\text{GRU}_{\text{stop}}\left(\text{MLP}_{\text{stop}}(h_{e}), h_{qn}\right) \tag{5}\]
Finally, the score for the candidate expression \(s_{e}\) can be obtained by the sum of the three parts:
\[s_{e}=s_{\text{var}}+s_{\text{expr}}+s_{\text{stop}} \tag{6}\]
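The scoring heads of Eqs. (3)-(6) can be sketched as follows; mapping the GRU state of Eq. (5) to a scalar with a final linear layer is an assumption made here for concreteness:

```python
# Sketch of the scorer: variable, expression, and stopping scores for one
# candidate expression. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

D = 768

def mlp(dout):
    return nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, dout))

mlp_var, mlp_expr, mlp_stop = mlp(1), mlp(1), mlp(D)
gru_stop = nn.GRUCell(input_size=D, hidden_size=D)
to_scalar = nn.Linear(D, 1)   # assumption: reduce the GRU state to a score

def score(h_qi, h_qj, h_e, h_qn):
    s_var = mlp_var(h_qi) + mlp_var(h_qj)                 # Eq. (3)
    s_expr = mlp_expr(h_e)                                # Eq. (4)
    s_stop = to_scalar(gru_stop(mlp_stop(h_e).unsqueeze(0),
                                h_qn.unsqueeze(0))).squeeze(0)   # Eq. (5)
    return s_var + s_expr + s_stop                        # Eq. (6)

h_qi, h_qj, h_e, h_qn = (torch.randn(D) for _ in range(4))
print(score(h_qi, h_qj, h_e, h_qn))                       # a single score s_e
```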
#### 4.2.3 Rationalizer
Following Jie et al. (2022), we also adapt a rationalizer to update the representations of all quantities and intermediate expressions at the end of each step. This module is crucial because if the representations are not updated, those expressions that were initially highly ranked would always be preferred. For example, the expression \(e=3+5\) is scored the highest in the first step, it is likely scored very high in the latter step if its representation remains the same.
Figure 5 shows the architecture of our rationalizer. In our implementation, we rationalize the quantity representations using the newly chosen expression and question representations using two GRU cells:
\[h^{\prime}_{q}=\text{GRU}^{2}_{\text{rat}}\left(h_{qn},\text{GRU}^{1}_{\text{rat}}(h_{q},h_{e})\right) \tag{7}\]
The first GRU cell takes the quantity representations as input and the new expression representation as the hidden state. The second GRU cell takes the representations from the first GRU cell as input and the question representation as the hidden state. The final output is the updated representation for each quantity.
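A corresponding sketch of Eq. (7), with the two GRU cells wired as described above (batching details are assumptions):

```python
# Sketch of the rationalizer: update one quantity embedding after a step.
import torch
import torch.nn as nn

D = 768
gru_rat_1 = nn.GRUCell(input_size=D, hidden_size=D)
gru_rat_2 = nn.GRUCell(input_size=D, hidden_size=D)

def rationalize(h_q, h_e, h_qn):
    """Update h_q given the newly chosen expression h_e and question h_qn."""
    step1 = gru_rat_1(h_q.unsqueeze(0), h_e.unsqueeze(0))   # input h_q, hidden h_e
    step2 = gru_rat_2(step1, h_qn.unsqueeze(0))             # input step1, hidden h_qn
    return step2.squeeze(0)

h_q, h_e, h_qn = torch.randn(D), torch.randn(D), torch.randn(D)
print(rationalize(h_q, h_e, h_qn).shape)   # torch.Size([768])
```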
### Training and Inference
We adopt the teacher-forcing strategy Williams and Zipser (1989) to guide the model with the golden expression at each step during training. Denote all the learnable parameters in our framework as \(\theta\), our loss is defined as:
\[\mathcal{L}_{\theta}=\sum_{t=1}^{T}\left(\max_{e\in\text{candidates}^{t}}s_{e} -s_{e_{t}^{*}}\right) \tag{8}\]
where \(T\) is the total step to generate the ground truth expression, candidates\({}^{t}\) represents all the candidate expressions at step \(t\), and \(s_{e_{t}^{*}}\) is the ground truth expression at step \(t\). For each step, this loss minimizes the gap between the scores of the gold expression and the expression with the highest score, encouraging our framework to generate the highest score for the ground truth expression.
During inference, we iteratively choose the expression with the highest score at each step until \(t\) reaches a predefined maximum step \(T\). For the generated expressions \(e_{1},e_{2},\cdots,e_{T}\), the expression with the highest termination score is chosen as the output expression and its corresponding numerical answer is computed as the final answer.
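The objective of Eq. (8) amounts to the following computation over the per-step candidate scores (a sketch; tensor shapes and the toy values are assumptions):

```python
import torch

def decoding_loss(candidate_scores, gold_indices):
    """Eq. (8): sum over steps of (max candidate score - gold expression score).

    candidate_scores: list of 1-D tensors, one per step t, holding s_e for
                      every candidate expression enumerated at that step.
    gold_indices:     list of ints, the position of the ground-truth
                      expression e_t^* among the candidates of step t.
    """
    loss = torch.zeros(())
    for scores, gold in zip(candidate_scores, gold_indices):
        loss = loss + (scores.max() - scores[gold])
    return loss

# Toy example with T = 2 decoding steps.
step1 = torch.tensor([0.2, 1.5, -0.3])   # gold candidate is index 1
step2 = torch.tensor([2.0, 0.1])         # gold candidate is index 0
print(decoding_loss([step1, step2], [1, 0]))   # tensor(0.): gold is already the argmax
```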
## 5 Experiments
### Datasets
We conduct experiments on four established MWP benchmarks: MAWPS Koncel-Kedziorski et al. (2016), Math23k Wang et al. (2017), SVAMPS Patel et al. (2021), and MathQA Amini et al. (2019). Table 1 shows the statistics of these datasets. MAWPS and Math23k are commonly used in previous research and contain primary school math questions. MathQA and SVAMP are much more challenging. MathQA contains more complex GRE questions in many domains including physics, geometry, and probability, and therefore has a large portion of equations with more operations. SVAMP contains manually designed challenging questions created by applying variations over the problems
Figure 5: Architecture of rationalizer.
from MAWPS, which has lots of unrelated quantities and context text. See Appendix A.1 for problem samples from these four datasets.
### Evaluation Metric
We use the final value accuracy (Val Acc.) as our evaluation metric, indicating the accuracy for which our computed numerical answer from the equation equals the ground-truth answer. Note that there can be multiple mathematical expressions that lead to the same numerical answer (e.g., \((1+2)\times 3\) and \(1\times 3+2\times 3\)), so generating any of them is considered correct.
### Baselines
The baseline approaches are categorized into sequence-to-sequence (Seq2Seq) models and sequence-to-tree (Seq2Tree) models. We compare our proposed model against four Seq2Seq models and six Seq2Tree models on the four benchmarks. Among those Seq2Seq MWP solvers, **GroupAttn**(Li et al., 2019) proposes to design several types of attention mechanisms such as quantity-related attentions in the seq2seq model. **mBERT-LSTM**(Tan et al., 2021) uses multilingual BERT as the encoder and LSTM as the decoder. **BERT-BERT**(Lan et al., 2022) and **RoBERTa-RoBERTa**(Lan et al., 2022) employs BERT and RoBERTa as both encoder and decoder. Among those Seq2Tree models, **GTS**(Xie and Sun, 2019) is the seminal work that uses the bidirectional GRU to encode the problem text and decode the mathematical expression using a tree structure in a top-down manner. **Graph2Tree**(Zhang et al., 2020) enhances the encoder of GTS by modeling the quantity relationships and order information using a GCN. **BERT-Tree**(Liang et al., 2021) pre-trains BERT using 8 pre-training tasks to solve the number representation issue and uses it as the encoder. **RoBERTa-GTS** and **RoBERTa-Graph2Tree** replace the original encoder of GTS and Graph2Tree with RoBERTa. The most similar work of our approach is **RoBERTa-DeductiveReasoner**(Jie et al., 2022), which uses RoBERTa as the encoder and decodes the mathematical expression using a bottom-up approach. However, it encodes the problem text as a whole without any special attention to the question text and fails to preserve any mathematical law when computing the representation for each candidate expression.
### Implementation Details
For the English datasets MAWPS, MathQA, and SvAMP, we use RoBERTa (Liu et al., 2019) as the encoder. For the Chinese dataset Math23K, we use Chinese RoBERTa (Cui et al., 2021) as the encoder. The pre-trained RoBERTa models are initialized from HuggingFaces Transformers (Wolf et al., 2020). All MLPs have 2 hidden layers with 768 units each and use ReLU as the activation function. We use a batch size of \(30\) when training on the 4 datasets. On Math23k, MathQA, and SvAMP, we train our model for \(1,000\) epochs. On MAWPS, we train the model for \(100\) epochs. The Adam optimizer with a learning rate of \(2\times 10^{-5}\) is used to optimize our loss function. All experiments are run 3 times for each dataset on a cluster with 4 NVIDIA RTX-8000 GPUs. Following previous works (Lan et al., 2022; Jie et al., 2022), we report the 5-fold cross-validation accuracy on MAWPS and the test accuracy on Math23k, MathQA, and SvAMP. The results of the baseline approaches are taken from their original papers.
### Main Results
Table 2 shows the 5-fold cross-validation results for different models on the MAWPS dataset. Tables 3, 4, and 5 show the test accuracy comparison on the Math23k, MathQA, and SVAMP datasets, respectively. The results show that Seq2Tree models generally perform better than Seq2Seq models, where we conjecture this
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Dataset & \#Train & \#Valid & \#Test & Operations & Language \\ \hline MAWPS & 1,589 & 199 & 199 & + - \(\times\) + - & English \\ Math23k & 21,162 & 1,000 & 1,000 & + - \(\times\) + - & Chinese \\ MathQA & 16,191 & 2,411 & 1,605 & + - \(\times\) + - & English \\ SvAMP & 3,138 & - & 1,000 & + - \(\times\) + & English \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Class** & **Model** & **Val Acc.** \\ \hline
\multirow{4}{*}{Seq2Seq} & GroupAttn & 76.1 \\
 & Transformer & 85.6 \\
 & BERT-BERT & 86.9 \\
 & RoBERTa-RoBERTa & 88.4 \\ \hline
\multirow{5}{*}{Seq2Tree} & GTS & 82.6 \\
 & Graph2Tree & 85.6 \\
 & RoBERTa-GTS & 88.5 \\
 & RoBERTa-Graph2Tree & 88.7 \\
 & RoBERTa-DeductReasoner & 92.0 \(\pm\) 0.20 \\ \hline
\multicolumn{2}{l}{**Ours**} & **92.9 \(\pm\) 0.14** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: 5-fold cross-validation result comparison on MAWPS.
is because the tree decoder can better handle the structure of a mathematical expression than the sequence decoder. Meanwhile, the performances of the models like GTS and Graph2Tree are improved significantly by using pre-trained large language models like BERT or RoBERTa as the encoder, indicating the representation ability of large language models is useful in MWP solving.
The test accuracies of our model on MAWPS, Math23k, MathQA, and SVAMP are \(92.9\%\), \(85.3\%\), \(81.1\%\), and \(44.4\%\) respectively. On all the benchmarks, our model achieves state-of-the-art performances, which surpasses the best baseline model RoBERTa-DeductReasoner by \(0.9\%\), \(0.2\%\), \(3.9\%\), and \(0.4\%\) respectively. Most notably, our model improves the vanilla RoBERTa-DeductReasoner by \(3.9\%\) on the MathQA dataset, which is the largest and hardest dataset of the four. These significant improvements show that both leveraging the question text information and enforcing mathematical laws into the design are useful in MWP solving. Conclusively, the comparisons well demonstrate the effectiveness of our proposed techniques to improve neural MWP solvers.
### Ablation Study
To analyze our techniques in detail, we further perform an ablation study to explore the effectiveness of our two proposed techniques. We replace our constructor with the module in RoBERTa-DeductReasoner (Jie et al., 2022) as the model without preserving commutative law and discard the question representation in our decoder as the model without question representation. The results are shown in Table 6. Compared with the model without preserving the step-wise commutative law, our complete model improves the test accuracies by \(0.3\%\), \(1.2\%\), and \(0.2\%\) on MAWPS, MathQA, and SVAMP, respectively. Exceptionally, the accuracy decreases by \(0.5\%\) on Math23k. We conjecture this is because the mathematical expression in MathQA is relatively complex, so only preserving the step-wise commutative law for the expressions is not good enough. On the other hand, the complete model improves the test accuracies of the one without question embedding by \(0.6\%\), \(0.7\%\), \(2.9\%\), and \(0.6\%\) on MAWPS, Math23k, MathQA, and SVAMP. These results demonstrate that both enforcing mathematical laws and explicitly modeling the question text to guide the decoding procedure with the question representation can generally improve neural MWP solvers. The performance of the model can be further improved by applying both techniques. By comparing these two strategies, we can observe that the exploitation of question part information provides a greater improvement to the accuracy than preserving the step-wise commutative law for mathematical expressions.
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Dataset** & **Model** & **Val Acc.** \\ \hline
\multirow{4}{*}{MAWPS} & RoBERTa-DeductReasoner & 92.0 \\
 & **Ours** & **92.9** \\
 & - w/o preserving commutative law & 92.6 \\
 & - w/o question representation & 92.3 \\ \hline
\multirow{4}{*}{Math23k} & RoBERTa-DeductReasoner & 85.1 \\
 & **Ours** & 85.3 \\
 & - w/o preserving commutative law & **85.8** \\
 & - w/o question representation & 84.6 \\ \hline
\multirow{4}{*}{MathQA} & RoBERTa-DeductReasoner & 77.2 \\
 & **Ours** & **81.0** \\
 & - w/o preserving commutative law & 79.8 \\
 & - w/o question representation & 78.1 \\ \hline
\multirow{4}{*}{SVAMP} & RoBERTa-DeductReasoner & 44.0 \\
 & **Ours** & **44.4** \\
 & - w/o preserving commutative law & 44.2 \\
 & - w/o question representation & 43.8 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Ablation study of our model. Our model is compared with the one without question part information and the one without preserving commutative law.
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Class** & **Model** & **Val Acc.** \\ \hline
\multirow{2}{*}{Seq2Seq} & GroupAttn & 69.5 \\
 & mBERT-LSTM & 75.1 \\ \hline
\multirow{3}{*}{Seq2Tree} & GTS & 75.6 \\
 & Graph2Tree & 77.4 \\
 & RoBERTa-DeductReasoner & 85.1 \(\pm\) 0.24 \\ \hline
\multicolumn{2}{l}{**Ours**} & **85.3 \(\pm\) 0.21** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Test accuracy comparison on Math23k.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Class** & **Model** & **Val Acc.** \\ \hline Seq2Seq & BERT-LSTM & 77.1 \\ \hline \multirow{3}{*}{Seq2Tree} & Graph2Tree & 69.5 \\ & BERT-Tree & 73.8 \\ \cline{1-1} & RoBERTa-DeductReasoner & 77.2 \(\pm\) 0.11 \\ \hline \multicolumn{3}{c}{**Ours**} & **81.1 \(\pm\) 0.13** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test accuracy comparison on MathQA.
Table 5: Test accuracy comparison on SVAMP.
## 6 Discussion
### Limitation
Our current model provides a technique to enforce the commutative law and unify representations for a group of operators like addition and subtraction or multiplication and division at each step. For example, the representations \(h_{e=1+2}=h_{e=2+1}\) and \(h_{e=2-1}=h_{e=2+(-1)}=h_{e=(-1)+2}\). However, the commutative law is not preserved when computing a complex expression that contains more than a single operator. For example, \(h_{e=(1+2)+3}\neq h_{e=(3+2)+1}\), where different orders of the operation result in a different representation. Moreover, other mathematic laws like distributive law and associative law also fail to be preserved. For example, \(h_{e=(1+2)\times 3}\neq h_{e=1\times 3+2\times 3}\).
### Future Work
In the future, we plan to enforce more mathematical laws in neural MWP solvers and build an invariant representation for all mathematically equivalent expressions. This idea could be implemented by designing a unified format to represent all mathematical expressions and compute the representations based on such a form. Specifically, we could represent all the expressions in a fine-grained format with no parentheses. For example, to compute the representation of \((1+2)\times 3\), instead of generating the embeddings for (1 + 2) first and then \((1+2)\times 3\), we could first convert the expression to \(1\times 3+2\times 3\) and then compute the representation by applying some permutation invariant functions on such a form. Therefore, all mathematical expressions would have the same embeddings and the model can also preserve the distributive law and the associative law.
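As an illustration of such a parenthesis-free normal form, an off-the-shelf computer-algebra routine already maps distributively equivalent expressions to one canonical shape (this only illustrates the target representation, not a planned implementation):

```python
# Sketch: expand expressions into a parenthesis-free sum-of-products form,
# so that distributively equivalent expressions share one canonical shape.
import sympy as sp

a, b, c = sp.symbols("a b c")
e1 = sp.expand((a + b) * c)      # a*c + b*c
e2 = sp.expand(a * c + b * c)    # a*c + b*c
print(e1, e2, e1 == e2)          # identical canonical forms -> True
```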
### Conclusion
Existing neural math word solvers mostly fail to fully leverage the question text or preserve any mathematical laws. In this work, we propose a new encoder-decoder framework that applies two techniques to address these issues: (1) our encoder generates an embedding for the question text and uses it to guide the decoding process. (2) our decoder applies Deep Sets to compute the representations of candidate expressions to enforce step-wise commutative law. Experiments on four standard MWP benchmarks show that these two techniques could improve the performance of neural MWP solvers and make our model achieve state-of-the-art performance.
### Acknowledgement
I would like to thank Mr. Zhaoyu Li, Prof. Xujie Si and Siva Reddy for their assistance in this work. Special thanks to Mr. Zhaoyu Li who surveyed related publications and proposed the idea.
|
2308.16275 | Quantitative Toolchain Assurance | The software bill of materials (SBOM) concept aims to include more
information about a software build such as copyrights, dependencies and
security references. But SBOM lacks visibility into the process for building a
package. Efforts such as Supply-chain Levels for Software Artifacts (SLSA) try
to remedy this by focusing on the quality of the build process. But they lack
quantitative assessment of that quality. They are purely qualitative. A new
form of assurance case and new technique for structuring it, called process
reduction, are presented. An assurance case for a toolchain is quantitative and
when structured as a process reduction can measure the strength of the
toolchain via the strength of the reduction. An example is given for a simple
toolchain. | Dennis Volpano, Drew Malzahn, Andrew Pareles, Mark Thober | 2023-08-30T19:05:03Z | http://arxiv.org/abs/2308.16275v1 | # Quantitative Toolchain Assurance
###### Abstract.
The software bill of materials (SBOM) concept aims to include more information about a software build such as copyrights, dependencies and security references (Dennis, 2011). But SBOM lacks visibility into the process for building a package. Efforts such as SLSA (Selena et al., 2011) try to remedy this by focusing on the quality of the build process. But they lack quantitative assessment of that quality. They are purely qualitative. A new form of assurance case and new technique for structuring it, called _process reduction_, are presented. An assurance case for a toolchain is quantitative and when structured as a process reduction can measure the strength of the toolchain via the strength of the reduction. An example is given for a simple toolchain.
assurance case, reducibility, Bayesian inference, Beta distribution
## 1. Introduction
A software toolchain typically contains tools and processes for continuous integration and development. Tools might include automated testing and/or fuzzing. Code may be analyzed for undefined behaviors statically, dynamically, or both. At some point the analyzed executable is packaged and perhaps signed for deployment. The software bill of materials (SBOM) concept basically extends packaging to include more information about a build. Through standards like Software Package Data Exchange (SPDX), an SBOM affords greater visibility into the software's composition through metadata covering libraries used, copyrights, dependencies and security references (Dennis, 2011).
SBOM lacks visibility into the process used to build a package. Supply-chain Levels for Software Artifacts (SLSA) (Selena et al., 2011) tries to remedy this by imposing requirements on the process that warrant giving it a security level (0-3). Level 1 requires provenance be established, Level 2 requires use of a hosted build platform and code signing, and Level 3 requires the build platform be hardened with strong tamper protection. The levels are qualitative. They are assigned without any quantitative assessment of processes or tools. For instance, how hard is it for an intruder or impostor to sign untrusted code on behalf of a trusted party? How strong is tamper protection and just how "hardened" is a platform that uses it? In general, the strength of static analyzers like code sanitizers should be quantified as well. Merely noting their use to justify assigning a security level is insufficient if they can falsely report security. More important is knowing how often false reports have occurred.
An assurance case is a structured argument with evidence used to justify a claim that a system will perform acceptably in a given operating environment (Mayer, 2011). In industry, these cases are typically organized around the architecture of a system and used to establish system safety (Dennis, 2011). Lately they have also been used to argue for software pipeline security (Brands, 2011) but the assurance cases are qualitative just as they are in SLSA. A new form of assurance case and a new technique for structuring it, called _process reduction_, are introduced. An assurance case for a toolchain is quantitative and when structured as a process reduction can measure the strength of the toolchain via the strength of the reduction.
## 2. Process Reductions
Software toolchains consist of processes following best practices for secure software development, as articulated in guidance like NIST SP800-218 (Dennis, 2011). For example, there is a process for generating code signing key pairs on a device. It has its own guidance that states it must be "sufficiently protected, as the security of this process relies upon the protection of the private key" (Brands, 2011). The statement clearly identifies a process whose security is of concern but how does one know the steps they have taken are enough to ensure security? What exactly is the definition of security? To address these questions, suppose security is the property, say (\(A\)), that only the code signer can ever (A)uthenticate as the common name in the signer's certificate. Then one can begin to list hypotheses seemingly sufficient to imply the property:
1. (\(p1\)) Signer is issued an RSA public/private key pair
2. (\(p2\)) Factoring the product of two 1024-bit primes is hard
3. (\(p3\)) Signer's private key is known only to signer
Note (\(p2\)) is a critical assumption that must be made explicit if one expects to argue that the property holds. What makes it unique is that it is beyond the toolchain architect's control. An architect can introduce steps to increase the likelihood that (\(p1\)) and (\(p3\)) hold. For instance, for (\(p1\)), a RealID may be required to authenticate the signer to the certification authority, and for (\(p3\)), a strong PIN may be required to protect the private key. But similar steps cannot be taken for (\(p2\)). In fact, it may eventually be disproved.
Though it may seem that (\(p1\)) \(\land\) (\(p2\)) \(\land\) (\(p3\)) \(\Rightarrow\) (\(A\)) is true, it cannot be proved. Accessing a private key by say a smart card reader might be possible without knowing the PIN in the future through a new type of reader attack. So (\(A\)) could be false even when the antecedent holds. The implication can only be disproved. But the longer it goes without being disproved, the more widely it is accepted as being true (a more well-known example of this type of implication is the Church-Turing thesis (Chern, 2011)). Suppose then it is accepted as true, though unprovable. The contrapositive is
\[\neg(A)\Rightarrow\neg(p1)\lor\neg(p2)\lor\neg(p3)\]
If (\(p1\)) and (\(p3\)) are true then \(\neg(A)\Rightarrow\neg(p2)\). The complexity of falsifying (\(A\)) rests squarely on the complexity of falsifying (\(p2\)). We say that \(\neg(p2)\)_process reduces_ to \(\neg(A)\) in this case. Thus, it is as hard for someone other than the signer to authenticate as the
common name as it is to factor the product of two 1024-bit primes.1 Note the parallel with many-one reductions. If problem \(L\) many-one reduces to problem \(L^{\prime}\) then \(L^{\prime}\) is as hard as \(L\) (Kang et al., 2015). Unlike many-one reductions, process reductions aren't provable but do have strength.
Footnote 1: The same technique can be used to argue for the correctness of a cryptographic protocol where compromising the protocol should be as computationally hard as compromising its underlying cryptography.
### Process reduction strength
The preceding process reduction is ideal because falsifying (\(A\)) rests squarely on falsifying a hypothesis beyond the architect's control, namely (\(p2\)). It requires proving (\(p1\)) and (\(p3\)) true in order to eliminate them from the contrapositive. But neither is provable. Was it really the trusted signer who was issued the key pair and while the signer may protect the private key with a PIN, how do we know it wasn't disclosed? Thus (\(p1\)) and (\(p3\)) are treated as hypotheses with evidence that may or may not support them. The strength of the reduction \(\neg(p2)\)_process reduces_ to \(\neg(A)\) is measured in terms of probabilities that (\(p1\)) and (\(p3\)) are true. The probabilities can vary because evidence for or against the hypotheses is never final. So each hypothesis is associated with an independent Beta distributed random variable, a continuous variable on the interval \([0,1]\).
Beta distributions have three major advantages. First, a Beta distribution is a conjugate prior for the corresponding Binomial likelihood function in Bayes' rule. Second, applying Bayes' rule to update a prior based on evidence amounts to merely updating the shape hyperparameters of the prior, making Bayesian updates fast. Lastly, for independent Beta distributed random variables \(X\) and \(Y\), a Boolean logic of Beta distributed random variables \(\neg X\), \(X\wedge Y\) and \(X\lor Y\) can be defined with multiplicative and additive identities \(\mathrm{Beta}(1,0)\) and \(\mathrm{Beta}(0,1)\) respectively (Blei et al., 2016). New random variables can be created to capture logical relationships between hypotheses. For instance, \(p1\), \(p2\) and \(p3\) are random variables in the code signing example above. If random variable \(p1\wedge p3\) is associated with (\(A\)) then the strength of the reduction \(\neg(p2)\)_process reduces_ to \(\neg(A)\) is defined to be the mean \(\mathrm{E}[p1\wedge p3]\). Confidence in the strength is defined by the variance \(\mathrm{var}[p1\wedge p3]\).
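The following sketch illustrates both the conjugate update and the strength/confidence computation; the shape parameters are arbitrary, and modeling the conjunction as a product of the independent variables is an assumption made here for illustration rather than the exact definition used in the cited logic:

```python
# Sketch: (1) a Bayesian update of a Beta prior just adds the observed
# successes/failures to its shape hyperparameters; (2) the strength E[p1 AND p3]
# and its variance, with AND modeled as the product of the independent variables.
import numpy as np
from scipy import stats

# (1) Conjugacy: prior Beta(2, 3), then 7 successes and 1 failure are observed.
prior = (2, 3)
posterior = stats.beta(prior[0] + 7, prior[1] + 1)   # Beta(9, 4)
print(posterior.mean())                               # posterior mean after the update

# (2) Strength and confidence of p1 AND p3 for illustrative shape parameters.
p1, p3 = stats.beta(9, 1), stats.beta(7, 3)
rng = np.random.default_rng(0)
samples = p1.rvs(100_000, random_state=rng) * p3.rvs(100_000, random_state=rng)
print(f"E[p1 AND p3] ~ {samples.mean():.3f} (exact mean: {p1.mean() * p3.mean():.3f})")
print(f"var[p1 AND p3] ~ {samples.var():.4f}")
```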
The rest of the paper describes techniques for defining random variables for a simple software toolchain. An assurance case for the toolchain is given in the form of a process reduction. The logic and advantages of Beta distributions are detailed in the Appendix.
## 3. A Simple Software Toolchain
Consider a simple toolchain comprising a C compiler, a tool (UBSan (Kang et al., 2015)) to detect undefined behaviors, and a code signing system. Suppose the compiler is version 8.0.0 of the Clang C compiler, and C code is compiled with the option for detecting undefined behaviors at runtime, specifically option -fsanitize=undefined (we pick version 8.0.0 because it is used in (Cheng et al., 2017)). Examples of undefined behaviors include an array subscript exceeding a static bound, dereferencing misaligned or null pointers, and signed integer overflow. As demonstrated in (Cheng et al., 2017), whether the instrumented code generated with this option detects an undefined behavior can depend on whether the code has been optimized. This requires care when defining trials (see Sec. 3.1). If Clang and UBSan do not detect any undefined behaviors then all source files are recompiled without the sanitize option and statically linked without the UBSan runtime libraries. The executable is then digitally signed using the private key of a key pair whose public key appears in a signer certificate with common name Alice.
A toolchain comprises stages of steps executed in some order. Stages are atomic and can be exercised independently of other stages. The toolchain above has four stages: **compile**, **compile-sanitize**, **compile-code sign** and **compile-sanitize-code sign**. Since sanitization and code signing depend on compilation, neither is a stage. A stage can be thought of as a computational problem with inputs defining instances of the problem. Every stage has infinitely-many instances, or inputs. They are C source code files for all stages of the simple toolchain. Assume main bodies have no inputs, as in (Cheng et al., 2017), so that detecting undefined behaviors does not depend on choosing certain inputs. This restriction can be lifted if the set of inputs remains fixed for each main body across toolchain trials. Otherwise the sanitizer, which is dynamic, can effectively be redefined by varying inputs across trials as different control paths afford new opportunities to detect undefined behaviors. This should be avoided as the aim is to learn about one sanitizer at a time by applying it to different main bodies. See Sec. 5.2 for details on how to reach a fixed set of inputs, against which sanitizer strength can be measured, as a fixed point of a function.
### Toolchain trials
For the purpose of quantifying toolchain assurance, we must define a trial or experiment. A _trial_ of a stage in the toolchain is an _instance_ of the stage. Note that running a stage twice on the same input doesn't constitute two trials. Just as inputs must differ to be different instances of the same problem, they must differ to be different trials of the same stage. In practice, the **compile-sanitize** stage would be repeated until a version of the source file is reached for which no undefined behaviors are found. This final version would then be recompiled without the UBSan option but with all other options the same. The sanitizer step is omitted, however recompiling here is not a trial of **compile**, as the source code did not change, only the sanitizer option was removed. In fact, sanitizing the final version is not even a trial of **compile-sanitize** if the signer is ready to sign the executable obtained by compiling without the sanitize option. Instead, this compilation would associate with signing and be a trial of **compile-sanitize-code sign**. Likewise, if linking fails because of an unresolved external reference in a source file then the compilation fails and there is no new trial of **compile**. There is also no new trial of it if linking fails due to a problem that does not require changing any source code files (e.g. recovering a missing object file by just recompiling an old source file).
Keep in mind that since detecting an undefined behavior can depend on the optimization option chosen when code is compiled (see page 22 of (Cheng et al., 2017)), there would be a set of trials for each stage of the toolchain for the unoptimizing compiler (option -O0) and another set for each stage for the optimizing one (options -O1/2/3).
## 4. Assurance Case for the Toolchain
An assurance case in the form of a process reduction is a logical implication, widely believed to be true but unprovable, represented as a tree with the consequent at the root and each node having an independent Beta distributed random variable. Associated with each random variable is a hypothesis. For instance, an assurance
case for the simple toolchain is given in Fig. 1. It has \(8\) random variables, each enclosed within parentheses, and a hypothesis for each. The hypothesis at the root, namely \((G)\), asserts that if executable \(Q\) is signed with private key \((d,n)\) then it is free of undefined behaviors.2 Hypotheses \((s1)\)-\((s3)\) and \((p1)\)-\((p3)\) would seem to imply \((G)\). Only Alice can authenticate as Alice using the RSA key pair by \((p1)\wedge(p2)\wedge(p3)\). Thus by \((s3)\), Alice signs \(Q\) with her private key \((d,n)\). If Alice signs \(Q\) then by \((s1)\), clang -O0/UBSan rejects the source used to build \(Q\), which means no undefined behavior was detected. So \(Q\) has no undefined behaviors by \((s2)\). Therefore we have \(\neg(p2)\) process reduces to \(\neg(G)\). In other words, falsifying \((G)\) implies falsifying \((p2)\). The strength of this reduction is the mean of the Beta random variable at \((G)\):
Footnote 2: Free variables \(Q\), \(e\), \(d\) and \(n\) are implicitly universally-quantified variables in the tree.
\[\mathbb{E}[s1\wedge s2\wedge s3\wedge p1\wedge p3]\]
It requires Beta probability density functions (PDFs) for the constituent random variables. They in turn rely on evidence of the truth of their hypotheses based on trials. Thus, they can vary over trials and be updated by Bayesian inference. In practice, we don't expect the Beta distributions for all variables to vary. For instance, it would likely be fixed (true or false) over all trials for \(p1\). But we expect it to vary when learning about the completeness of Clang and UBSan from trials, which is reflected in the Beta PDF for \(s2\).
### Beta PDFs for the random variables
Suppose we have \(m>0\) trials total of all stages in our toolchain of which \(n>0\) are trials of stages involving code signing. Assume each of the \(n\) trials requires reauthenticating the Signer as Alice. Beta PDFs for each of the random variables are defined below.
* Let \(\alpha_{s1}\) be the number of \(n\) trials in which Alice signed \(Q\) and Clang/UBSan reject the source used to build \(Q\), and \(\beta_{s1}\) be the number of \(n\) trials in which Alice signed \(Q\) and Clang/UBSan accept the source code used to build \(Q\).
* The Beta PDF for this random variable is Beta\((\alpha_{s2},\beta_{s2})\). It measures the completeness of Clang/UBSan over \(m\) trials. It is defined incrementally in practice through Bayesian inference. The posterior PDF at trial \(k-1\) is the Beta PDF used in the assurance case for the toolchain at trial \(k\), which may undergo a Bayesian update at trial \(k\) for use at trial \(k+1\) and so on. See Sec. 5.1. Though updates are optional, performing them more accurately measures completeness. Initially, \(s2\sim\text{Beta}(19,43)\) (see pg. 56 of (Bertson et al., 2016)).
* Let \(\alpha_{s3}\) be the number of \(n\) trials where Alice is authenticated using the key pair and \(Q\) is signed with its private key, and \(\beta_{s3}\) the number of \(n\) trials where an impostor of Alice is authenticated using the key pair and \(Q\) is signed with its private key.
* Let \(\alpha_{p1}\) be the number of \(n\) trials where Alice has been issued the key pair and \(\beta_{p1}=n-\alpha_{p1}\).
* Let \(\alpha_{p3}\) be the number of \(n\) trials where Alice is issued the key pair and the private key is known only to her, and \(\beta_{p3}\) be the number of \(n\) trials where she is issued the key pair but the private key is known to others.
### Reduction strength
The extent to which falsifying \((G)\) rests squarely on falsifying \((p2)\) is the strength of the process reduction from \(\neg(p2)\) to \(\neg(G)\). The reduction is from \(\neg(p2)\) because \((p2)\) is beyond control via any processes in the toolchain. One could argue that \((p3)\) is also beyond control if Alice does not care about disclosing the PIN that protects her private key. In this case, it makes sense to process reduce \(\neg(p2)\vee\neg(p3)\) to \(\neg(G)\). The random variable at \((G)\) would now be \((s1\wedge s2\wedge s3\wedge p1)\). Now the complexity of falsifying \((G)\) rests on that of falsifying \((p2)\) or \((p3)\) with strength \(\mathbb{E}[s1\wedge s2\wedge s3\wedge p1]\). Though the strength is greater, it requires admitting in the assurance case that Alice is an uncontrolled threat to the PIN.
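As a toy illustration, with all trial counts below hypothetical except the Beta(19, 43) prior for \(s2\) mentioned above, both variants of the strength can be computed as products of Beta means, again under the assumption that the conjunction of independent variables is evaluated as a product:

```
from math import prod

def beta_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Hypothetical (alpha, beta) tallies for the random variables of the assurance case.
counts = {
    "s1": (40, 2),   # Alice signed Q and Clang/UBSan rejected the source
    "s2": (19, 43),  # the initial prior for Clang/UBSan completeness given above
    "s3": (42, 1),   # Q signed with Alice's key only after authenticating as Alice
    "p1": (42, 1),   # Alice was issued the RSA key pair
    "p3": (41, 2),   # the private key is known only to Alice
}

# Strength of the process reduction from not-(p2) to not-(G).
strength_p2 = prod(beta_mean(*counts[v]) for v in ("s1", "s2", "s3", "p1", "p3"))

# Strength when not-(p2) or not-(p3) is reduced to not-(G): p3 drops out of the product.
strength_p2_or_p3 = prod(beta_mean(*counts[v]) for v in ("s1", "s2", "s3", "p1"))

print(f"E[s1 ^ s2 ^ s3 ^ p1 ^ p3] = {strength_p2:.3f}")
print(f"E[s1 ^ s2 ^ s3 ^ p1]      = {strength_p2_or_p3:.3f}")  # greater, as noted above
```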
## 5. Beta PDFs for Program Analyses
A code sanitizer is an example of dynamic program analysis aimed at deciding whether a given program has an undefined behavior, which is one that is not defined by the semantics of the programming language. Examples of such behaviors are pointers escaping their scope and array bounds violations. There are also static program analyses aimed at detecting undefined behaviors. Some are embedded in compilers like the GCC and Clang C compilers, while others like Frama-C stand alone (Bertson et al., 2016). The technique for building a Beta PDF for program analysis is the same whether it is static or dynamic. Moreover, the technique is independent of the analysis done, as it measures only how complete the analysis is in practice.
The Beta PDF for an analyzer that analyzes programs for a decidable \(P\) is easily defined because one can assume there is a procedure \(M\) that accepts \(P\) and always halts. So for a given program \(p\), the Beta distribution becomes \(\text{Beta}(1,0)\) (true) if \(M\) accepts \(p\) and \(\text{Beta}(0,1)\) (false) if \(M\) rejects \(p\). Unfortunately most of the properties of interest to program analyzers are undecidable, or worse, not even recursively enumerable (r.e.). When the property \(P\) of interest to an analyzer is undecidable, Bayesian inference can be done over trials involving different inputs to measure the analyzer's performance. For any input, it is possible to see if the analyzer succeeded or failed in determining whether the input has property \(P\) by manually inspecting the input. Manually inspecting every input is impractical but suppose it is done to some extent to learn a Beta distribution for the analyzer. Then the distribution would be used to extrapolate its performance on inputs not manually inspected.
This approach can be applied to Clang and UBSan. Suppose \(P\) is the set of all C programs with undefined behaviors. Then \(P\) is undecidable. To see this, note that a C program can simulate a semi-decision procedure for an undecidable yet r.e. set \(L\). Given an input, the program simulates the semi-decision procedure on that input and exhibits an undefined behavior if the procedure accepts the input. As the simulation does not require any undefined behaviors, the undefined behavior will be exhibited only if the procedure accepts the input. So any algorithm to decide whether a C program has an undefined behavior would yield an algorithm deciding \(L\), which doesn't exist.
The set \(P\) is r.e., and Clang/UBSan work together as a procedure to accept it. Assume they are sound, meaning that if they find an undefined behavior, then one actually exists. Further, they always halt in practice, so they must be incomplete; otherwise \(P\) would be decidable. Completeness demands that whenever they are given code with an undefined behavior, they detect it. But this doesn't match what is demanded of them in the assurance case at \((s2)\). There it states that if Clang and UBSan reject the given source code (find _no_ undefined behaviors) then the code has none. This is a statement about their soundness in accepting the complement of \(P\). Accepting the complement of \(P\) though is not their goal.
However, the soundness statement is the contrapositive of a statement about their completeness in accepting \(P\), which is their goal. Therefore what is required of the tools at \((s2)\) can be quantified by measuring their completeness in accepting \(P\). This is done using Bayesian inference, as described in the next section.
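Writing \(\text{accept}(p)\) for Clang/UBSan reporting an undefined behavior in \(p\), the equivalence used here is simply a contrapositive:

\[
\big(p\in P\Rightarrow\text{accept}(p)\big)\;\Longleftrightarrow\;\big(\neg\,\text{accept}(p)\Rightarrow p\notin P\big).
\]

The left-hand side is completeness in accepting \(P\); the right-hand side is the soundness-style statement demanded at \((s2)\).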
### Bayesian updating of \(\operatorname{Beta}(\alpha_{s2},\beta_{s2})\)
Let \(p\) be the source code used to build \(Q\) at a trial. Then the rules for Bayesian updating of \(\operatorname{Beta}(\alpha_{s2},\beta_{s2})\) at the trial are given below:
1. \(\operatorname{Beta}(\alpha_{s2}+1,\beta_{s2})\) if \(\operatorname{Clang}\)/UBSan accept \(p\).
2. \(\operatorname{Beta}(\alpha_{s2},\beta_{s2}+1)\) if \(\operatorname{Clang}\)/UBSan reject \(p\) and \(p\) has an undefined behavior.
3. No update if \(\operatorname{Clang}\)/UBSan reject \(p\) and \(p\) has no undefined behaviors.
Rule (1) applies when \(p\) has an undefined behavior and \(\operatorname{Clang}\)/UBSan detect it, which is success. We know \(p\) has an undefined behavior because \(\operatorname{Clang}\)/UBSan accept \(p\) and we assumed they are sound. Rule (2) applies when \(\operatorname{Clang}\)/UBSan fail to detect an undefined behavior in \(p\). Rule (3) avoids updating altogether because its facts are actually not informative. To see this, suppose \(p\) has no undefined behaviors. Then \(\operatorname{Clang}\)/UBSan cannot accept \(p\) because they are sound. Since they always halt, they reject \(p\) and will never do so erroneously because if they did, it would mean they rejected \(p\) when \(p\)_has_ an undefined behavior, which it doesn't. Therefore, \(\operatorname{Clang}\)/UBSan always succeed when they reject an input that has no undefined behaviors. In fact, they would always succeed on such inputs even if they did nothing but reject _all_ inputs! These successes therefore should not count toward completeness even though the facts in rule (3) look like hypothesis \((s2)\).
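A minimal sketch of the three update rules in code; the callable `p_has_undefined_behavior` is a placeholder name for the (generally manual) determination of ground truth for \(p\), not part of any tool:

```
def update_s2(alpha: float, beta: float, accepted: bool, p_has_undefined_behavior) -> tuple[float, float]:
    """One Bayesian update of Beta(alpha_s2, beta_s2) after a toolchain trial.

    accepted -- True iff Clang/UBSan reported (accepted) an undefined behavior in p.
    p_has_undefined_behavior -- callable giving the ground truth for p,
        in practice established by manual inspection.
    """
    if accepted:
        # Rule (1): by soundness, p really has an undefined behavior and it
        # was detected: a success.
        return alpha + 1, beta
    if p_has_undefined_behavior():
        # Rule (2): Clang/UBSan rejected p although p has an undefined
        # behavior: a miss, counted as a failure.
        return alpha, beta + 1
    # Rule (3): rejecting code without undefined behaviors is uninformative,
    # so the distribution is left unchanged.
    return alpha, beta
```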
### Test-driven Bayesian updating
Recall from Sec. 3 the restriction that main bodies have no inputs. As mentioned, the restriction can be lifted if the set of inputs remains fixed for each main body across toolchain trials. This is so for any unit testable component such as a C source file having unique function definitions. The set of inputs can be defined by test cases for unit and integration testing. Completeness of \(\operatorname{Clang}\)/UBSan is measured relative to these test cases. It would be inaccurate to talk about its completeness without them. First consider unit testing.
Suppose _Units_ is a set of unit testable components and _Tests_ is a set of test inputs for the components. Not all test cases will be meaningful for every component. Let \(\operatorname{Beta}(\alpha,\beta)\) be a prior Beta distribution for \(\operatorname{Clang}\)/UBSan. Function \(H\), defined in Fig. 2, implements the 3 rules for Bayesian updates given in the previous section in the context of unit testing. The first case in the definition of \(H\) applies when \(\operatorname{Clang}\)/UBSan detects an undefined behavior in a unit as it runs on one of the test inputs. In full, \(H\) is defined by the following cases:

* \(H(\textit{Units},\textit{Tests},\operatorname{Beta}(\alpha,\beta))=((\textit{Units}-unit)\cup\{unit^{\prime}\},\textit{Tests},\operatorname{Beta}(\alpha+1,\beta))\) if \(unit\in\textit{Units}\), \(\textit{UBSan}\) detects an undefined behavior when \(unit\) runs on an input in _Tests_, and \(unit^{\prime}\) does not have that instance of the behavior;
* \(H(\textit{Units},\textit{Tests},\operatorname{Beta}(\alpha,\beta))=((\textit{Units}-unit)\cup\{unit^{\prime}\},\textit{Tests}\cup\{test\},\operatorname{Beta}(\alpha+1,\beta))\) if \(unit\in\textit{Units}\), \(\textit{UBSan}\) rejects \(unit\) when it is run on every meaningful input in _Tests_, \(unit\) has an undefined behavior, \(test\) is an input that causes \(\textit{UBSan}\) to detect the behavior in \(unit\), and \(unit^{\prime}\) does not have that instance of the behavior;
* \(H(\textit{Units},\textit{Tests},\operatorname{Beta}(\alpha,\beta))=((\textit{Units}-unit)\cup\{unit^{\prime}\},\textit{Tests},\operatorname{Beta}(\alpha,\beta+1))\) if \(unit\in\textit{Units}\), \(unit\) has an undefined behavior, there is no input to \(unit\) that causes \(\textit{UBSan}\) to detect the behavior, and \(unit^{\prime}\) does not have that instance of the behavior;
* \(H(\textit{Units},\textit{Tests},\operatorname{Beta}(\alpha,\beta))=(\textit{Units},\textit{Tests},\operatorname{Beta}(\alpha,\beta))\) otherwise.
Figure 2. Bayesian updates in the context of unit testing
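A sketch of \(H\) in code, picking the first unit to which a case applies; the helpers `ubsan_detects`, `find_witness_input`, `has_undefined_behavior` and `repair` are placeholder names standing in for running UBSan on a test input, searching for an input that triggers detection, establishing ground truth, and producing the repaired \(unit^{\prime}\):

```
def H(units, tests, alpha, beta,
      ubsan_detects, find_witness_input, has_undefined_behavior, repair):
    """One application of H from Fig. 2; units and tests are sets.

    Returns the updated (units, tests, alpha, beta)."""
    for unit in list(units):
        # Case 1: UBSan detects an undefined behavior on an existing test input.
        if any(ubsan_detects(unit, t) for t in tests):
            fixed = repair(unit)  # unit' without that instance of the behavior
            return (units - {unit}) | {fixed}, tests, alpha + 1, beta
        # Cases 2 and 3: UBSan rejects unit on every meaningful test input,
        # yet unit is known (by inspection) to have an undefined behavior.
        if has_undefined_behavior(unit):
            witness = find_witness_input(unit)  # an input making UBSan detect it, or None
            fixed = repair(unit)
            if witness is not None:
                # Case 2: grow the test suite and count a success.
                return (units - {unit}) | {fixed}, tests | {witness}, alpha + 1, beta
            # Case 3: no input triggers detection, so count a failure.
            return (units - {unit}) | {fixed}, tests, alpha, beta + 1
    # Otherwise: nothing applies, leave everything unchanged.
    return units, tests, alpha, beta
```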
A posterior Beta PDF for \(\operatorname{Clang}\)/UBSan is thus learned through iterated applications of \(H\). The iteration converges to a fixed point of \(H\) if the number of undefined behavior instances among the systems in the sequence strictly decreases. At a fixed point, UBSan finds no undefined behaviors in the system under test because there are none. After some version of the system, we expect no more fixed points will be computed because each requires proving the absence of any undefined behaviors, which may require too much manual effort. Learning about UBSan is frozen at this point. Thereafter, whenever UBSan finds no undefined behaviors in a new version of the system, relative to tests that contain at least those of the last fixed point computed, one will take the mean of the posterior Beta PDF of this last fixed point as the probability that the system has no undefined behaviors.
While we expect the cost of computing fixed points will diminish as the system matures, it should be noted that not every test cycle must be completed with a fixed point. The tradeoff, however, will be knowing less about UBSan's completeness.
Note that \(H\) will credit UBSan with two successes if it detects the same undefined behavior in two different units, or two instances of it in the same unit regardless of the test cases used. If this behavior were the only type detected by UBSan then it might appear \(H\) can be "gamed" into boosting UBSan's performance, even when UBSan has limited capability, if \(H\) is always applied to units with only this type of undefined behavior. But the units to which it is applied come from a real system, call it \(S\), and if \(S\) comprises units with this behavior as the only possible type of undefined behavior then \(H\) computes a Beta PDF that accurately reflects UBSan's performance in the context of \(S\). We are not suggesting this performance be used to judge UBSan when building any system using the toolchain but rather just when building \(S\). So one could see multiple Beta PDFs for UBSan, one for each project using the toolchain, to reflect possible differences in its performance across projects.
Finally, as \(H\) grows the test suite for a system, UBSan's score improves even though UBSan's instrumentation algorithm has not changed. That's fine because new test cases lead to new execution paths to UBSan's instrumentation. But one could imagine the instrumentation algorithm changing over the lifetime of a toolchain. The third cases defining \(H\) in Figs. 2 and 3 handle the case when no input causes UBSan to detect an undefined behavior. \(H\) could record success here instead of failure if the instrumentation algorithm were adapted so that the behavior is detected for some input. If that input were not among the existing test cases then it would be added. Changes to the algorithm should be monotonic in that all previous successes of UBSan are preserved.
## 6. Future Work
A process reduction relies on a community of interest agreeing on a set of conditions _sufficient_ for implying a desired outcome. Outcomes are rarely specified with precision in guidelines for secure software development, if at all. They must be distilled. The implication will undergo iterations and should eventually converge to something that is widely agreed upon to hold but can only be disproved. It becomes a standard at this point and its contrapositive a basis for a process reduction. A working group could distill sufficient conditions for desired outcomes implicit in documents like that for code signing (Bordes et al., 2015) and NIST SP800-218 (Niss et al., 2018). There is precedent for this type of standardization. A similar effort was undertaken by the Overarching Properties Working Group to identify a sufficient set of properties for approving airborne systems (Bordes et al., 2015).
Software test suites are evaluated in different ways using metrics such as input space coverage, code coverage or kill ratios (Bordes et al., 2015). Assuming these metrics are useful, they should be represented as Beta distributions in our framework. For instance with mutation testing (Bordes et al., 2015), one might represent the successes and failures of a test suite by a BetaPDF where success is determined by whether the suite causes a mutation of the program under test to exhibit some observable difference in behavior when executed, for instance, terminate abnormally (Kolmogorov, 1995). The suite is said to kill the mutant if the difference in behavior is observed. The shape parameters of a BetaPDF for the test suite in this case would be mutants killed (\(\alpha\)) and not killed (\(\beta\)) by the suite.
The effectiveness of a test suite, not just a metric for it, should also be represented within a toolchain since the metric may not correlate well with finding faults (Bordes et al., 2015). Correlation should have its own BetaPDF. This implies performing Bayesian updates over time to learn how well the metric correlates to finding faults in a given system, much the way we learned about the completeness of Clang/UBSan for a given system over time. Learning for a class of systems rather than a single system may also be useful. For instance, Modified Condition/Decision Coverage (MC/DC) (Kolmogorov, 1995) has long been used as coverage criteria for tests involving safety-critical systems. Experience therefore must have shown that these coverage criteria correlate well with finding faults in such systems. Either way it may make sense to learn about correlation for specific systems rather than apply the results of more general empirical studies (Bordes et al., 2015; Bordes et al., 2015). This is another direction for future work.
## 7. Conclusion
Attacks on software supply chains have heightened awareness of the need to produce evidence that systems built by them are reliable and safe to execute. The SBOM described in (Kolmogorov, 1995) and qualitative efforts like SLSA (Kolmogorov, 1995) describe informal evidence that sits
at the least rigorous end of an evidence spectrum. At the opposite end is the most rigorous evidence, best characterized by proof-carrying code (PCC) (Hogger et al., 2010). Mobile code security was studied extensively in the mid 1990's. It arose in large part from Java applets and to a lesser extent from DARPA's Active Networks program where routers could execute small programs within packets. In each of these cases, one has to guard against malicious executable code. The idea behind PCC was to couple a proof of some property about an application's executable with the executable. If the property were consistent with a recipient's security policy then the recipient would check the validity of the proof, and if valid would execute the code. Leveraging toolchain facts, we introduce a kind of quantitative evidence for software that sits somewhere between these two endpoints.
## 8. Acknowledgements
Thanks to Lucja Kot, Greg Nelson and Bill Bierman for references on mutation testing, and to Elishiva Zak for an update on an effort to evaluate test suites vis-a-vis mutation testing.
|
2305.07384 | Towards Detecting Inauthentic Coordination in Twitter Likes Data | Social media feeds typically favor posts according to user engagement. The
most ubiquitous type of engagement (and the type we study) is *likes*. Users
customarily take engagement metrics such as likes as a neutral proxy for
quality and authority. This incentivizes like manipulation to influence public
opinion through *coordinated inauthentic behavior* (CIB). CIB targeted at likes
is largely unstudied as collecting suitable data about users' liking behavior
is non-trivial. This paper contributes a scripted algorithm to collect suitable
liking data from Twitter and a collected 30 day dataset of liking data from the
Danish political Twittersphere #dkpol, over which we analyze the script's
performance. Using only the binary matrix of users and the tweets they liked,
we identify large clusters of perfectly correlated users, and discuss our
findings in relation to CIB. | Laura Jahn, Rasmus K. Rendsvig | 2023-05-12T11:24:26Z | http://arxiv.org/abs/2305.07384v1 | # Towards Detecting Inauthentic Coordination in Twitter Likes Data
###### Abstract.
Social media feeds typically favor posts according to user engagement. The most ubiquitous type of engagement (and the type we study) is _likes_. Users customarily take engagement metrics such as likes as a neutral proxy for quality and authority. This incentivizes like manipulation to influence public opinion through _coordinated inauthentic behavior_ (CIB). CIB targeted at likes is largely unstudied as collecting suitable data about users' liking behavior is non-trivial. This paper contributes a scripted algorithm to collect suitable liking data from Twitter and a collected 30 day dataset of liking data from the Danish political Twittersphere #dkpol, over which we analyze the script's performance. Using only the binary matrix of users and the tweets they liked, we identify large clusters of perfectly correlated users, and discuss our findings in relation to CIB.
Novel digital data, political opinion dynamics, social media, coordinated inauthentic behavior, bot detection
## 1. Introduction
Algorithmically curated social media feeds favor posts according to user engagement. The most ubiquitous type of engagement (and the type we study) is _likes_ (Levy, 2017). A post (a tweet, a shared news article, a video, a meme, etc.) may be highlighted e.g. by being placed highly on users' news feeds. Users customarily take engagement metrics such as likes as a neutral proxy for quality and authority (Levy, 2017; Levy, 2017). This incentivizes _influence operations_ to misrepresent, mislead or manipulate opinion dynamics online (Levy, 2017). Such media manipulation tactics have been labeled _coordinated inauthentic behavior_ (CIB) (Levy, 2017; Levy, 2017; Levy, 2017; Levy, 2017). Influence operations and CIB may thus shape public opinion and political discourse through _attention hacking_, the act of exploiting platforms' content sorting algorithms to highlight certain information items to users. This highlights the societal need to address CIB-caused misrepresentation of political views and the spread of harmful low-quality content and misinformation in the online public sphere (Levy, 2017).
To effectively push narratives on social media, influence operations resort to _coordinated_ groups of accounts rather than individual accounts (Levy, 2017; Levy, 2017). This has, for example, led to the establishment of a marketplace for vendor-purchased engagement (Levy, 2017; Levy, 2017) and metric inflation through coordinated social bots. The behavior dictated by an influence operation is labeled _inauthentic_ as it may not reflect the personal beliefs of the instructed user accounts, as these accounts may be run by algorithmic amplifiers such as automated bots or humans according to a supplied protocol (Levy, 2017).
CIB targeted at one-click reactions such as likes is largely unstudied as collecting data about users' liking behavior around a specific political discourse is non-trivial due to the lack of access to platform data for researchers or severe API rate restrictions that prevent collecting comprehensive datasets. The first main contribution of this paper is a script to collect comprehensive data on liking users from Twitter. The second main contribution is a dataset collected with the script. The dataset contains a month-long survey of liking user behavior from the Danish political Twittersphere, collected through the hashtag #dkpol ("Denmark POLitics"). Under this hashtag, citizens, organizations, politicians and journalists from across the political spectrum air, discuss and orientate themselves about current debates in Danish politics. It is _the_ centralized, place-to-be source of information on the debates of the day. The hashtag thus seems a likely candidate for inauthentic coordination, if one seeks to increase the Danish public sphere's attention on some topic. We use the dataset first to evaluate the effectiveness of the script, and second as the basis for a case study of liking user behavior with the aim to determine if the simple liking data has sufficient structure to serve as an entry point for the detection of CIB. We argue that it does.
Using a running survey approach, the script retrieves IDs of the most recent liking users of tweets satisfying a specified text query (e.g. a keyword or hashtag of a chosen political debate), timing retrievals by taking into account Twitter-set rate limits of the public v2 API for Academic Research Access. The script can retrieve far more comprehensive sets of liking user IDs than are available through the default public and commercial tools of the Twitter APIs and Decahose stream. To the best of our knowledge, the resulting data is the first to contain comprehensive collections of user-IDs of liking users. The dataset thus advances the specialized field of studying one-click reaction-based CIB.
The script's point of departure for data collection is the survey of an online _discourse_ around a _domain_ (e.g. a hashtag) instead of a survey of a preselected group of users. Hence, data collection does not require any prior knowledge about potentially coordinated users nor does subsequent data analysis necessarily require the retrieval of additional account data. When identifying coordination of likes given such concise data, one immediately grasps firstly which specific tweet(s) a potential influence operation is targeted at, and secondly which users are involved in the metric inflation (this is in contrast to existing methods for collecting retweeting user IDs, cf. Sec. 1.1 below). If desired, additional account information may then be rehydrated via public APIs. The focus of the collected data and following applications is thus rather on identifying the _effects_ of CIB inflating specific tweets. These effects may be more robust to changes in the evolution of algorithmic amplifiers, social bots and cyborgs, that with varying degrees of automation increasingly emulate authentic users. Our data and applications are not dependent on individual account features nor time-synchronous actions but only on the like behavior towards an observed tweet.
We analyse the dataset in a case study of #dkpol, mainly to illustrate that the liking behavior data has sufficient structure to serve as a point of entry for detection of CIB. Preprocessing the data points into a simple binary and sparse tweet/like matrix suffices to detect like-coordinated accounts without relying on textual, temporal, or training data (see Sec. 3.2), a topic that has previously gone unstudied. We undertake two simple analyses: First, we group users by the toughest clustering criterion: complete equality of their like profiles. Under this very strict criterion, we identify several large perfectly correlated groups, including the likes we purchased as experimental CIB, as well as further perfectly correlated groups of users who are not particularly active (a single like suffices), i.e., without any requirement that they have liked aggressively. Second, we show that these groups can be visualized in the first two dimensions of a dimensionality-reduced space, using the first two eigenvectors of a Singular Value Decomposition of the tweet/like matrix.
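A minimal sketch of both analyses over a toy binary like matrix; the matrix, its orientation (rows as users, columns as tweets) and all names below are illustrative and not taken from the released scripts:

```
import numpy as np
from collections import defaultdict

# Toy binary like matrix: rows are users, columns are tweets, entry 1 iff the user liked the tweet.
likes = np.array([
    [1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1],   # identical to user 0: perfectly correlated
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],   # identical to users 0 and 1
    [0, 1, 1, 0, 0],
])

# Analysis 1: group users whose like profiles are exactly equal.
groups = defaultdict(list)
for user, row in enumerate(likes):
    groups[tuple(row)].append(user)
perfect_clusters = [users for users in groups.values() if len(users) > 1]
print("perfectly correlated user clusters:", perfect_clusters)

# Analysis 2: project users onto the first two singular vectors of the
# (sparse, binary) matrix for a two-dimensional visualization.
U, S, Vt = np.linalg.svd(likes.astype(float), full_matrices=False)
coords = U[:, :2] * S[:2]  # per-user coordinates in the reduced space
print("2-D coordinates per user:\n", coords)
```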
Given a lack of ground truth, we cannot be sure the perfectly coordinated clusters we detect (other than the vendor-purchased groups) are artifacts of CIB. We do believe that the natural correlation is unlikely enough that the groupings raise red flags, warranting further inspection, out of scope of this case study. Our methods may thus serve as pre-studies for bot detection and the application of fact checkers (Sutton et al., 2016).
We make our resources available to the research community, including the raw datapoints complemented with timestamp data (tweet text must be rehydrated per Twitter data sharing policies) and pre-processed user-like data matrices, the scripts used for data collection, for data preprocessing, for evaluation of the completeness of a collected dataset, and for clustering and visualization. Data and scripts are available on Harvard Dataverse (Kennedy, 2017) and the data collection script is additionally available at the public GitHub repository _Get-Twitter-Likers-Data_(Kennedy, 2017).
### Related Work
Social media users have a plethora of available action types (Sutton et al., 2016), many of which may be used in coordinated fashions. E.g., users may coordinate using a specific hashtag, posting a specific URL, tweet, image or mention, or coordinate replies, shares or reactions to existing content. As coordination is not visible when inspecting accounts in isolation, research on CIB has turned to study the collective behavior of groups, with similarities between users serving as a proxy for coordination. Studies have analysed similarities between users posting similar _content_(Kennedy, 2017; Kennedy, 2017; Sutton et al., 2016; Sutton et al., 2016), users having similar _friends and followers_(Kennedy, 2017), and having similarly _timed activities_ (e.g., (Kennedy, 2017; Kennedy, 2017; Kennedy, 2017; Sutton et al., 2016; Sutton et al., 2016; Sutton et al., 2016)). Few studies have looked directly at coordination in one-click reactions such as liking.
Liking is a one-click engagement where users may select one option from a short pre-defined list as their'reaction' to a post, with users' choices typically summed and presented as a quantified metric beneath the item. Reactions include perhaps most famously Facebook's original 'Like', the hearts/likes on Instagram, TikTok and Twitter, and Reddit's up- and downvotes. Sharing and retweeting may also be taken as a one-click reaction on any of these platforms.
Importantly, these reactions inform the platforms' algorithmic content sorting, thus steering users' attention. With attention metrics such as likes being widely used as a proxy for quality and authority, manipulating like counts becomes incentivized for the sake of increased exposure, influence, and financial gain (Sutton et al., 2016). High engagement counts may be perceived as a trust signal about the content (Sutton et al., 2016) and as a positive crowd reaction aiding content to broadcast and to trend (Sutton et al., 2016). Once trending, high engagement counts in likes and shares make users more likely to engage with low-credibility content instead of fact-checking questionable posts (Kennedy, 2017). Scholars have stressed that to fight disinformation campaigns, it is less effective to look at the pushed content (e.g., hashtags, URLs, memes, etc.) and more effective to look at the coordinated content pushing _behaviors_(Sutton et al., 2016).
_Related work on coordinated retweeting._ To push stories online, retweeting and inflation of the retweet metric attract manipulation. Several recent papers look at retweeting as a coordination dimension.
Dutta et al. (Sutton et al., 2016) investigate non-synchronized, collusive retweeters (\(n<1,500\)) involved with _blackmarket_ services.
Such collusive retweeters re-share the tweets of other blackmarket customers to earn credits. The authors use a human annotated dataset and supervised machine learning methods leveraging features such as, e.g., user activity or social network characteristics to distinguish between _customers_ and _genuine retweeters_, later extended to detect _paying customers_[19]. Building on these works, Arora et al. [8] analyze user representations to improve the performance of detecting blackmarket customers while Chetan et al. [13] develop an unsupervised approach to detect collusive blackmarket retweeters leveraging, for example, the merit of tweets and timing of retweets analyzed through a bipartite tweet-user graph.
Schoch et al. [46] study time-synchronous co-retweeting (and co-tweeting) as a trace of coordination to detect astroturfing campaigns given a dataset released by Twitter consisting of tweets by accounts that Twitter classified as being involved in hidden information campaigns. The authors filtered the data and only looked at campaigns with more than \(50,000\) tweets and users that tweeted at least \(10\) times in the observation period. They do so by analyzing timing and centralization of coordination. The approach rests on the assumption that it seems implausible that repeated co-retweeting and co-tweeting happens without centralized coordination (e.g., one actor controlling multiple accounts) in a small time window of \(1\) minute up to \(8\) hours. Increasing the temporal window beyond that yields higher false positive rates in flagging astroturfing accounts. The study builds a co-(re)tweeting graph by drawing an edge between two users that (re)tweet the same post within a minute, but only if this can be observed more than \(10\) times. While the authors rightfully claim that co-retweeters and co-tweeters can be rehydrated from a Twitter dataset, it remains a necessity that one has selected a list of users prior to dataset construction. Some knowledge over the presence of astroturfers is hence necessary a priori: Their approach presupposes to have a list of (suspicious) users instead of embarking on detection given an observable effect.
Similarly concerned with co-retweeting, Graham et al. [26] searches for evidence of bots in \(>25\) million retweets of \(>2.5\) million tweets, collected over the course of \(10\) days, containing COVID-related hashtags. The authors create a user-user '_bot-like' co-retweet network_ of \(>5,000\) Twitter accounts that _frequently_ co-retweet the same tweets within a time window as small as \(1\) second, followed by manual inspection of the connected components.
Pacheco et al. [44] take a high number of overlapping retweets (co-retweeting) as a coordination trace and construct a bipartite network between retweeting accounts and retweeted messages, filtering for accounts that logged at least \(10\) retweets. The authors represent users with TF-IDF weighted vectors containing the retweeted IDs. The weighting discounts the contributions of popular tweets. The projected co-retweet graph is then established via the cosine similarity between the account vectors. Using a hard threshold, they only keep the most suspicious \(0.5\%\) edges, leaving them with a coordinated set of users. The analysis is conducted on an anonymized dataset from DARPA SocialSim containing identified Russian disinformation campaigns, collected from Twitter using English and Arabic keywords. Messages that were identified as coordinated are no longer publicly available.
Interested in how well network communities hide from coordination detection, Weber et al. [50] study retweets using a latent coordination network. When members of a group retweet each others' posts, detection of the involved accounts becomes easy, as the accounts are connected via an edge. The larger the detected coordinated community, the greater the likelihood that members would retweet other members. Notably, the authors find that large groupings of accounts in the Twitter curated dataset, believed to be involved in influence operations, hide well with low internal retweet ratios, and that also official political accounts seem to refrain from being involved in self-retweeting.
Adopting the network approach [41; 44; 50], Tardelli et al. [48] model _evolving_ coordinated retweet communities. This work explores that users may belong to different coordinated groups at different points in time. Using the Jaccard similarity measure, the authors compare influx and outflux into and out of communities at each time step. The resulting temporal networks and dynamic community detection identifies many coordinated communities and highlights the relevance of temporal nuances of coordination.
Instead of leveraging graph-based techniques, Mazza et al. [39] only require the timestamps of retweets and the retweeted tweets for each account, and not, e.g. full user timelines. Their work investigates temporal and synchronous retweeting patterns. The collected data spans just short of \(10\) million Italian retweets from \(>1.4\) million distinct users collected over the course of two weeks. The collected data is filtered for human-like retweet activity between \(2\) and \(50\) times per day and excludes fully automated, benign retweet bots with high retweeting activity, resulting in a dataset with \(63,762\) distinct users. Manual annotation of a subset of the data (\(1,000\) users) serves as a ground truth. Given a user and their retweet history, the authors first visualize different temporal retweet patterns by plotting the timestamp of the original tweet against the timestamp of the retweet in a scatterplot. With a granularity of seconds, the authors compress timestamp data into per-user time series vectors containing time information if the user retweeted a given tweet at a given time, and \(0\) otherwise. The resulting series remains sparse as users usually only retweet once every few minutes.
To reduce sparsity, the data is then compressed employing a sequence compression scheme. Using automatic unsupervised feature extraction, the work exploits that synchronous and coordinated users will be grouped densely together in the feature space, in contrast to heterogeneous human behavior. The authors apply dimensionality reduction techniques and deep neural networks and eventually hierarchical and density-based clustering. Users that are clustered and not treated as noise (i.e., not clustered) are labeled as bots. Users clustered together are then thought of as bots acting in a coordinated and synchronous fashion.
_Related work on coordinated liking._ Despite likes being a commonly adopted and an easily manipulatable mechanism, research on CIB more narrowly targeted at likes is quite scarce:
Bordering on relevance are studies on purchased likes not of posts, but of _pages_ and _followers_ on Facebook and Twitter [5; 10; 17; 30]. Studying page like or follower farms [5; 17], these works develop supervised classifiers using demographic, explicitly temporal, and social characteristics [5; 30]. Notably, Ikram et al. [30] find their bot classifier has difficulty detecting like farms that mimic regular like-spreading over longer timespans, i.e. deliver likes slowly, without high temporal synchronization, and with lower like counts per account.
Beutel et al. [10] study coordinated and time-synchronized attempts to inflate likes on Facebook pages. Their unsupervised method, developed with data from inside Facebook, detects ill-gotten likes from groups of users that coordinate to like the same page around the same time, leveraging temporal data explicitly. The authors follow a graph-based approach, drawing a bipartite graph between users and pages, noting down the time at which each edge was created. They then apply co-clustering looking for users liking the same pages at around the same time. Since [10]'s approach depends on timing and is designed to detect synchronous likes in a "single burst of time", [30] find that [10]'s approach, too, suffers large false positive errors in detecting liking accounts that mimic regular users and deliver likes more slowly.
While the Facebook like button is the same whether it regards a page or a post, page like inflation differs in mechanism from post like inflation. Liking a page on Facebook entails "following" the account, subscribing to new account posts. Thus, this kind of coordinated metric inflation may not catapult a single _post_ to the top of an algorithmically curated newsfeed but creates the illusion of a popular _account_.
Directly about reactions to posts is Torres-Lugo et al.'s [49] study of metric inflation through strategic deletions on Twitter. They analyze coordination in repetitive _(un)liking_ on _deleted_ tweets in influence operations that seek to bypass daily anti-flooding tweeting limits. From a collection point of view, looking at unlikes is a smart move, as this data is in fact available to purchase from Twitter. Alas, the approach is inapplicable to tweets that remain online, such as those central to CIB-based influence operations that push narratives through political astroturfing [46].
In the related field of bot detection, too, the detection of bots designed to engage through reactions has gone unstudied, perhaps due to data restrictions. For a systematic review of the bot detection literature, see [43].
### Empirical Problems
Group-based detection methods are promising "in the arms race against the novel social spambot" [14]. Yet empirical research meets challenges in this domain. The following three problems highlight the need for a feasible data collection script and findable datasets for researchers to develop and test methods to address CIB targeted at reactions online.
_Time-sensitivity._ First, empirical social media studies of coordinated online accounts remain problematic to replicate and reproduce due to time-sensitivity of the relevant data [37]. Attempts to collect the same data twice are likely to fail, as traces of coordination may be altered or deleted after an influence operation was concluded. While e.g. Twitter grants generous academic research access to historic tweets through their API, accounts involved in CIB may evade detection e.g. by changing handle, so they are no longer retrievable in their original appearance [49]. The shortcomings in data reproducibility make CIB/bot detection frameworks difficult to compare, as these typically require live data access [37].
_Data availability._ Second, data availability limits research [11; 25; 37; 45]. Large scale studies may simply be impossible due to data access restrictions [11; 37; 45]. Specifically, data concerning users' reactions is very difficult for researchers to obtain: none of the currently existing datasets include it,1 Twitter's transparency reports do not include information on liking or retweeting users [4], and neither Meta, Twitter nor Reddit supply this data in necessary scope [11; 45].
Footnote 1: See e.g. Indiana University’s Bot Repository, a resourceful, centralized repository of annotated datasets of Twitter social bots [1].
Among the platforms with APIs for academic purposes, only Twitter releases user-IDs of (public) profiles that have liked or retweeted a given tweet. Twitter does not give direct access to _comprehensive_ lists of such IDs, but only releases the user-IDs of the 100 _most recent_ liking/retweeting users of any single post. More restrictively, at most 75 such lists may be requested per 15 minutes. For some Twitter environments, these restrictions may be balanced by using a suitably timed algorithm, cf. below. For huge political hashtags like #MakeAmericaGreatAgain or #Brexit, where CIB-based influence operations may be most feasible,
data restrictions make it practically impossible to obtain a complete picture of liking and retweeting behavior. Twitter's commercial Decahose API stream lists 100% of liking user IDs, but only of a _random_ 10% sample of all tweets, making a targeted analysis of a specific political discourse impossible (Bartos et al., 2016).
_Ground truth._ Third, there is an issue with lacking ground truth as researchers have no access to the empirical truth about accounts engaged in coordinated inauthentic behavior. Qualified guesses can be made based on suspicious similarities in behavior or profile features, but _de facto_, it remains unknown whether two users' actions are authentically correlated or inauthentically coordinated, or how many (partially) automated accounts exist in a total population (Bartos et al., 2016; Krawczyk et al., 2017; Krawczyk et al., 2018; Krawczyk et al., 2018).
Specifically for reaction-based CIB, it seems infeasible to create a labeled dataset that even _approximates_ the ground truth: labeling accounts individually e.g. via crowd-sourcing or the well-established bot classifier _Botometer_ will likely fail as single accounts will often seem inconspicuous (Krawczyk et al., 2018). _Botometer_'s feature-based approach considers accounts one at a time and therefore does not pick up on group anomalies based on suspicious similarity (Krawczyk et al., 2017; Krawczyk et al., 2018). Especially when it comes to coordinated liking behavior, _Botometer_'s feature "favourites_count" (the number of likes a user has delivered) predicts less bot-like behavior, the higher the count is (Krawczyk et al., 2018), thus undermining the attempt to identify coordinated liking. For purposes of studying liking behavior in concert at a collective level (Krawczyk et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018), data availability restrictions make collective labeling impossible.
Instead of relying on (an approximation of) a ground truth, groups of users may be labeled as suspicious, e.g. in terms of graph structure (Bartos et al., 2016; Krawczyk et al., 2018), contextually validated via manual inspection and individual confession by the original poster (Krawczyk et al., 2018; Krawczyk et al., 2018), through NLP of the content promoted (Krawczyk et al., 2018; Krawczyk et al., 2018), or compared to behavior of experimental vendor-purchased metric inflation (Boot et al., 2016; Krawczyk et al., 2018), as we do in the case study in Sec. 3.
## 2. Data Collection
To collect a comprehensive dataset needed to identify coordinated inauthentic liking behavior, we scripted an algorithm that makes effective use of the data limits set by Twitter. Here, we aim to give an intuition of the implementation and workings of the data collection algorithm. We then present its pseudocode.
### Data Collection Script: Intuition
In short, the script surveys Twitter for tweets falling under a _textual query_ during a live _observation period_ (e.g. 30 days). During the observation period, with a fixed time interval \(p\) (e.g. every 5 min.), the script executes a _pull_. Each pull loop contains four steps:
1. It logs tweets posted since the last pull that satisfy the query, and their current number of likes (_like count_).
2. It updates the logged like count of previously logged tweets. Only tweets that are recent enough are tracked in this way (e.g., posted within the last 48 hours).
3. For each logged tweet, it compares the tweet's new like count to its like count _at the last pull where its liking users were requested_ (0 if the liking users have never been requested). Call the numerical difference between these two like counts the tweet's _delta_.
4. It requests the 100 most recent liking users of the top \(n\) tweets with the highest delta above a set threshold (e.g., has minimum 25 new likes).
At the end of the observation period and once every logged tweet is no longer tracked, the liking users of all logged tweets are requested a final time (in timed batches). The script also allows pulling retweeting users in the pull loop. The logic is the same. Pulling liking and retweeting users draws on separate pools of request resources.
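A minimal sketch of the prioritization in steps 3 and 4; the log field names used here are chosen for illustration:

```
def select_tweets_for_likers_request(log, min_delta, top_n):
    """Pick the tweets whose liking users should be requested in this pull.

    log maps tweet_id -> {"like_count": current like count,
                          "like_count_last": like count when likers were last requested}.
    """
    deltas = {
        tweet_id: entry["like_count"] - entry["like_count_last"]
        for tweet_id, entry in log.items()
    }
    candidates = [t for t, d in deltas.items() if d >= min_delta]
    candidates.sort(key=lambda t: deltas[t], reverse=True)  # highest delta first
    return candidates[:top_n]
```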
To raise the chance of a complete data set--one that has not missed any liking users--it is preferable to set the tweet track time as long as possible, the pull interval \(p\) as short as possible, and the number of top \(n\) tweets checked to its maximum. Alas, this will often lead to request shortage.
Twitter's request limits entail that the parameters of the script have to be balanced carefully. For example, a query with 1.000 new tweets a day, each tweet tracked for 24 hours at 5 minute intervals, uses 8.640.000 tweet-requests over a 30 day period. Twitter allows 10.000.000. The same parameters but a query with 1.200 tweets/24h uses 10.368.000 tweet-requests. Hence, the pull interval and the track time must be balanced with respect to the query volume. Additionally, the pull interval (\(p\)) and the number of requests used per pull (\(n\)) must also be balanced with respect to the liking frequency _and_ the activity under the query. Given the 75 likers-requests available per 15 minutes, there are two extremes (if one plays it safe; see further below): a short pull interval of \(p=\frac{15\text{ min.}}{75}=12\) seconds, each pull getting the likers of the top \(n=1\) tweet, and a long pull interval of \(p=15\) min., each pull getting the likers of the top \(n=75\) tweets. The former lowers the risk of missing out on likers during rapid hours, but burns through many more tweet-requests per hour, counting against the 10.000.000 limit. Long pull intervals, on the other hand, raise the risk of missing out on liking users.
The script allows extending the Twitter request resources by the inclusion of multiple bearer tokens. If working in a team where multiple members have Academic Research access to Twitter, all their bearer tokens may be included. The script then cycles through them, using one per pull loop.
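A sketch of the resulting play-it-safe bound on \(top.n\) (75 likers-requests per 15-minute window, multiplied over the available bearer tokens; cf. Table 1):

```
import math

def max_safe_top_n(pull_interval_sec: float, n_tokens: int,
                   req_rate_lim: int = 75, window_sec: float = 15 * 60) -> int:
    """Largest top_n per pull that cannot exceed the likers-request rate limit."""
    return math.floor(req_rate_lim * pull_interval_sec * n_tokens / window_sec)

# The two safe extremes from the text, with a single bearer token:
assert max_safe_top_n(12, 1) == 1         # p = 12 s:   likers of 1 tweet per pull
assert max_safe_top_n(15 * 60, 1) == 75   # p = 15 min: likers of 75 tweets per pull
print(max_safe_top_n(300, 2))             # e.g. p = 5 min with two tokens -> 50
```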
Finally, the pull loop is written in Python 3, and is run through a shell script that resumes it from the point of failure
in case of Twitter connection errors, e.g. caused by an over-use of requests or network disruptions. This means the script allows _not_ playing safe with request resources, most notably with the pull interval \(p\) and the number of likers-requests used per pull, \(n\). Playing it unsafe allows for some flexibility. One may e.g. set \(p=3\) min. and \(n=30\) if one trusts that the actual distribution of tweets and likes is unlikely to break the request limit but wants to readily sacrifice more than the safe amount of requests in case of an activity surge.
### Script: Details and Pseudocode
The algorithm is parameterized by three time periods. First, \(observationtime\) is the length of data collection (e.g. 24 hours, or 1 month), without restriction: with properly set parameters, one can span 1 month, after which request limits reset, making it extendable. The \(observationtime\) starts at a point in time (\(startpoint\)). Second, \(pullinterval\) defines a sleep period between the conclusion of one pull and the initiation of the next. The shorter it is, the finer the temporal resolution and the lower the risk of missing any liking users, but also the higher the request usage. Third, \(tracktime\) specifies how long a tweet is monitored for new likes and retweets after it is posted (e.g., each tweet is tracked for 1 hour, or 48 hours). To collect full data for all tweets posted in \(observationtime\), the total scraping time amounts to \(observationtime+tracktime\).
The algorithm is split into two steps, Alg. 1 and Alg. 2, with Alg. 1 undertaking most of the work, and collects data from Twitter using the Academic Research access API (ARA). ARA provides significant data scraping resources to researchers that are, however, subject to rate limits and request caps specified by Twitter in advance to manage server requests. Among others, but most notably, requesting liking users from ARA always returns the most recent 100 liking users of the tweet in question. Furthermore, this request can only be made \(req.rate.lim=75\) times per 15 minutes. As tweets routinely get more than 100 likes in total, a dataset that contains an as complete as possible set of identifiable liking users must continually live-log liking users.
This is accomplished in Alg. 1, which runs from \(startpoint\) to \(endpoint:=startpoint+observationtime+tracktime\). At \(endpoint\), Alg. 2 runs. It completes a final harvest of liking users by requesting the 100 most recent liking users from all logged tweets. This is especially relevant for those tweets with low like counts de-prioritized in Alg. 1.
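A minimal sketch of the final harvest performed by Alg. 2, assuming the same `get_likers` request used in Alg. 1, the \(min.likes\) threshold from Table 1, and sleeping whenever the 15-minute budget of likers-requests is spent; the `log` field names are illustrative:

```
import time

def final_harvest(log, min_likes, token, get_likers,
                  req_rate_lim: int = 75, window_sec: float = 15 * 60):
    """Request the 100 most recent liking users of every logged tweet once more.

    log maps tweet_id -> {"like_count": ...}; only tweets with at least
    min_likes likes are harvested, in rate-limited batches.
    """
    harvested = {}
    requests_in_window = 0
    for tweet_id, entry in log.items():
        if entry["like_count"] < min_likes:
            continue
        if requests_in_window == req_rate_lim:
            time.sleep(window_sec)  # wait for the likers-request budget to reset
            requests_in_window = 0
        harvested[tweet_id] = get_likers(tweet_id, token)  # 100 most recent liking users
        requests_in_window += 1
    return harvested
```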
Between \(startpoint\) and \(endpoint\), Alg. 1 performs a _pull_ every \(pullinterval\) seconds. A pull at time \(t\) outputs a dataframe \(L_{t}\) of tweet-IDs and their liking users. Further, it continuously outputs dataframes \(T_{t}\) that contain tweets, like count, retweet count, and meta-data including time of origin, text, posting user, language etc. Alg. 1 and Alg. 2 require the input parameters in Table 1.
## 3. Case Study: Data Collection and Analysis of the Danish Twittersphere
To study both the performance of the contributed script and the usefulness of the resulting dataset to address CIB, we analyze a case study of the Danish political Twittersphere.
### Dataset: Parameters, Completeness and Descriptive Statistics
The dataset used in this paper was collected using the described script, without manual intervention during its runtime. The text query was "#dkpol -is:retweet", meaning that the script sought tweets falling under #dkpol, excluding retweets. Two bearer tokens were used, doubling the request resources available. The observation period started the afternoon of May 25th, 2022 and was 30 days long. Tweets
| Parameter | Description |
| --- | --- |
| \(keyword\) | Keyword(s) or hashtag(s), e.g. \#dkpol. |
| \(token\), \(\lvert token\rvert\) | ARA Twitter Authentication Bearer Token, and the number of tokens. More than 1 is possible; more raise the request limits. |
| \(startpoint\) | Date and time to start data collection. Must be in the past, e.g. now minus 10 seconds. |
| \(observationtime\) | Observation period, e.g. 1 hour or 60 days. |
| \(tracktime\) | How long to track each tweet for new likes, e.g. 48 hours. Longer periods use up the rate limit more quickly. |
| \(pullinterval\) | Sleep interval between pull completion and the next pull, e.g. 300 seconds. Shorter intervals use up the rate limit more quickly. |
| \(min.delta\) | How many new likes must a tweet have gotten before we request its liking users? To play safe, satisfy \(min.delta\) + \(min.delta\) \(\leq req.rate.lim\). |
| \(top.n\) | Determines from how many tweets to request likers per pull. To play safe, satisfy \(top.n\leq req.rate.lim\cdot\frac{pullinterval}{15\cdot 60\,sec}\cdot\lvert token\rvert\). |
| \(min.likes\) | Minimum like (retweet) count of tweets to be considered for the final harvest, e.g. 1 or 10. |
| \(req.rate.lim\) | Twitter rate limit: 75 requests per 15 min. for liking and retweeting users each. |
Table 1. Input parameters for Algorithms 1 and 2.
```
1:Input:\(keyword,token,startpoint,observationtime,pullinterval,tracktime,min.delta,top.n\)
2:Output:\(T_{t},L_{t}\) for \(t\in\mathit{pullpoints}:=\{t\leq endpoint:\ t=startpoint+k\cdot pullinterval\) for a \(k\in\mathbb{N}\}\)
3:if exists file \(log\) then
4: load \(log\) // to resume from error
5:else
6:\(log\leftarrow\varnothing\) // start empty dataframe with columns \(\mathit{tweet},\mathit{like\_count},\mathit{like\_count}.last\) to track tweets' like count now and last their likers were pulled
7:endif
8:while true do
9:if \(sys.time=t\) for some \(t\in\mathit{pullpoints}\) then // if now is a time to pull
10:\(T_{t},L_{t}\leftarrow\varnothing\) // start empty dataframes for tweets and their metadata, and for liking users
11:\(start=\begin{cases}startpoint\textbf{if }t-tracktime<startpoint\\ t-tracktime\textbf{else}\end{cases}\)
12:end = \(\begin{cases}t\textbf{if }t<startpoint+observationtime\\ startpoint+observationtime\textbf{else}\end{cases}\)
13:\(T_{t}\leftarrow\) get_tweets(\(keyword,start,end,token\)) // pull tweets (incl. \(like.count\)) under \(keyword\) posted between \(start\) and \(end\), auth. with \(token\)
14: save \(T_{t}\) // save to file with timestamp
15:\(log\leftarrow\) update_log_1(\(log,T_{t}\)) // For \(tweet\) in \(T_{t}\): if \(tweet\) is not in \(log\), append it with \(like.count\) from \(T_{t}\) and \(like.count.last=0\); else update \(tweet\)'s \(like.count\) in \(log\) to its \(like.count\) in \(T_{t}\)
16:\(candidates\leftarrow\) find_candidates(\(log,min.delta\)) // return list of all \(tweet\) in \(log\) for which \(delta:=like.count-like.count.last\geq min.delta\)
17: sort \(candidates\) by \(delta\) in descending order // introduce retrieval priority.
18:\(top\gets candidates[0:top\_n-1]\) // restrict to \(top\_n\) tweets with highest \(delta\).
19:for\(tweet\) in \(top\)do
20:\(L_{t}\leftarrow\) get_likers(\(tweet,token\)) // pull 100 most recent likers
21:\(log\leftarrow\) update_log_2(\(log,T_{t}\)) // update \(tweet\)'s \(like.count.last\) in \(log\) to its \(like.count\) in \(T_{t}\)
22:endfor
23: save \(L_{t}\) // save to file with timestamp
24:save \(log\) // save to file
25:else
26: break
27:endif
28:endwhile
```
**Algorithm 1** Main loop of algorithm to retrieve liking users from Twitter
Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to the users from Twitter to Twitter to the users from Twitter to Twitter
this is not necessarily a problem: temporal detection methods leveraging time-synchronous user behavior to detect coordination can easily identify such behavior. Negative deviation indicates that the script has collected more liking users than the _like count_ suggests. This happens when likes are retracted, the liking profiles are deleted,2 or a tweet attracted likes post tracktime, which we collected in the final harvest.
Footnote 2: These are both actions genuine users may take, but are also often observed with vendor-purchased metric inflation [49].
Second, we find that for high engagement tweets, the script performs well and collects most of the liking users. In contrast, for very low engagement tweets, the script is more prone to miss out on more than 10% of users. This is due to the algorithm prioritizing tweets that get traction by allocating requests to collect the growing sets of likers.
Third, and to complement the plots in Fig. 1, for 39.98% of the 6702 tweets the script collects exactly as many liking users as the like count suggests. For 93.7% of the 6702 tweets, the number of collected likers falls within 10% of the like count. Considering negative deviation only, for 96.6% of the 6702 tweets the script deviates negatively by 10% or less; considering positive deviation only, for 97.06% of the 6702 tweets it deviates positively by 10% or less. That is, in 97% of cases the script seemingly collects 90%+ of liking users.
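For concreteness, these deviation statistics can be computed with a few lines of code. The sketch below assumes a list of `(like_count, collected)` pairs per tracked tweet; the variable names are illustrative and not taken from the released collection script.

```python
# Minimal sketch of the per-tweet deviation analysis; `tweets` is an assumed
# list of (like_count, collected) pairs, one per tracked tweet.
def deviation_stats(tweets, tol=0.10):
    n = len(tweets)
    rel = [(like_count - collected) / max(like_count, 1)   # > 0: likers missed, < 0: extra likers
           for like_count, collected in tweets]
    return {
        "exact_match_share": sum(1 for d in rel if d == 0) / n,
        "within_tol_share": sum(1 for d in rel if abs(d) <= tol) / n,
        "neg_dev_within_tol": sum(1 for d in rel if d >= -tol) / n,
        "pos_dev_within_tol": sum(1 for d in rel if d <= tol) / n,
    }
```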
### Analysis: Perfect Correlation
In this case study, we make use of very simple user data: a binary matrix containing a row for each tweet and column for each user, each cell marked 1 if the user liked the tweet, else 0. Again, the dataset contains temporal data as well, but we ignore it here, as we are mainly interested in seeking patterns in like behavior alone.
Assume we have observed \(n\) tweets. Let \(Likers_{k}\), \(k\leq n\), be the set of users observed to have liked tweet \(k\), so \(Likers=\cup_{k\leq n}Likers_{k}\) is the set of all observed liking users. With \(m=|Likers|\), we compress our data to a binary \(n\times m\) matrix \(\mathbf{L}\) with entry values in \(\{0,1\}\), each row representing a tweet and each column a user: the entry \(\mathbf{L}_{i,j}=1\) if user \(j\) has liked tweet \(i\), and \(0\) otherwise. Henceforth, we identify user \(j\) with the column \(\mathbf{L}_{*,j}\) that contains their like profile. In this case study, \(\mathbf{L}\) is of dimension \(13,243\times 47,714\).
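A minimal sketch of how such a matrix can be assembled from the per-tweet liker sets is given below (using `scipy.sparse`; the input name `likers_per_tweet` is illustrative):

```python
import numpy as np
from scipy.sparse import lil_matrix

# Sketch: build the binary tweet-by-user like matrix L from per-tweet liker sets.
# `likers_per_tweet` is an assumed list of sets of user IDs, one set per tweet.
def build_like_matrix(likers_per_tweet):
    all_likers = sorted(set().union(*likers_per_tweet))
    col = {user: j for j, user in enumerate(all_likers)}
    L = lil_matrix((len(likers_per_tweet), len(all_likers)), dtype=np.int8)
    for i, likers in enumerate(likers_per_tweet):
        for user in likers:
            L[i, col[user]] = 1          # row i = tweet i, column j = user j
    return L.tocsr(), all_likers
```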
We seek to group users as exhibiting coordinated liking behavior if their like profiles are sufficiently similar, according to some measure. Existing work routinely projects bipartite data structures (which \(\mathbf{L}\) is) onto a user-user similarity graph using a distance or similarity metric (e.g., [41, 44]) or develops algorithms to detect dense subgraphs to identify anomalous groups of nodes (e.g., [29, 47]). Here, we apply the strictest measure: we group two users if, and only if, they exhibit _exactly the same like behavior_. This is equivalent to grouping users that have cosine similarity 1, Jaccard similarity 1, or Hamming distance 0.
We apply this strictest measure because behavior labeled as coordinated under it will also be labeled as coordinated under any less discriminating measure. The approach is thus cautious with regard to labeling users as coordinated. The method is not designed to identify all coordinated inauthentic behavior in likes; there may very well be nuanced, less than perfectly correlated inauthentic behavior. To answer whether a collection of tweet likes exhibits first signs of CIB, we propose the method only as valid for positive answers: if this strongly discriminating method finds such signs, then methods with lower bars for coordination should, too. If the method does not find such signs, we would deem it fallacious to take this as evidence that no CIB occurred.
To group users with identical like profiles, we would in the worst case have to compare all users pair-wise, i.e., undertake \(\frac{47,714^{2}-47,714}{2}\) comparisons. To avoid as many of these comparisons as possible, we sort users into bins: we initiate a list with one bin containing the first user. Every later user is compared with one user from each bin in the list, checking larger bins first, and placed in the first bin that provides a perfect match. If no such bin exists, we add a new bin for the user at the end of the list. We find only \(25,806\) bins.
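The same bins can also be obtained directly by keying users on their like profiles, which is equivalent to the pair-wise procedure above; a sketch, assuming the sparse matrix \(\mathbf{L}\) built earlier, follows.

```python
from collections import defaultdict

# Sketch: group users into bins of identical like profiles, keyed on each
# user's column of the CSR matrix L (column j corresponds to user j).
def bin_identical_likers(L):
    Lc = L.tocsc()
    bins = defaultdict(list)
    for j in range(Lc.shape[1]):
        profile = tuple(sorted(Lc.indices[Lc.indptr[j]:Lc.indptr[j + 1]]))  # tweets liked by user j
        bins[profile].append(j)
    return sorted(bins.values(), key=len, reverse=True)
```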
49.9% of users are sorted into bins of size 1. Filtering for bins of at least size 50 (as smaller bins are negligible in impact for CIB), we find 50 bins with \(13,018\) out of \(47,714\) users. Put differently, \(27.28\%\) of users are in a group with at least 49 others that share the exact same like behavior across all \(13,243\) tweets. These \(27.28\%\) most often like only 1 tweet, sometimes \(2\). In the largest bin, \(3,217\) users are perfectly correlated, liking the same tweet.
Figure 1. Missed likes per tweet, as share of its maximal like count, arranged by like count in ascending order. Dots represent tweets. Labels “VP \(n\)” are on tweets for which we vendor-purchased \(n\) likes. Blue marks the tweets of the 50 largest bins of perfectly correlated likes (cf. Sec 3.2).
The users behind the likes we purchased from online vendors are collected in bins of their own. We refer to Fig. 2 for an overview of the bin sizes.
We find several bins of users with perfectly identical liking behavior that are unrelated to our purchases. We cannot jump from _correlation_ to _coordination_ and state that these bins contain users engaged in coordinated inauthentic behavior. We do, however, find the larger bins suspicious and warranting further analysis, cf. the discussion in Sec. 4.
We find the larger bins suspicious because we consider it unlikely that such correlation arises without coordination. For example, estimate the probability that each bin is non-coordinated under the following charitable assumptions (charitable in that they favor the odds of large bins): assume that the probability that any two users share the exact same like profile without being coordinated is \(c=.95\). For simplicity and charity, ignore that this probability attaches to every unordered pair of users in a bin, and let the probability that a bin \(B\) of size \(|B|\) occurred without coordination be \(P(B)=c^{|B|-1}\), i.e., the probability that \(|B|-1\) users pairwise and independently correlate with the same user \(i\) from \(B\). This probability drops drastically as \(B\) grows:
\[\begin{array}{c|ccccccc}|B|=&2&10&50&60&75&100&200\\ \hline P(B)=&.95&.63&.08&.05&.02&.006&3.69\cdot 10^{-5}\end{array}\]
These (fictitious) probabilities do not mean that it is unlikely that e.g. 60 users liked the same tweet--but that it is unlikely that they all liked or did not like _all the same tweets_. Even under charitable conditions, bins larger than 60 quickly seem highly unlikely. We further discuss the implications of our results in Sec. 4.
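The table entries follow directly from the stated assumption and can be reproduced as:

```python
# Reproduces the table above under the charitable assumption c = .95.
c = 0.95
for size in (2, 10, 50, 60, 75, 100, 200):
    print(size, round(c ** (size - 1), 6))   # P(B) = c^(|B| - 1)
```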
_Singular Value Decomposition._ To visualize and locate the identified bins among all users, we turn to plotting the data in a dimensionality-reduced space: With dimensionality reduction, user behavior often exhibits a clustered structure, for example, separating bots and humans in labeled bot datasets [42, 53], disclosing synchronous clusters of retweeters [23] (later used in baseline experiments by [8, 13, 20]), revealing generally correlated groups such as polarized groups of users [51] among users writing Twitter Birdwatch notes, or coordinated clusters of agents as in [33] given computer-simulated data.
We calculate the singular value decomposition (SVD) \(\mathbf{X}=\mathbf{U}\mathbf{D}\mathbf{V}^{T}\) of the \(m\times m\) sample correlation matrix \(\mathbf{X}\) of the data in matrix \(\mathbf{L}\). We consider the first \(q=2\) dimensions, i.e., the first two columns of the \(m\times m\) orthogonal matrix \(\mathbf{U}\), weighted with the corresponding eigenvalues collected in the diagonal \(m\times m\) matrix \(\mathbf{D}\)[28]. We plot the scatterplot of \(\mathbf{U}_{q}\mathbf{D}_{q}\) in Figure 3. In the plot, each dot represents a liking user. While we color-coded the users placed in the largest 50 bins, they can also be discerned through their darker shade, which stems from many dots perfectly overlapping one another.
Figure 3: Scatterplot of \(\mathbf{U}_{q}\mathbf{D}_{q}\). Top 50 perfectly correlated bins of users overlap perfectly with one another in clusters colored in blue. Bins of vendor-purchased likers are among the bottom left groups of clusters.
Figure 2: Bins with at least two users, the number of users in bins of each size, and the number of bins of each size. E.g., the left-most bar shows there are \(\sim 2000\) users (yellow bar) distributed over \(\sim 1000\) bins (solid blue line) of size \(2\) (dotted purple line), while the rightmost shows there are \(\sim 3217\) users distributed over \(1\) bin of size \(3217\). The number of bins of size \(n\) drops to \(1\) at \(n=48\).
The SVD and the scatterplot thus pick up on the correlation and the vendor-purchased metric inflation. As an alternative route, note that clustering on these first two eigenvectors (e.g., using a Gaussian Mixture Model as done in (Krishnan et al., 2017)) picks up on the inauthentically coordinated users we know of, too.
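A conceptual sketch of the projection plotted in Figure 3 is given below. It operates on a dense copy of \(\mathbf{L}\) for readability; at the scale of this case study one would instead use a sparse or truncated SVD. The function name is illustrative.

```python
import numpy as np

def project_users_2d(L_dense, q=2):
    # m x m user-user sample correlation matrix of the tweet-by-user matrix L.
    X = np.corrcoef(L_dense, rowvar=False)
    # For the symmetric positive semi-definite X, the SVD coincides with the
    # eigendecomposition (up to signs), so the columns of U are eigenvectors.
    U, d, _ = np.linalg.svd(X)
    # First q columns of U, weighted by the corresponding values on the diagonal of D.
    return U[:, :q] * d[:q]
```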
## 4. Concluding Remarks: Discussion & Ethical Considerations
_Data collection discussion._ The script we have presented here is designed to collect the IDs of liking (and/or retweeting) users of tweets that satisfy a selected textual query. As such, the script takes a _domain first_ perspective on data collection, rather than a _user first_ perspective as most other work designed to investigate coordinated inauthentic behavior.
The dataset presented in this paper is collected around the domain of the Danish political Twittersphere, found under #dkpol. For this domain, using the parameters described and two bearer tokens, the script had a reasonably low rate of missing liking users, and misses more than 10% of liking users in only 3% of cases when run continuously for 30 days. Such a targeted dataset cannot be obtained directly through any of Twitter's data access options.
In an international context, #dkpol is a small domain. With the same parameters and number of bearer tokens, the script would indubitably fare less well on much larger domains. For larger domains with more intense liking activity, it would be interesting to study the script's performance with more bearer tokens and far more aggressive pull parameters, such as much lower \(pullinterval\). As data retrieval from Twitter is not instantaneous (especially when it comes to updating the like count of a large batch of tweets), we suspect that a satisfactory data collection will involve multiple machines running the script in parallel, each tracking a subset of tweets assigned to them (e.g. using tweet ID _modulo k_ for \(k\) machines).
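The suggested sharding rule is straightforward to implement; a small sketch with illustrative names:

```python
# Sharding sketch: machine `machine_id` in {0, ..., k-1} only tracks the
# tweets whose numeric ID falls into its shard.
def my_shard(tweet_ids, machine_id, k):
    return [t for t in tweet_ids if t % k == machine_id]
```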
Another, and favorable, option for obtaining the data on one-click reactions would be if Twitter or other social media platforms made this data available to the research community. We hope that the case study in this paper--where even a crude and strict analysis raises red flags for CIB--may be used as an argument that one-click reaction data is relevant in the study of coordinated inauthentic behavior and thus in the arms race against online misinformation to ultimately put pressure on the social media industry to release data.
_Analysis discussion._ In our case study, the controlled CIB through vendor-purchased likes is grouped into distinct bins that we can match to our tweets. The coordination here is achieved through weak ties in our bipartite graph structure \(\mathbf{L}\). We complement, for example, Weber et al.'s (Weber et al., 2017) approach focused on coordination through strong ties. As (Weber et al., 2017) acknowledges as an open issue and we show, coordination may take place along weak ties. With our like-based approach, we provide first steps towards a measure to detect such coordination. In contrast to existing work (e.g. (Werner et al., 2018)), the present like-based approach does _not_ need to filter the data for strongly tied communities, highly influential users and superspreaders, or very active users that, e.g., like a minimum number of times within a short period. Without filtering, we are able to group users with such behaviors together.
Our analysis made use of vendor-purchased likes. Purchasing engagement metric inflation violates Twitter's platform manipulation and spam policy (Bradbury et al., 2017), which defines "platform manipulation as using Twitter to engage in bulk, aggressive, or deceptive activity that misleads others and/or disrupts their experience.". In the name of the research center with which the authors are affiliated, we created two Twitter accounts ('CIBS1' (@CIBS110) and 'CIBS2' (@CIBS22)) that posted 6 tweets with the text '_Research test tweet n/6. Apologies for spamming #dkpol._' for \(n=1,..,6\). We inflated the like count for these 6 tweets. We acknowledge that the coordinated inflation of these tweets might have disrupted the experience of Twitter users. To the best of our assessment, the amplification of these tweets does not comprise _harmful_ coordinated activity, nor was it deceptive or commercially motivated; it openly declared a research motivation. Ethically, we thus believe that the benefits of studying coordinated inauthentic behavior outweigh the minimal disruptions we have caused to Twitter users by violating Twitter's manipulation and spam policy.
Unrelated to our purchases, we further find and visualize several large groups of users with perfectly correlated, identical liking behaviors--similarly achieved through weak ties. We have no ground truth about whether the suspected accounts beyond our test are naturally correlated and not inauthentically coordinated, yet we believe that natural correlation is unlikely enough that such groupings are red flags for CIB, and warrant further inspection, out of scope of this case study. Our methods may thus serve as pre-studies for bot detection and the application of fact checkers (Werner et al., 2018). Further, the dataset and explorative case study may serve as a point of departure for future research to explore the correlation structures among liking users and the development of novel detection methods.
_Censorship._ Any flagging of behavior in public fora raises ethical concerns about censorship. The classification of reactions such as likes and retweets to tweets is no different. Generally, we find that the flagging of coordinated behavior used by inauthentic attention hackers is defendable, justified by the aim to combat misinformation online. We omit further
discussion of this point. However, in applying automated techniques, there is always a risk of misclassification. If a technique is used for censorship, this may lead to wrongful labeling. The methods for initial exploration proposed here may then risk unjustifiably labeling users due to behavioral correlation with strongly coordinated groups of users. We strongly recommend that the methods here are taken as a first step towards fact-checking content and users, and not as a final verdict about specific individual users.
Data collection approval. Approval of data collection and processing of personal data in the research project was granted by the faculty secretariat of the University of Copenhagen. The approval emphasizes that the processing of personal data in the project is in accordance with the rules of the European General Data Protection Regulation, Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data. That the study would be undertaken was made public on the authors' university websites.
Datasets and code availability. Dataset and code are made available for the research community [32], hosted on the archival repository Harvard Dataverse, which provides a Digital Object Identifier (DOI) for better findability. To comply with the Twitter terms, access to the data on Harvard Dataverse is granted when researchers actively agree to the Twitter Terms of Service, Privacy Policy and Developer Policy. The data collection code is also available in the public GitHub repository _Get-Twitter-Likers-Data_ [31].
|
2310.03354 | Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed
Cooperative-Competitive Games | Self-play (SP) is a popular multi-agent reinforcement learning (MARL)
framework for solving competitive games, where each agent optimizes policy by
treating others as part of the environment. Despite the empirical successes,
the theoretical properties of SP-based methods are limited to two-player
zero-sum games. However, for mixed cooperative-competitive games where agents
on the same team need to cooperate with each other, we can show a simple
counter-example where SP-based methods cannot converge to a global Nash
equilibrium (NE) with high probability. Alternatively, Policy-Space Response
Oracles (PSRO) is an iterative framework for learning NE, where the best
responses w.r.t. previous policies are learned in each iteration. PSRO can be
directly extended to mixed cooperative-competitive settings by jointly learning
team best responses with all convergence properties unchanged. However, PSRO
requires repeatedly training joint policies from scratch till convergence,
which makes it hard to scale to complex games. In this work, we develop a novel
algorithm, Fictitious Cross-Play (FXP), which inherits the benefits from both
frameworks. FXP simultaneously trains an SP-based main policy and a counter
population of best response policies. The main policy is trained by fictitious
self-play and cross-play against the counter population, while the counter
policies are trained as the best responses to the main policy's past versions.
We validate our method in matrix games and show that FXP converges to global
NEs while SP methods fail. We also conduct experiments in a gridworld domain,
where FXP achieves higher Elo ratings and lower exploitabilities than
baselines, and a more challenging football game, where FXP defeats SOTA models
with over 94% win rate. | Zelai Xu, Yancheng Liang, Chao Yu, Yu Wang, Yi Wu | 2023-10-05T07:19:33Z | http://arxiv.org/abs/2310.03354v1 | # Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive Games
###### Abstract.
Self-play (SP) is a popular multi-agent reinforcement learning (MARL) framework for solving competitive games, where each agent optimizes policy by treating others as part of the environment. Despite the empirical successes, the theoretical properties of SP-based methods are limited to two-player zero-sum games. However, for mixed cooperative-competitive games where agents on the same team need to cooperate with each other, we can show a simple counter-example where SP-based methods cannot converge to a global Nash equilibrium (NE) with high probability. Alternatively, Policy-Space Response Oracles (PSRO) is an iterative framework for learning NE, where the best responses w.r.t. previous policies are learned in each iteration. PSRO can be directly extended to mixed cooperative-competitive settings by jointly learning team best responses with all convergence properties unchanged. However, PSRO requires repeatedly training joint policies from scratch till convergence, which makes it hard to scale to complex games. In this work, we develop a novel algorithm, _Fictitious Cross-Play_ (FXP), which inherits the benefits from both frameworks. FXP simultaneously trains an SP-based main policy and a counter population of best response policies. The main policy is trained by fictitious self-play and cross-play against the counter population, while the counter policies are trained as the best responses to the main policy's past versions. We validate our method in matrix games and show that FXP converges to global NEs while SP methods fail. We also conduct experiments in a gridworld domain, where FXP achieves higher Elo ratings and lower exploitabilities than baselines, and a more challenging football game, where FXP defeats SOTA models with over 94% win rate.
Mixed Cooperative-Competitive Games; Nash Equilibrium; Multi-Agent Reinforcement Learning.
Footnote †: journal: Information Systems (AAMAS 2023). A. Ricci, W. Yech, N. Agmon, R. An (eds), May 29 - June 2, 2023, London, United Kingdom.
## 1. Introduction
Self-play (SP) has been the most popular paradigm for multi-agent reinforcement learning (MARL), where agents collect training experiences by playing against themselves and adopt single-agent RL algorithms for policy improvement by treating other agents as part of the environment. This framework has led to great advances in a wide range of scenarios, including fully cooperative games (Han et al., 2017; Xu et al., 2018), two-player competitive games (Xu et al., 2018; Xu et al., 2018), and even mixed cooperative-competitive games (Beng et al., 2018; Chen et al., 2018).
Despite these empirical successes, the theoretical convergence properties of SP are limited to two-player zero-sum games, where the average policies of no-regret algorithms in SP are guaranteed to converge to a Nash equilibrium (NE) (Chen et al., 2018). However, other settings, particularly the mixed cooperative-competitive games, are largely unstudied. Existing works often directly apply the MARL methods originally designed for two-player zero-sum games to more general settings and assume strong results can be still achieved.
Unfortunately, we show a simple counter-example where SP methods converge to a suboptimal joint policy that is exploitable by an adversary team. This is because agents in popular MARL algorithms treat both their teammates and opponents as part of the environment and optimize their own policies in a fully decentralized fashion. As a result, the team's joint policy is likely to converge to a _local_ NE where no single agent can improve the return by changing its policy unilaterally, but the team can _jointly_ change their policies to get a higher return towards a _global_ NE.
To inherit the convergence properties in two-player zero-sum games and to find a global NE that is unexploitable by any adversary team, agents from the same team are supposed to cooperatively optimize their joint policy in mixed cooperative-competitive games. Policy-Space Response Oracles (PSRO) (Shi et al., 2018) is an alternative framework that generalizes the double oracle (DO) (Xu et al., 2018) algorithm and is guaranteed to converge to an NE in two-player games. PSRO maintains a population of policies and a distribution (i.e., meta-policy) over the policy pool. In each PSRO iteration, it trains the best response (BR) to the maintained mixed strategy according to the meta-policy and adds this BR policy as a new one to the policy pool. When applied to mixed cooperative-competitive games, each PSRO iteration solves a _fully cooperative_ game by playing against a _fixed_ opponent policy. Therefore, we can view each team of agents as a joint one and accordingly inherit all the convergence properties of PSRO from the two-player zero-sum setting. However, since
PSRO requires finding a joint best response in each iteration, in order to promote exploration and avoid being trapped in a local sub-optimum, the BR policy needs to be trained from scratch in every iteration. This can be particularly expensive and sample inefficient in complex multi-agent games. In addition, PSRO may have to fully explore the entire policy space before converging to an NE, resulting in a substantially large number of iterations in practice. Thus, despite its theoretical properties, PSRO has been much less utilized than SP in real-world applications.
In this work, we propose a new algorithm, Fictitious Cross-Play (FXP), for learning global NEs in mixed cooperative-competitive games. FXP aims to bridge the gap between SP and PSRO by training an SP-based _main policy_ and a BR-based _counter population_ of policies. The main policy aims to produce the final global NE and is trained by a mixed strategy over self-play, fictitious play against its past versions, and cross-play against the counter population. The counter population aims to exploit the main policies and help them get out of local NEs by cross-play against past versions of the main policy. We remark that in a majority of the games played by FXP, one team's policies are fixed, leading to a cooperative learning problem, which helps shape the main policy towards the global NE. Meanwhile, since the main policy is still trained by self-play, FXP is able to empirically achieve much faster policy improvement than iterative BR-based methods.
We first show in matrix games that FXP quickly converges to the global NE while SP and PSRO fail within the same number of training steps. Then we evaluate our algorithm on the gridworld MAgent Battle environment, where it achieves a much lower exploitability and an Elo rating over 200 points higher than six baselines. Finally, we scale up FXP to tackle the challenging 11-vs-11 multi-agent full game in the Google Research Football (GRF) (Garrett et al., 2017) environment. We compare the FXP agent with the hardest built-in AI, an imitation-learning agent (Garrett et al., 2017), and a PSRO-based agent (K
Similarly, for mixed cooperative-competitive games, we can define \(\text{BR}_{\text{team}}(\sigma_{t_{-i}})=\arg\max_{\pi_{t_{i}}\in\Pi_{t_{i}}}\mathbb{E}_{\pi_{t_{-i}}\sim\sigma_{t_{-i}}}[U_{t_{i}}(\pi_{t_{i}},\pi_{t_{-i}})]\) to be the _team best response (team BR)_, where \(\sigma_{t_{-i}}=(\sigma_{-i,1},\cdots,\sigma_{-i,N})\) is the opponent team's joint mixed strategy and \(\Pi_{t_{i}}=\times_{n=1}^{N}\Pi_{i,n}\) is the set of all joint pure strategies of team \(i\in\{1,2\}\). We use _local Nash equilibrium (local NE)_ to refer to a mixed strategy that satisfies Equation (3) in mixed cooperative-competitive games, and use _global Nash equilibrium (global NE or team NE)_ to refer to a mixed strategy \(\sigma=(\sigma_{t_{1}},\sigma_{t_{2}})\) such that
\[\sigma_{t_{i}}=\text{BR}_{\text{team}}(\sigma_{t_{-i}}),\ \forall i\in\{1,2\}. \tag{4}\]
It is worth noting that a global NE is always a local NE, but a local NE is not necessarily a global NE. The goal of mixed cooperative-competitive games is to learn a global NE, and the metric to evaluate a mixed strategy profile \(\sigma\) is _team exploitability_ \(e_{\text{team}}(\sigma)=\sum_{i\in\{1,2\}}U_{t_{-i}}(\text{BR}_{\text{team}}(\sigma_{t_{i}}),\sigma_{t_{i}})\), which can be roughly interpreted as the "distance" from \(\sigma\) to a global NE. Note that the local NE defined here is different from the term that refers to the locality in the action space of continuous games in other works like (Sandras et al., 2018).
### Extension to MARL
A Markov game (MG) (Sandras et al., 2018) is defined as a tuple \((K,\mathcal{S},\mathcal{A},\mathcal{O},O,r,P,\gamma)\). Here, \(K\) is the number of agents, \(\mathcal{S}\) is the state space, \(\mathcal{A},\mathcal{O}\) are the action space and observation space shared across all agents, and \(\gamma\in[0,1]\) is the discount factor. Given states \(s,s^{\prime}\in\mathcal{S}\) and joint action \(\mathbf{a}\in\mathcal{A}^{K}\), \(\mathbf{o}_{k}=O_{k}(s)\) and \(r_{k}(s,\mathbf{a})\) are the local observation and reward of agent \(k\), and \(P(s,\mathbf{a},s^{\prime})\) is the transition probability from state \(s\) to \(s^{\prime}\) under joint action \(\mathbf{a}\). Each agent uses a policy \(\pi_{k}(\mathbf{a}_{k}|\mathbf{o}_{k})\) to produce its action \(\mathbf{a}_{k}\) from the local observation \(\mathbf{o}_{k}\), and the expected return of agent \(k\) under joint policy \((\pi_{k},\pi_{-k})\) is \(J_{k}(\pi_{k},\pi_{-k})=\mathbb{E}_{s^{t},\mathbf{a}^{t}}[\sum_{t}\gamma^{t}r_{k}(s^{t},\mathbf{a}^{t})]\). Many popular MARL algorithms like MAPPO (Mair et al., 2017) follow the _decentralized learning_ framework, i.e., each agent optimizes its own return by treating other agents as part of the environment. Given the other agents' joint policy \(\pi_{-k}\), these methods aim to find the optimal policy \(\pi_{k}^{*}\) w.r.t.
\[\pi_{k}^{*}=\operatorname*{arg\,max}_{\pi_{k}}J_{k}(\pi_{k},\pi_{-k}). \tag{5}\]
For complex games with prohibitively large policy space, MARL is often combined with _empirical game-theoretic analysis (EGTA)_ to construct a higher-level normal-form game, and apply game-theoretic analysis in this meta-game to guide the learning of new policies. In the normal-form meta-game, the pure strategies become _policies_ learned by MARL algorithms, the set of current policies \(\Pi\) is also called a _population_, and the mixed strategy \(\sigma\) is called a _meta-policy_. An _empirical payoff matrix_\(U\) can be constructed by simulating in the original game for all joint policy combinations. Since the population can get larger with more policies learned and is no longer fixed, we use \(\text{BR}(\sigma\Pi)\) to denote the BR of population \(\Pi\) with meta-policy \(\sigma\) and \(\text{BR}(\pi)\) to denote the BR of policy \(\pi\). Given a joint policy \(\pi=(\pi_{k},\pi_{-k})\), the utility function of agent \(k\) is its expected return in the original game \(U_{k}(\pi)=J_{k}(\pi_{k},\pi_{-k})\), and the BR of \(\Pi_{-k}\) with \(\sigma_{-k}\) becomes
\[\text{BR}(\sigma_{-k}\Pi_{-k})=\operatorname*{arg\,max}_{\pi_{k}}\mathbb{E}_{\pi_{-k}\sim\sigma_{-k}}[J_{k}(\pi_{k},\pi_{-k})], \tag{6}\]
which is equivalent to Equation (5) by sampling joint policy \(\pi_{-k}\) according to the meta-policy \(\sigma_{-k}\) at the beginning of each episode. Therefore, we can use MARL algorithms as approximate BR and team BR oracles in the meta-game.
### Self-play
Self-play learns a single policy by training against itself. Using RL as the approximate BR oracle, SP starts with a randomly initialized policy and repeatedly updates the policy toward the BR of itself. SP is simple and efficient in learning. Fictitious Play (FP) extends SP by training a policy against its time-averaged policy \(\overline{\pi}^{FP}\) rather than \(\pi^{FP}\) itself, and the time-averaged policy of FP is guaranteed to converge to a NE. The pseudocode of SP is listed in Algorithm 1.
```
Input: Randomly initialized policy \(\pi^{SP}\)
for many episodes do
    Update \(\pi^{SP}\) toward \(\text{BR}(\pi^{SP})\)
Output: Policy \(\pi^{SP}\)
```
**Algorithm 1** Self-Play (SP)
For mixed cooperative-competitive games, one can use MARL to find the approximate team BRs. However, with decentralized learning, each agent optimizes its own policy rather than the team one, easily yielding a suboptimal joint policy. Therefore, it is very likely that the SP policy converges to a local NE where no single agent can improve unilaterally, but the team policy can still get a higher return by jointly optimizing the policies towards a global NE. We present a concrete example with detailed analysis in Sec. 4.
Figure 1. Frameworks of SP, PSRO, and FXP. SP learns a single policy against itself. PSRO learns a policy population by iteratively adding a best response to the current population. FXP learns a main policy and a counter population. The main policy is trained by fictitious self-play and cross-play. The counter policies are learned against past versions of main policy.
### Policy-Space Response Oracles
Instead of training a single policy, PSRO iteratively trains a population of policies to find the NE of large games. PSRO starts with an initial population \(\Pi^{1}=\{\pi^{1}\}\) with a single random policy. In iteration \(t\), an empirical payoff matrix \(U\) is computed by simulations using policies in the current population \(\Pi^{t}\). The payoff matrix \(U\) is then used by a meta-solver to calculate the meta-policy \(\sigma\) of population \(\Pi^{t}\), and a new policy \(\pi^{t+1}\) is trained to be the BR of population \(\Pi^{t}\) with meta-policy \(\sigma\). The new policy is added to the population and PSRO continues to the next iteration. PSRO generalizes many algorithms by using different meta-solvers. FP can be regarded as an instance of PSRO with uniform solver which assigns equal probability to each policy. DO is also an instance of PSRO with Nash solver which uses the NE of the restricted game as the meta-policy. Other meta-solvers include projected replicator dynamics (PRD) solver (Krishna et al., 2017), rectified Nash solver (Krishna et al., 2017), \(\alpha\)-Rank solver (Krishna et al., 2017), etc. The pseudocode of PSRO is listed in Algorithm 2.
PSRO is guaranteed to converge to a NE in two-player games with proper meta-solvers, and can be directly extended to mixed cooperative-competitive games by using a team BR oracle. This is because in each iteration, the BR policy is trained against a mixture of _fixed_ policies yielding a fully cooperative learning problem with stationary opponents. However, to avoid struggling in poor local sub-optimum, PSRO has to train the policy from scratch in each iteration in order to find the global best response. In addition, PSRO may have to fully explore the policy space to cover all the strategy modes before converging to a global NE. Taking Rock-Paper-Scissors (RPS) as an example, PSRO has to cover all three modes to find the NE \((1/3,1/3,1/3)\). These issues make PSRO very inefficient in complex games with a huge policy space.
```
Input: Initial population with random policy \(\Pi^{1}=\{\pi^{1}\}\)
for \(t=1,2,\cdots,T\) do
    Update payoff matrix \(U\) by game simulations
    \(\sigma\leftarrow\text{meta-solver}(U)\)
    for many episodes do
        Update \(\pi^{t+1}\) toward \(\text{BR}(\sigma\Pi^{t})\)
    \(\Pi^{t+1}\leftarrow\Pi^{t}\cup\{\pi^{t+1}\}\)
Output: Population \(\Pi^{t+1}\) and meta-policy \(\sigma\)
```
**Algorithm 2** Policy-Space Response Oracles (PSRO)
## 4. A Motivating Example
Here we introduce an illustrative mixed cooperative-competitive game, i.e., a normal-form game with two competitive teams of \(N\) homogeneous agents. Each agent can choose from two actions \(0\) or \(1\). The utility function \(U\) has \(U(x,y)=-U(y,x)\) and satisfies
\[U(0_{N},1_{N})=C,\qquad U(0_{N},y)=\epsilon\sum_{i=1}^{N}y_{i}\quad\forall y\neq 1_{N},\qquad U(x,y)=\sum_{i=1}^{N}(x_{i}-y_{i})\quad\forall x,y\neq 0_{N}.\]
Here the parameters \(C,\epsilon\) satisfy \(0<\epsilon\ll C\ll N\). When there is no ambiguity, we use \(\mathbf{0},\mathbf{1}\) to represent the joint policies in which the corresponding agents all play \(0\) or all play \(1\), respectively. Clearly, the game has a unique global NE \((\mathbf{0},\mathbf{0})\) and a suboptimal local NE \((\mathbf{1},\mathbf{1})\).
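For illustration, a direct implementation of this utility is sketched below; the concrete values of \(C\) and \(\epsilon\) are placeholders chosen to respect \(0<\epsilon\ll C\ll N\) for reasonably large \(N\).

```python
# Sketch of the example utility; x and y are 0/1 tuples of length N.
def U(x, y, C=10.0, eps=0.01):
    if not any(x):                        # x = 0_N
        return C if all(y) else eps * sum(y)
    if not any(y):                        # y = 0_N: use antisymmetry U(x, y) = -U(y, x)
        return -U(y, x, C, eps)
    return float(sum(x) - sum(y))         # both joint actions differ from 0_N
```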
Let the learning policy and the opponent policy be \(\pi,\mu\), respectively. Thus for self-play, \(\mu^{t}=\pi^{t}\), and for PSRO and our counter policy, \(\mu\) is a fixed policy against which the best response is learned.
Definition 1 (Q-function).: _At each time \(t\), the Q-function \(Q_{i}^{t}(a_{i})=\mathbb{E}_{\mathbf{x}_{-i}\sim\pi_{-i}^{t},\,\mathbf{y}\sim\mu^{t}}U([a_{i},\mathbf{x}_{-i}],\mathbf{y})\) is computed for each agent \(1\leq i\leq N\) and action \(a_{i}\in\{0,1\}\)._
### Self-play and Its Variants
We show that, under a mild assumption, typical SP-based methods with decentralized learning no longer converge to a global NE.
Definition 2 (Preference Preservation).: _We say a learning process is preference preserving if the relative ratio of choosing action \(x\) over \(y\) keeps increasing whenever all past observed Q-values of \(x\) are larger than those of \(y\), and the ratio updating rules are monotone in \(Q\). To be more specific,_
\[\forall t^{\prime}\leq t,\ Q_{i}^{t^{\prime}}(x)\geq Q_{i}^{t^{\prime}}(y)\Rightarrow\frac{\pi_{i}^{t+1}(x)}{\pi_{i}^{t+1}(y)}\geq\frac{\pi_{i}^{t}(x)}{\pi_{i}^{t}(y)} \tag{7}\]
_and \(\forall t\geq 0,\ 1\leq i\leq N,\ x,y\in\Pi_{i},\ \exists\) monotone non-decreasing \(f_{i,x,y}^{t}\) such that_
\[\forall t^{\prime}\leq t,\ z\notin\{x,y\},\ \pi_{i}^{t^{\prime}}(z)=0\Rightarrow\frac{\pi_{i}^{t+1}(x)}{\pi_{i}^{t+1}(y)}=f_{i,x,y}^{t}\left(\frac{\pi_{i}^{t}(x)}{\pi_{i}^{t}(y)},\{Q_{i}^{s}(x)-Q_{i}^{s}(y)\}_{s=0}^{t}\right) \tag{8}\]
This property holds for many SP-based algorithms, including FSP (Krishna et al., 2017; Krishna et al., 2017), Follow the Regularised Leader (Krishna et al., 2017), Replicator Dynamics (Krishna et al., 2018), Multiplicative Weights Update (Krishna et al., 2017), Counterfactual Regret Minimization (Krishna et al., 2017), or any softmax variants of them. Although some of them are proven to converge to an NE in two-player zero-sum games, we show in the following theorem that in the mixed cooperative-competitive game we proposed, none of them converges to the global NE \((\mathbf{0},\mathbf{0})\).
Theorem 4.1.: _Any algorithm with preference preservation will not produce a policy \(\pi\) converging to the global NE if the initialized policy \(\pi^{0}\) satisfies_
\[\forall i,\pi_{-i}^{0}(\mathbf{0})\leq\frac{1}{N+1+2C+\epsilon}.\]
_When the policy is randomly initialized, there is a probability of at least \(1-\exp\left(-\Omega(N)\right)\) that the above condition is satisfied and the policy does not converge to the global NE._
We list the proof in Appendix A. The obstacle to learning towards the global NE largely comes from partial observation, as each agent only considers its local Q-function. Despite the challenge of cooperative learning, we will show that learning against a fixed opponent rather than the varying \(\pi^{t}\) does mitigate the problem.
### Playing Against a Fixed Opponent
In the learning of PSRO's best response, the opponent policy \(\mu\) is fixed. Although the opponent policy can be dependent on the
algorithm, our analysis is based on the opponent policy \(\mu\in\{\mathbf{0},\mathbf{1}\}\), since the game has only two local NEs \((\mathbf{0},\mathbf{0}),(\mathbf{1},\mathbf{1})\).
Definition 3 (Good Initialization).: _A good initialization \(\pi^{0}\) regarding a certain learning configuration enables the learned policy to converge to the global NE._
**Remark.** We omit the discussion of the existence of convergence or of convergence to other policies here, as in most cases the policy will converge to either \(\mathbf{0}\) or \(\mathbf{1}\).
Therefore, a better learning algorithm should have a larger set of good initializations. We now compare \(S_{\text{SP}}\) (self-play) with \(S_{\mu}\) (the fixed opponent \(\mu\in\{\mathbf{0},\mathbf{1}\}\)).
Theorem 4.2.: _For \(\mu\in\{\mathbf{0},\mathbf{1}\}\), when the same preference-preserving algorithm is applied, we must have \(S_{\text{SP}}\subseteq S_{\mu}\). Moreover, learning against a fixed \(\mu\) strictly enlarges the set of good initializations, as \(S_{\mu}\backslash S_{\text{SP}}\neq\varnothing\)._
The proof is in Appendix A. Theorem 4.2 intuitively shows that cooperative learning with a fixed opponent can be much easier. Hence, PSRO will have a much higher chance to find a better joint policy than SP.
## 5. Method
As the motivating example shows, SP-based algorithms can fail to find the global NE in mixed cooperative-competitive games because of decentralized learning. PSRO mitigates this issue by iteratively training against fixed opponents. However, PSRO can be very inefficient in complex games with a large policy space. In this section, we therefore aim to bridge the gap between SP and PSRO.
### Fictitious Cross-Play
Fictitious Cross-Play (FXP) trains an SP-based main policy and a BR-based counter population. The main policy aims to find the global NE of the game and is trained by fictitious self-play and cross-play against the counter population. To prevent the main policy from getting stuck in local NEs, an auxiliary counter population is iteratively trained to be the best responses to past versions of the main policy. The counter population is able to find better joint policies that exploit the past main policies because it is trained against fixed opponents, leading to a fully cooperative learning problem. The learned counter policies are then used as opponents for the main policy in cross-play, which helps it get out of local NEs towards the global NE. For ease of notation, we use main population to refer to the set of all past _checkpoints_ of the main policy.
FXP starts with randomly initialized policies \(\pi^{1}_{M},\pi^{1}_{C}\), and the initial main population and counter population are \(\Pi^{1}_{M}=\{\pi^{1}_{M}\},\Pi^{1}_{C}=\{\pi^{1}_{C}\}\). Considering the restricted game where the row player's policies are \(\Pi_{M}\) and the column player's policies are \(\Pi_{C}\), we denote the payoff matrix of this restricted game as \(U_{M\times C}\). Since the game is symmetric, we also have a joint population \(\Pi_{M+C}=\Pi_{M}\cup\Pi_{C}\), and the corresponding payoff matrix is denoted as \(U_{M+C}=U_{(M+C)\times(M+C)}\). In each iteration, a new main policy \(\pi^{t+1}_{M}\) and counter policy \(\pi^{t+1}_{C}\) are trained simultaneously against different opponents. The main policy is trained by self-play, fictitious play against the main population \(\Pi^{t}_{M}\), and cross-play against the counter population \(\Pi^{t}_{C}\). The probability of self-play is determined by a hyperparameter \(\eta\), and the meta-policy \(\sigma_{M+C}\) used to sample opponents from main and counter populations is computed by a meta-solver on payoff \(U_{M+C}\). Similarly, a meta-policy \(\sigma_{M}\) for the row player in the restricted game with payoff \(U_{M\times C}\) is computed, and the counter policy is trained to be the best response of the main population \(\Pi^{t}_{M}\) with meta-strategy \(\sigma_{M}\). The new main and counter policies are added to their populations after convergence or a fixed number of training steps, and the payoff matrices \(U_{M+C},U_{M\times C}\) are updated by game simulations. The pseudocode of FXP is listed in Algorithm 3.
```
Input: Initial main population and counter population with random policies \(\Pi^{1}_{M}=\{\pi^{1}_{M}\},\Pi^{1}_{C}=\{\pi^{1}_{C}\}\)
for \(t=1,2,\cdots,T\) do
    Update \(U_{M+C},U_{M\times C}\) by game simulations
    \(\sigma_{M+C}\leftarrow\text{meta-solver}_{M}(U_{M+C})\)
    \(\sigma_{M},\sigma_{C}\leftarrow\text{meta-solver}_{C}(U_{M\times C})\)
    for many episodes do
        Update \(\pi^{t+1}_{M}\) toward \(\text{BR}(\eta\pi^{t+1}_{M}+(1-\eta)\sigma_{M+C}\Pi^{t}_{M+C})\)
        Update \(\pi^{t+1}_{C}\) toward \(\text{BR}(\sigma_{M}\Pi^{t}_{M})\)
    \(\Pi^{t+1}_{M}\leftarrow\Pi^{t}_{M}\cup\{\pi^{t+1}_{M}\}\)
    \(\Pi^{t+1}_{C}\leftarrow\Pi^{t}_{C}\cup\{\pi^{t+1}_{C}\}\)
Output: Populations \(\Pi^{t+1}_{M},\Pi^{t+1}_{C}\) and meta-policy \(\sigma_{M+C}\)
```
**Algorithm 3** Fictitious Cross-Play (FXP)
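To make the training loop concrete, the per-episode opponent selection of Algorithm 3 can be sketched as follows; the function and variable names are illustrative and do not refer to the authors' implementation.

```python
import random

# `main_pop` / `counter_pop`: lists of frozen policies; `meta_mc` / `meta_m`:
# probability vectors over the joint (main + counter) and main populations.
def sample_opponents(main_policy, main_pop, counter_pop, meta_mc, meta_m, eta=0.2):
    # Opponent for the main policy: self-play with probability eta, otherwise
    # a policy from the joint population drawn according to meta_mc.
    if random.random() < eta:
        main_opponent = main_policy
    else:
        main_opponent = random.choices(main_pop + counter_pop, weights=meta_mc, k=1)[0]
    # Opponent for the counter policy: a past main policy drawn according to meta_m.
    counter_opponent = random.choices(main_pop, weights=meta_m, k=1)[0]
    return main_opponent, counter_opponent
```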
### Practical Implementation
For large real-world games, we combine FXP with neural networks and use a popular MARL method, such as MAPPO (Srivastava et al., 2014) as the approximate BR oracle. In iteration \(t\), we run the current main policy and counter policy against different opponents to collect training samples. When an episode starts, the opponent for main policy is set to itself with probability \(\eta\), otherwise is sampled from the joint population \(\Pi^{t}_{M+C}\) according to meta-policy \(\sigma_{M+C}\). Similarly, the opponent for counter policy is sampled from the main population \(\Pi^{t}_{M}\) according to meta-policy \(\sigma_{M}\). The main and counter policies are then updated using MARL algorithms based on these samples. This procedure is repeated for many episodes until convergence or a maximum number of steps. Then the policies are added to the main and counter population to continue to the next FXP iteration.
To accelerate training in complex games, we initialize the main policy \(\pi^{t+1}_{M}\) in iteration \(t+1\) using policy \(\pi^{t}_{M}\) from the previous iteration. This is much more efficient than training from scratch, since the current main policy is already a best response to most of the new target opponents. On the other hand, the counter policy in each iteration remains to be trained from scratch or from an _unconverged_ early checkpoint. This is to avoid the situation where both main and counter policies are trapped in the same local sub-optimum and fail to find an approximate best response.
In practice, when the population size is large, solving meta-policies can be computationally expensive for commonly used meta-solvers. For efficient training, we use prioritized sampling, which assigns a score to each opponent and samples opponents with probabilities proportional to their scores. For the main policy, we use the opponents' win rates as their scores
\[s_{\pi_{M}}(\pi)=P\left(\pi\text{ wins }\pi_{M}\right), \tag{9}\]
which makes the main policy focus on the hardest opponents and try to overcome them. For counter policy, since it is learned from scratch or from an early checkpoint, we set the opponents' scores to be the product of their win rate and lose rate
\[s_{\pi_{C}}(\pi)=P\left(\pi\text{ wins }\pi_{C}\right)\cdot P\left(\pi_{C} \text{ wins }\pi\right), \tag{10}\]
which favors policies of about the same level as the counter policy and forms a curriculum to learn from easy to hard.
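A minimal sketch of turning these scores into sampling probabilities is shown below; the win-rate estimates would come from recent evaluation games, and all names are illustrative.

```python
# Prioritized opponent sampling following Eqs. (9) and (10).
# opp_beats_main[i]   : estimated P(opponent i wins against the main policy)
# opp_beats_counter[i]: estimated P(opponent i wins against the counter policy)
# counter_beats_opp[i]: estimated P(counter policy wins against opponent i)
def opponent_probs(opp_beats_main, opp_beats_counter, counter_beats_opp):
    s_main = list(opp_beats_main)                                              # Eq. (9)
    s_counter = [w * l for w, l in zip(opp_beats_counter, counter_beats_opp)]  # Eq. (10)
    z_m = sum(s_main) or 1.0
    z_c = sum(s_counter) or 1.0
    return [s / z_m for s in s_main], [s / z_c for s in s_counter]
```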
### Connections to SP and PSRO
FXP can be regarded as an extension of both SP and PSRO with the hyperparameter \(\eta\) used as a trade-off between efficiency and convergence. If we set \(\eta=1\), the main policy becomes a pure self-play policy and has no interaction with its past versions or the counter population. The counter policy will become the BR of the time average of the SP policy with a uniform meta-solver. If we set \(\eta=0\), both main and counter policies are trained against fixed opponents, which is conceptually similar to PSRO. However, even when \(\eta=0\), FXP is different from PSRO in two ways. First, FXP's meta-policies in each iteration are adaptive by prioritized sampling, while the meta-policy of PSRO is fixed. Second, the main policy of FXP is trained continuously and never reset (i.e., never restarted from scratch), while the new policy in each PSRO iteration is reset to a random policy and trained from scratch. Note that it is possible to turn off reset in PSRO by warmstarting a new policy from previous ones. However, PSRO requires a _global_ best response policy. Learning best responses with warmstart may easily get trapped in a local sub-optimum or a local NE and fail to sufficiently explore the policy space. We empirically find setting \(\eta=0.2\) works well in many environments and use it as the default value in FXP.
## 6. Experiment
In this section, we demonstrate the effectiveness of FXP in various mixed cooperative-competitive games. We first study matrix games, where the payoff and team exploitability can be calculated exactly. FXP converges to the global NE while other methods fail or use much more training steps. Then we use MAPPO (Marcus et al., 2016) as an approximate BR oracle and consider a gridworld environment MAgent Battle (Marcus et al., 2016). FXP achieves a lower team exploitability and a higher Elo rating than other MARL baselines for NE. Finally, with large-scale training, we use FXP to solve the challenging 11-vs-11 multi-agent full game in Google Research Football (GRF) (Krishnaman et al., 2017). We compare our methods with SOTA models including the hardest built-in AI, PSRO w. BD&RD (Krishnaman et al., 2017) agent, and Tikick agent (Tikik et al., 2017). FXP achieves over 94% win rate against available models with a significant goal difference. Experiments on the motivating example, more ablation studies, and training details can be found in Appendix B.
### Matrix Games
We introduce two mixed cooperative-competitive matrix games to visualize the learning dynamics of FXP, SP, PSRO and their variants and compare their performance.
_Team Rock-Paper-Scissors (team RPS) game._ This game extends the classic \(2\)-player zero-sum game Rock-Paper-Scissors (RPS) to a \(4\)-player team competitive setting. The \(4\) players are divided into \(2\) teams and play RPS between the teams. Each player can choose either action \(0\) or action \(1\). If both players in the same team choose action \(0\), the team plays Rock; if both choose \(1\), the team plays Scissors; otherwise, the team plays Paper. Clearly, this game has a global NE where each team chooses Rock, Paper, and Scissors with equal probability. It also has a local NE where both players in the team choose action \(1\) and the team always plays Scissors. This is because when the teammate chooses action \(1\), choosing action \(0\) would make the team play Paper, which is beaten by the opposing team's move Scissors. However, the \(2\) players can jointly change their actions from \(1\) to \(0\) to play Rock and exploit the Scissors.
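For concreteness, the following minimal Python sketch (names ours) implements the team RPS payoff as described above; it reproduces the local NE at the joint action \((1,1)\), the losing unilateral deviation, and the profitable joint deviation to Rock.

```python
def team_move(a1, a2):
    """Map a team's two binary actions to its RPS move."""
    if a1 == 0 and a2 == 0:
        return "rock"
    if a1 == 1 and a2 == 1:
        return "scissors"
    return "paper"

def team_payoff(team_a, team_b):
    """Zero-sum payoff for team A: +1 win, 0 tie, -1 loss."""
    beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
    move_a, move_b = team_move(*team_a), team_move(*team_b)
    if move_a == move_b:
        return 0.0
    return 1.0 if beats[move_a] == move_b else -1.0

print(team_payoff((1, 1), (1, 1)))  #  0.0  local NE: Scissors vs Scissors
print(team_payoff((0, 1), (1, 1)))  # -1.0  unilateral deviation plays Paper and loses
print(team_payoff((0, 0), (1, 1)))  # +1.0  joint deviation plays Rock and wins
```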
We run SP, FSP (Krishnaman et al., 2017), \(\text{PSRO}_{\text{Uniform}}\)(Krishnaman et al., 2017), and FXP with uniform meta-solvers on the team RPS game and use policy gradient to optimize the policy for the same number of steps. The step count of FXP includes both main and counter policies for a fair comparison. The learning dynamics of each algorithm are shown in Figure 2. The red star in each subfigure is the global NE of the team RPS game, the grey lines in the SP and FSP subfigures are the traces of the training policies and the green lines are the traces of their time-averaged policies, and the colored lines in the PSRO and FXP subfigures are the mixed policies of the current populations. As shown in the figure, SP and FSP converge to the local NE of Scissors and get stuck there forever, PSRO cycles around the global NE and slowly converges to it, and FXP quickly converges to the global NE. We also run PSRO without reset on the game and it converges to the local NE as SP does. This shows that PSRO has to train policies from scratch in each iteration to avoid getting stuck in local NEs.
_Seek-attack-defend (SAD) game._ Now we propose a matrix game with a larger action space so that we can quantitatively compare different methods. A seek-attack-defend (SAD) game consists of two teams of \(N\) agents, each with an action space containing \(A+1\) seeking actions \(\{0,1,2,...,A\}\) and two special actions \(\{attack,\textit{defend}\}\).
Figure 2. Learning dynamics of SP, FSP, PSRO without and with reset (i.e., training from scratch), and FXP in the team RPS game. FXP quickly converges to the global NE (red star). Each algorithm is trained for the same number of steps. We count the steps for both main and counter policies in FXP for a fair comparison.
Each team seeks to obtain as much total reward as possible by cooperatively choosing seeking actions from \(\{0,1,2,...,A\}\). The reward-level \(L\) is defined as the minimum seeking action if all of the team's seeking actions differ by at most one; otherwise, the reward-level \(L\) is zero. The total reward \(R\) is then aggregated over all \(R_{x}\) for seeking actions \(x\) with \(L\leq x\leq L+1\). Therefore, teammates must learn to perform the same seeking action to receive the reward, and to seek towards \(A\), since the reward \(R_{x}\) increases with \(x\) (\(R_{0}=0,R_{i}<R_{i+1}\)).
Besides obtaining rewards, the team must also guard them. If two agents of the other team use the _attack_ action and none of the teammates uses _defend_, the team loses all its reward. The final utility of the SAD game is the difference between the two teams' rewards after attacks and defenses are resolved. Therefore, each team must properly designate some agents to attack and defend while letting the others seek the highest reward \(R_{A}\).
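The following minimal Python sketch encodes our reading of these rules (all names and the example reward values are ours); it illustrates how a team that designates two attackers can wipe out an undefended opposing team, and how the all-seek-\(A\) profile yields zero utility.

```python
def team_reward(actions, R):
    """Seeking reward of one team (our reading of the rules above)."""
    seeks = [a for a in actions if isinstance(a, int)]
    if not seeks:
        return 0.0
    level = min(seeks) if max(seeks) - min(seeks) <= 1 else 0
    return float(sum(R[x] for x in seeks if level <= x <= level + 1))

def sad_utility(team_a, team_b, R):
    """Zero-sum utility for team A after attacks and defenses are resolved."""
    r_a, r_b = team_reward(team_a, R), team_reward(team_b, R)
    if team_b.count("attack") >= 2 and "defend" not in team_a:
        r_a = 0.0
    if team_a.count("attack") >= 2 and "defend" not in team_b:
        r_b = 0.0
    return r_a - r_b

R = [0, 1, 2, 4, 8]                                              # A = 4, R_0 = 0, increasing
print(sad_utility([4, 4, "attack", "attack"], [4, 4, 4, 4], R))  # 16.0: attacks wipe out team B
print(sad_utility([4, 4, 4, 4], [4, 4, 4, 4], R))                #  0.0: the all-seek-A local NE
```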
Here we show the learning curves of exploitability for six SP-based algorithms: self-play (SP), fictitious self-play (FSP), follow the regularized leader (FoReL) (Srivastava et al., 2014), replicator dynamics (Srivastava et al., 2014), multiplicative weights update (MWU) (Srivastava et al., 2014), and counterfactual regret minimization (CFR) (Beng et al., 2015). Although some of them are guaranteed to converge to NE in two-player zero-sum games, none of them converge to the global NE in the SAD game, as shown in Figure 2(a). The reason is the existence of a local NE in which all teammates choose the highest seeking action \(A\), and SP-based algorithms almost always get trapped in this local NE.
Despite SP's poor performance, FXP and PSRO provide better solutions. We compare FXP with \(\text{PSRO}_{\text{Uniform}}\) and \(\text{PSRO}_{\text{Nash}}\). The results in Figure 2(b) show that both FXP and \(\text{PSRO}_{\text{Nash}}\) converge to the global NE, and FXP requires far fewer steps. (The training steps of FXP include the cost of training counter policies for a fair comparison.) The warm-start versions of PSRO do not re-initialize the policy at the beginning of each iteration and thus degenerate to performance similar to SP.
The exploitability curves of the (main) policies (NOT the meta-policies) in Figure 2(c) explain the advantage of FXP over PSRO. FXP can utilize the knowledge of former policies and continue to update from the last iteration, while PSRO must learn skills (e.g., the cooperation of choosing the same seeking action) from scratch in each iteration. This advantage is amplified further in larger-scale games where computing even one RL best response is non-trivial.
### MAgent Battle
MAgent Battle is a gridworld game where a red team of \(N\) agents fights against a blue team. At each step, agents can move to one of the 12 nearest grids or attack one of the 8 grids surrounding them. Each agent has a maximum hp of 10, loses 2 hp when attacked by an opponent agent, and slowly recovers 0.1 hp at the end of each step. An agent is killed if its hp reaches zero and does not respawn. The game terminates when all agents in one team are killed or a maximum number of steps is reached. Agents in the same team get a reward of 0.1 or 10 if an opponent agent is attacked or killed, respectively. To make the game zero-sum between teams, agents are also penalized by 0.1 and 10 if a teammate (or they themselves) is attacked or killed. A good strategy in this game is to cooperatively attack the same opponent with teammates and kill opponents one by one to build an advantage in the number of agents alive.
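A minimal sketch of the zero-sum team reward described above is given below (all names are ours); the symmetric penalty term is what keeps the game zero-sum between the two teams.

```python
ATTACK_REWARD, KILL_REWARD = 0.1, 10.0

def team_step_reward(opp_attacked, opp_killed, team_attacked, team_killed):
    """Per-step team reward; the symmetric penalty keeps the game zero-sum."""
    gain = ATTACK_REWARD * opp_attacked + KILL_REWARD * opp_killed
    loss = ATTACK_REWARD * team_attacked + KILL_REWARD * team_killed
    return gain - loss

# Example: 3 hits and 1 kill on opponents, 1 hit taken by a teammate.
print(team_step_reward(opp_attacked=3, opp_killed=1, team_attacked=1, team_killed=0))  # 10.2
```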
We run SP, FSP, Neural Replicator Dynamics (NeuRD) (Srivastava et al., 2014), \(\text{PSRO}_{\text{Nash}}\), \(\text{PSRO}_{\text{Uniform}}\), Online Double Oracle (ODO) (Beng et al., 2015), and FXP with MAPPO in the 3-vs-3 MAgent Battle game. Since the exploitability cannot be exactly calculated in this game, we estimate the approximate exploitability of the final policies or populations of the different algorithms by training approximate BRs against them. We also use Elo ratings (Srivastava et al., 2014) to evaluate the relative strength of different agents. The averaged results over 3 seeds are shown in Table 1. Notably, FXP agents achieve the lowest exploitability and the highest Elo rating.
We also visualize the behaviours of agents trained by different algorithms in Figure 4. SP converges to a defensive policy in which agents stay at the edge of the map and keep attacking in the direction of the opponents, but never move toward them. This is a local NE because if only one agent tries to move and attack the opponents, it will face a dangerous 1-vs-3 situation and easily get killed. However, it is still possible to defeat the opponents by cooperatively attacking them with all teammates. On the other hand, PSRO agents are more aggressive because they always try to
Figure 3. Results on seek-attack-defend (SAD) games. The evaluation metric is exploitability, defined as the sum of the (non-negative) improvements from replacing the current policy with the following three strategies (the three supports of the global NE): (1) all seeking \(A\); (2) 2 _attack_ + (\(N-2\)) seeking \(A\); and (3) 1 _defend_ + (\(N-1\)) seeking \(A\). Each step is either computing a best response or updating the Q-function, depending on the algorithm used.
exploit a fixed population and usually overfit to a specific way of attacking. A global NE could be found if all possible attacking strategies were enumerated. However, even in this simple gridworld game, the policy space is enormous, making PSRO methods very inefficient. FXP agents learn an approximate global NE, which is to wait and jointly attack. This policy exploits aggressive opponents by waiting and attacking first when the opponents try to get close enough. When facing defensive opponents, FXP agents sometimes wait forever until a tie, and sometimes wait and then take the initiative to jointly attack the opponents.
### Google Research Football
Google Research Football (GRF) is a physics-based simulation environment adapted from popular football video games. Each agent controls a player in the game and has to learn how to dribble the ball, cooperate with teammates to pass the ball and overcome the opponents' defense to score goals. We consider the GRF 11-vs-11 full game, which simulates a 3000-step complete football game with standard rules. The long-time horizon, enormous policy spaces, and mixed cooperative-competitive nature make it a challenging problem for MARL algorithms. We use FXP with MAPPO to solve this problem and compare with existing SOTA models.
Because the game is too complex, it is impossible to exactly calculate or approximately estimate the exploitability of a policy or a population. As an alternative approach, we evaluate FXP and other models by playing against a set of unseen reference policies and compare their performance. We use GRF's built-in models with different levels as the reference policies and compare FXP with SOTA models including the hardest built-in AI, a PSRO-based agent, PSRO w. BD&RD (Krishnan et al., 2017), and an imitation learning agent _Tikick_ (Krishnan et al., 2017). Note that since PSRO w. BD&RD (Krishnan et al., 2017) never released their code or model, we directly report the original numbers from their paper. The model of _Tikick_ is released, and our evaluation result of _Tikick_ is consistent with the paper (Krishnan et al., 2017). The results are shown in Figure 5, where FXP achieves the largest goal difference against all reference policies. As a reference, GRF (Krishnan et al., 2017) also reports the performance of BR policies trained directly against built-in AI of different levels. The BR policies achieve average goal differences of 12.83, 5.54, and 3.15 against easy, medium, and hard, respectively. We remark that, although our method has never seen the built-in models during training, FXP achieves results comparable to the BR policies, especially against the medium and hard opponents.
Moreover, football is a non-transitive game like RPS, so good performance against certain opponents does not necessarily mean a strong policy. We also carry out a tournament-style head-to-head evaluation between FXP and the available models, including Tikick and the built-in hard AI. The results are shown in Figure 6, where FXP achieves a dominating performance, with over 94% win rate and at least 2.7 more goals scored per game on average. We remark that the SOTA model Tikick performs both imitation learning on additional offline data and RL fine-tuning, while FXP only uses pure RL training, which suggests the effectiveness of our algorithm.
## 7. Conclusion
In this work, we present a novel algorithm, Fictitious Cross-Play (FXP), to learn global NEs in mixed cooperative-competitive games. FXP trains an SP-based main policy for the global NE and mitigates the issue of getting stuck at local NEs by training a BR-based counter population to continuously exploit the main policy. Experiments in matrix games and gridworld games demonstrate that FXP converges to the global NE quickly and outperforms a series of popular methods for NE. FXP also defeats the SOTA models in the Google Research Football environment with dominant win rates. We hope FXP can bring useful insights to the community towards more effective MARL algorithms.
\begin{table}
\begin{tabular}{l c c} \hline \hline & Exploitability & Elo rating \\ \hline SP & 28.66 (0.80) & 782 \\ FSP & 21.21 (1.87) & 1627 \\ NeuRD & 26.72 (1.43) & 1143 \\ \(\text{PSRO}_{\text{Uniform}}\) & 24.63 (3.35) & 1495 \\ \(\text{PSRO}_{\text{Nash}}\) & 22.54 (1.65) & 1544 \\ ODO & 21.76 (2.19) & 1589 \\ \hline FXP & **10.62 (2.73)** & **1832** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Exploitability and Elo rating of FXP agents and other MARL methods for NE in MAgent Battle game.
Figure 4. Visualization of learned behaviours by different methods in MAgent Battle. FXP learns an approximate global NE, i.e., wait for the chance to jointly attack.
Figure 5. Goal differences of FXP and other models against built-in AI of different levels.
Figure 6. Head-to-head win rate evaluation between FXP, Tikick and built-in hard AI in 11-vs-11 full game.
## Acknowledgments
This research was supported by National Natural Science Foundation of China (No. U19B2019, 62203257, M-0248), Tsinghua University Initiative Scientific Research Program, Tsinghua-Meituan Joint Institute for Digital Life, Beijing National Research Center for Information Science and Technology (BNRist), Beijing Innovation Center for Future Chips and 2030 Innovation Megaprojects of China (Programme on New Generation Artificial Intelligence) Grant No. 2021AAA0150000.
|
2308.01709 | How well can modified gravitational wave propagation be constrained with
strong lensing? | Strong gravitational lensing produces multiple images of a gravitational wave
(GW) signal, which can be observed by detectors as time-separated copies of the
same event. It has been shown that under favourable circumstances, by combining
information from a quadruply lensed GW with electromagnetic observations of
lensed galaxies, it is possible to identify the host galaxy of a binary black
hole coalescence. Comparing the luminosity distance obtained through
electromagnetic means with the effective luminosity distance inferred from the
lensed GW signal would then enable us to constrain alternative theories of
gravity that allow for modified GW propagation. Here we analyze models
including large extra spatial dimensions, a running Planck mass, and a model
that captures propagation effects occurring in a variety of alternative
theories to general relativity. We consider a plausible population of lenses
and binary black holes and use Bayesian inference on simulated GW signals as
seen in current detectors at design sensitivity, to arrive at a realistic
assessment of the bounds that could be placed. We find that, due to the fact
that the sources of lensed events will typically be at much larger redshifts,
this method can improve over bounds from GW170817 and its electromagnetic
counterpart by a factor of $\sim 5$ to $\mathcal{O}(10^2)$, depending on the
alternative gravity model. | Harsh Narola, Justin Janquart, Leïla Haegel, K. Haris, Otto A. Hannuksela, Chris Van Den Broeck | 2023-08-03T12:08:02Z | http://arxiv.org/abs/2308.01709v1 | # How well can modified gravitational wave propagation be constrained with strong lensing?
###### Abstract
Strong gravitational lensing produces multiple images of a gravitational wave (GW) signal, which can be observed by detectors as time-separated copies of the same event. It has been shown that under favourable circumstances, by combining information from a quadruply lensed GW with electromagnetic observations of lensed galaxies, it is possible to identify the host galaxy of a binary black hole coalescence. Comparing the luminosity distance obtained through electromagnetic means with the effective luminosity distance inferred from the lensed GW signal would then enable us to constrain alternative theories of gravity that allow for modified GW propagation. Here we analyze models including large extra spatial dimensions, a running Planck mass, and a model that captures propagation effects occurring in a variety of alternative theories to general relativity. We consider a plausible population of lenses and binary black holes and use Bayesian inference on simulated GW signals as seen in current detectors at design sensitivity, to arrive at a realistic assessment of the bounds that could be placed. We find that, due to the fact that the sources of lensed events will typically be at much larger redshifts, this method can improve over bounds from GW170817 and its electromagnetic counterpart by a factor of \(\sim 5\) to \(\mathcal{O}(10^{2})\), depending on the alternative gravity model.
## I Introduction
Since the first direct detection of gravitational waves (GWs) in 2015, the field of GW physics has been developing rapidly [1]. The network of two Advanced LIGO detectors [2] and one Advanced Virgo detector [3] has observed around 90 GW signals to date [4]. These observations have opened up several previously unexplored research directions. For example, they have led to enhanced tests of general relativity (GR) by providing access to the genuinely strong-field dynamics of spacetime [5], provided a new method for probing the expansion of the Universe [6], and contributed to a better understanding of the formation channels of the binaries and other astrophysical compact objects [7]. As the interferometers' sensitivities improve and new detectors such as KAGRA [8; 9; 10; 11] and LIGO-India [12] join the network, even more events will be observed.
The detector upgrades could enable the detection of new phenomena, such as the gravitational lensing of GWs [13; 14; 15]. The latter occurs when GWs experience deflection due to a massive object, known as the lens, in their path. Recent rate estimates suggest that GW lensing can become detectable at the rate of \(\mathcal{O}(1)\) per year with current detectors at design sensitivity [16; 17; 18; 19; 20; 21; 22]. If the Schwarzschild radius of the lens is much larger than the GW wavelength (i.e. when the geometric optics limit applies), it can split the observed GW signal into multiple copies, also referred to as the lensed images. This phenomenon is called the strong lensing of gravitational waves. The images reach the detector as repeated and time-separated copies of the GW signals that only differ in their amplitudes (due to being magnified/demagnified by the lens), overall phases (due to image inversion along one or two principal axes), and arrival times (as the images travel along trajectories of different length) [23; 24]. By contrast, if the size of the lens is comparable to the wavelength of the GW (referred to as the wave optics limit), the GW can undergo frequency-dependent modulation [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35].
Gravitational lensing has several interesting applications in fundamental physics, cosmology, and astrophysics (see, for example, [36-63]).
Since the images of a strongly lensed GW arrive separated in time and the Earth rotates in between, we can observe the same event multiple times with different detector orientations [64; 65; 66; 67; 68]. Especially with four detectable images1, we may be able to localize the source within \(\mathcal{O}(1)\) square degrees [53; 54]. When the GW source is lensed, we can expect that the electromagnetic (EM) radiation coming from its host galaxy is also lensed, as is widely assumed in cosmography studies [69; 70; 71; 72; 73]. A joint GW+EM analysis can help locate the source's host galaxy once its location is narrowed down to a few square degrees using only GW data. In this step, one reconstructs all the lenses in the region provided by the GW data to find which lens could best produce a GW quadruplet with properties similar to the ones observed; the galaxy that is undergoing lensing by this particular lens is then likely to be the host galaxy of the GW event. This method was proposed and studied in [53; 54]. Once the host galaxy is known, a dedicated spectroscopic or photometric follow-up can lead us to the redshift of the source. By combining the source's redshift with a cosmological model, we can estimate the source's luminosity distance in a way that is unaffected by the anomalous GW propagation [74]. In addition, we can have another, independent measurement of the source's luminosity distance from the GW data, which could be affected by anomalous propagation; by comparing the two distances the anomaly can be discovered or bounded.
Footnote 1: 30% of strongly lensed events are predicted to be quadruplets [22].
Let us denote by \(D_{L}^{\rm EM}\) the luminosity distance derived from the EM redshift measurements and a cosmological model, which we will refer to as the EM luminosity distance. Similarly, let us write \(D_{L}^{\rm GW}\) for the luminosity distance measured from the GW data when assuming an amplitude fall-off proportional to \(1/D_{L}^{\rm GW}\), and call it the GW luminosity distance. In GR, \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) coincide, but in alternative theories of gravity there can be a non-trivial relationship between the two. This relationship will be sensitive both to parameters associated with the deviation from GR, and to the cosmological parameters. For definiteness, in this work we will generally consider a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe with cosmological constant and negligible radiation density, in which case the cosmological parameters are the Hubble constant \(H_{0}\), and the densities of matter and dark energy relative to the critical density, respectively denoted by \(\Omega_{m}\) and \(\Omega_{\Lambda}\). For the purposes of this study, we will fix \(\Omega_{m}\) and \(\Omega_{\Lambda}\) to their values from Planck 2018 [75], whereas \(H_{0}\) will be left free. Note that in the relationship between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) there will be a degeneracy between the deviation parameters and \(H_{0}\)[60]. Thus, bounds on the deviation parameters will be determined by the prior information we have from previous measurements on \(H_{0}\), together with the measurement uncertainty on \(D_{L}^{\rm GW}\). For \(H_{0}\), we could in principle choose a fairly narrow prior range informed by the Planck [76], SHoES [77], or other previous measurements [78]. However, in our setting, information about \(H_{0}\) can be obtained from the difference in times of arrival of the GW images, together with lens reconstruction through electromagnetic means, as explained in detail in [79; 54]. Since the latter will typically lead to wider ranges for \(H_{0}\) compared to the previous \(H_{0}\) measurements, our predictions for the bounds one can obtain on the deviation parameters will be on the conservative side.
Studying modified propagation theories in the context of strongly lensed and localized GW events, especially from binary black hole (BBH) coalescences, is attractive, because such events can be detected at a higher redshift compared to binary neutron star (BNS) events. In the past, modified propagation theories have been tested using GW170817 [80; 81; 60; 82], a signal from a BNS inspiral with an identifiable EM counterpart [83; 84]. However, by cosmological standards, the GW170817 signal travelled only a small distance before it reached the detectors, and in modified propagation theories, the imprint of the deviation tends to accumulate with distance. Other methods have been proposed that exploit the population properties of BBH coalescences observed with GWs [85; 86; 87]; since BBHs can be detected out to larger distances, this enables considerably improved bounds over the ones from GW170817. Due to magnification, GWs from _lensed_ BBH events can potentially be seen out to redshifts \(z\sim 6\)[16], so that more stringent constraints can be expected also from this methodology. The aim of this paper is to quantify the gain from GW lensing for the different anomalous propagation scenarios considered.
The rest of the paper is structured as follows. In Sec. II, we recall the basics of GW lensing. Modified propagation theories are discussed in Sec. III, and our method for constraining anomalous propagation through lensing is described in Sec. IV. Results and comparisons with measurements on GW170817 and other techniques are presented in Sec. V. Finally, Sec. VI provides conclusions and future directions. We work in the geometric unit system so that the speed of light and the gravitational constant are set to unity.
## II Gravitational-wave lensing and distance measurements
To understand how strongly lensed GWs can be applied to test theories with modified GW propagation, here we briefly summarize the important elements of strong lensing (for a detailed overview of GW lensing, see [13], and to understand the localization aspects, see [53; 54]). We will assume that the GW is originating from a BBH coalescence and that it is strongly lensed by a galaxy, one of the most common configurations according to forecasts [16; 21]. In such a scenario, the geometric optics limit applies and multiple images
of the GWs are produced.
Strong lensing introduces a magnification \(\mu_{i}\), a time delay \(t_{i}^{d}\), and an overall complex phase shift \(\pi n_{i}\), called the Morse phase, to each image. They modify the waveform as
\[h_{L}^{i}(f;\vec{\theta},\mu_{i},t_{i}^{d},n_{i})=|\mu_{i}|^{1/2}e^{i2\pi ft_{i}^ {d}-i\pi n_{i}}h(f;\vec{\theta})\,, \tag{1}\]
where \(h_{L}^{i}\) is the waveform associated with the \(i^{th}\) lensed image, \(h(f;\vec{\theta})\) is the waveform in the absence of lensing, \(f\) is the frequency, and \(\vec{\theta}\) are the source parameters of the binary. The magnifications, time delays, and Morse phases can be calculated by solving the lens equation if we have information about the source position and lens properties.
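For illustration, the following minimal Python sketch (not from the references above; the toy waveform and image parameters are placeholders) applies the transformation of Eq. (1) to an unlensed frequency-domain waveform.

```python
import numpy as np

def lensed_waveform(h_f, freqs, magnification, time_delay, morse_index):
    """Apply Eq. (1): magnification, time delay, and Morse phase to h(f)."""
    phase = 2.0 * np.pi * freqs * time_delay - np.pi * morse_index
    return np.sqrt(np.abs(magnification)) * np.exp(1j * phase) * h_f

# Toy stand-in waveform; a type-II image (n = 0.5), magnification 4, delay of 10 days.
freqs = np.linspace(20.0, 1024.0, 2000)
h_f = np.exp(-freqs / 300.0) * np.exp(1j * 0.01 * freqs**2)
h_image = lensed_waveform(h_f, freqs, magnification=4.0,
                          time_delay=10 * 86400.0, morse_index=0.5)
```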
If there is no complementary EM information available, it is not possible to disentangle the luminosity distance and magnifications just using GW data, as both only appear in the amplitudes of the images, and different images have different magnifications that are _a priori_ unknown. For a given image we usually absorb the magnification into an _effective_ GW luminosity distance \(D_{L}^{\rm eff,i}=D_{L}^{\rm GW}/\sqrt{|\mu_{i}|}\). However, when EM information is at hand the magnifications can, in principle, be separately measured through lens reconstruction [88], at least for quadruply lensed events.
Suppose we have detected multiple images of a strongly lensed GW with a network of detectors. In this scenario, due to Earth's rotation in between the arrival of the different images, the same event is observed multiple times with different detector network orientations, allowing for high-accuracy sky localization [66; 64]. Since at least a portion of the host galaxy of the BBH coalescence must itself be lensed, one can then consider the strongly lensed galaxies in the sky error box obtained from the GW measurements [54]. For each of these one can use the lensed EM image fluxes to reconstruct the profile of the lens. By requiring consistency with the GW relative time delays, relative magnifications, and Morse phases, one can filter out incorrect lenses and in principle pinpoint the correct lens and host galaxy. From spectroscopic or photometric measurements, the redshift of the host galaxy can be obtained. Moreover, for quadruply lensed events, the relative time delays of the GW images together with the EM reconstruction of the now identified lens, enable measurement of the _absolute_ magnifications \(\mu_{i}\)[54]. Combined with GW measurements of \(D_{L}^{\rm eff,i}\) for the different images, this leads to a measurement of \(D_{L}^{\rm GW}\).
The details about the EM follow-up and its feasibility are documented in Hannuksela _et al._[54] and Wempe _et al._[53]. Here we consider a scenario where a quadruply lensed GW has already been detected and the host galaxy and lens have been identified and characterized, from which we obtain a measurement of \(D_{L}^{\rm GW}\) as well as a source redshift. By combining the redshift measurement with a cosmology we obtain \(D_{L}^{\rm EM}\). The two distance measurements, \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\), are then used to test the modified propagation theories.
In this work, for definiteness we will assume a flat FLRW universe, in which case one has
\[D_{L}^{\rm EM}=\frac{(1+z_{s})}{H_{0}}\int_{0}^{z_{s}}\frac{dz^{\prime}}{E(z^ {\prime})}\,, \tag{2}\]
where \(z_{s}\) is the redshift of the host galaxy, and \(E(z)\equiv\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{\Lambda}}\); here \(\Omega_{m}\) and \(\Omega_{\Lambda}\) are the matter and dark energy density parameters, and \(H_{0}\) is the Hubble constant.
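A minimal numerical sketch of Eq. (2) is given below (ours, for illustration only); the paper sets \(c=1\), so the speed of light is restored here to express the result in Mpc, and the fiducial values \(H_{0}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.315\) quoted later are used as defaults.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # restore c (the paper sets c = 1) to obtain Mpc

def d_l_em(z_s, H0=67.4, Om=0.315):
    """EM luminosity distance of Eq. (2) for a flat FLRW cosmology, in Mpc."""
    E = lambda z: np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))
    integral, _ = quad(lambda z: 1.0 / E(z), 0.0, z_s)
    return (1.0 + z_s) * (C_KM_S / H0) * integral

print(d_l_em(1.0))           # roughly 6.8 Gpc for the fiducial cosmology
```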
To simulate strongly lensed GWs, we follow Wierda _et al._[16] and sample BBHs from a PowerLaw+Peak distribution [89], strongly lensed by a population of galaxy lenses following the SDSS galaxy catalogue [90]. Our network of detectors consists of the two Advanced LIGO interferometers [2], Advanced Virgo [3], KAGRA [11] and LIGO-India [12], all at design sensitivity. The noise curves of all detectors are implemented using the bilby.gw.detector module of the Bilby (version 1.2.1) software package [91]. The events with network signal-to-noise ratio (SNR) above 8 are considered detected [92]. We then estimate the parameters of the simulated events using Golum [64; 65], which gives us the effective/measured luminosity distances of each image \(D_{L}^{\rm eff,i}\) as well as the arrival times. Typically, lens modelling errors and substructure effects will lead to an error budget for the magnification estimates, with \(\sim 10\,\%\) standard deviation being a reasonable estimate [53; 54]. Thus, for each GW measurement, we assume that the magnification posterior derived from the EM band is given by \(p(\mu_{i}|\vec{d}_{\rm EM})=\mathcal{N}(\mu_{i}|\mu_{i}^{\rm true},\sigma_{\mu})\), where \(\vec{d}_{\rm EM}\) are the data associated with the EM observations, and \(\mathcal{N}(\mu_{i}|\mu_{i}^{\rm true},\sigma_{\mu})\) is a normal distribution centered around the true magnification value \(\mu_{i}^{\rm true}\) of each image \(i\), with a 10% standard deviation for \(\sigma_{\mu}\). Doing so allows us to disentangle the intrinsic \(D_{L}^{\rm GW}\) and magnification from \(D_{L}^{\rm eff,i}\). For the remainder of the discussion, we assume that the intrinsic GW luminosity distance, \(D_{L}^{\rm GW}\), has been estimated through this procedure.
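The disentangling step can be sketched as follows (a simplified illustration with made-up numbers, not the analysis pipeline itself): samples of the EM-derived magnification posterior are combined with samples of the GW posterior on \(D_{L}^{\rm eff,i}\) through \(D_{L}^{\rm GW}=D_{L}^{\rm eff,i}\sqrt{|\mu_{i}|}\).

```python
import numpy as np

rng = np.random.default_rng(42)
mu_true = 4.0                                             # true image magnification
mu_samples = rng.normal(mu_true, 0.1 * mu_true, 10_000)   # EM-band posterior on mu_i
d_eff_samples = rng.normal(3400.0, 300.0, 10_000)         # GW posterior on D_L^eff,i (Mpc)

# D_L^eff,i = D_L^GW / sqrt(|mu_i|)  =>  D_L^GW = D_L^eff,i * sqrt(|mu_i|)
d_gw_samples = d_eff_samples * np.sqrt(np.abs(mu_samples))
print(np.percentile(d_gw_samples, [5, 50, 95]))           # intrinsic D_L^GW posterior
```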
## III Modified propagation theories
As explained above, our tests of modified theories of gravity will be based on a comparison between the reconstructed \(D_{L}^{\rm GW}\) and the luminosity distance \(D_{L}^{\rm EM}\) obtained by electromagnetic means.2 In the specific modified gravity models we consider - large extra dimensions,
\(\Xi\)-parameterization, and varying Planck mass - there is a non-trivial relationship between these two quantities, which will depend on the parameter(s) related to the deviation from GR and on the cosmological parameters. Let us briefly recall what these relationships look like for our three models.
### Large extra spatial dimensions
In theories of gravity with large extra dimensions, there is the possibility of some energy of the GWs leaking into them [81; 98; 99], while EM radiation is confined to the usual three spatial dimensions. This would make the detected signal appear weaker, leading to larger measured values for \(D_{L}^{\rm GW}\) than would otherwise be the case. For definiteness, we will work with the following simple phenomenological ansatz for the relation between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\), based on conservation of integrated flux [59]:
\[D_{L}^{\rm GW}=(D_{L}^{\rm EM}(z_{s},H_{0}))^{\frac{D-2}{2}}, \tag{3}\]
where \(D\) is the number of spacetime dimensions and \(z_{s}\) is the source redshift. We will allow \(D\) to be a real number, with the GR value \(D=4\) as a fiducial value. An illustration of the effect of extra dimensions on a GW waveform is given in the top panel of Fig. 1.
### \(\Xi-\)parameterization
Another parameterization was proposed in [61], where the link between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) is expressed as
\[D_{L}^{\rm GW}=D_{L}^{\rm EM}(z_{s},H_{0})\left[\Xi_{0}+\frac{1-\Xi_{0}}{(1+z _{s})^{n}}\right]\,. \tag{4}\]
The free parameters of the model are \((\Xi_{0},n)\). This parameterization is phenomenological in nature, but as shown in [62] it can be related to a large class of modified gravity theories, including Horndeski [100] theories, Degenerate Higher Order Scalar-Tensor theories (DHOST) [101], and theories with nonlocally modified gravity [102; 103; 104]. When \(z\ll 1\), \(D_{L}^{\rm GW}\simeq D_{L}^{\rm EM}\). Therefore, similar to the extra dimension theories, we expect to observe a departure from GR only at large distances (\(z\gtrsim 1\)). For GR, \(\Xi_{0}=1\) and \(n\) is degenerate. In Fig. 1, middle panel, one can see an illustration of the effect of this modified propagation theory on the observed GW signal.
### Time-varying Planck mass
A time-varying Planck mass is another possible cause for modified GW propagation. Following [60], the relation between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) can be expressed as
\[D_{L}^{\rm GW}(z)=D_{L}^{\rm EM}(z_{s},H_{0})\times\\ \exp\left(\frac{c_{M}}{2\Omega_{\Lambda}}\ln\frac{1+z_{s}}{( \Omega_{m}(1+z_{s})^{3}+\Omega_{\Lambda})^{1/3}}\right), \tag{5}\]
where \(c_{M}\) is a constant that relates the rate of change of the Planck mass with the fractional dark energy density in the Universe; for details, see [60] and references therein. For GR, \(c_{M}=0\). The bottom panel of Fig. 1 illustrates the change in a GW signal in the non-GR case.
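For reference, the three relations of Eqs. (3)-(5) can be written compactly as follows (an illustrative sketch, ours; note that Eq. (3) is dimensionful as written, so \(D_{L}^{\rm EM}\) is treated as a number, e.g. in Mpc, and the GR limits are \(D=4\), \(\Xi_{0}=1\), \(c_{M}=0\)).

```python
import numpy as np

def d_gw_extra_dims(d_em, D=4.0):
    """Eq. (3): large extra dimensions; d_em treated as a number (e.g. in Mpc)."""
    return d_em ** ((D - 2.0) / 2.0)

def d_gw_xi(d_em, z_s, Xi0=1.0, n=1.0):
    """Eq. (4): Xi-parameterization; GR corresponds to Xi0 = 1."""
    return d_em * (Xi0 + (1.0 - Xi0) / (1.0 + z_s) ** n)

def d_gw_running_planck(d_em, z_s, c_M=0.0, Om=0.315):
    """Eq. (5): running Planck mass; GR corresponds to c_M = 0."""
    OL = 1.0 - Om
    arg = (1.0 + z_s) / (Om * (1.0 + z_s) ** 3 + OL) ** (1.0 / 3.0)
    return d_em * np.exp(0.5 * (c_M / OL) * np.log(arg))

d_em, z_s = 6800.0, 1.0                                   # example values (Mpc)
print(d_gw_xi(d_em, z_s, Xi0=1.2), d_gw_running_planck(d_em, z_s, c_M=1.0))
```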
Figure 1: The effect on the frequency domain GW signal in each of the modified propagation models, assuming different amounts of deviation from GR denoted by different colours. In these examples, the GW source is assumed to be at \(\sim 5\) Gpc and the rest of the source parameters are similar to those of GW150914 [1]. For the running Planck mass model, the deviation is absolute since one has \(c_{M}=0\) in GR. For large extra dimensions and the \(\Xi\)-parameterization, we consider percentage deviations in the parameters \(D\) and \(\Xi_{0}\), taking the fiducial values to be \(D=4\) and \(\Xi_{0}=1\), respectively. For the \(\Xi\)-parameterization, we arbitrarily choose \(n=1\) here, though in our subsequent analyses it will be a free parameter.
\begin{table}
\begin{tabular}{c|c|c} Theory & Parameter & Priors \\ \hline Large extra dimension & \(D\) & Uniform(3, 5) \\ \hline \(\Xi\)-parameterization & \(\Xi_{0}\) & Log Uniform(0.01, 100) \\ & \(n\) & Uniform(0, 10) \\ \hline Running Planck mass & \(c_{M}\) & Uniform(-150, 150) \\ \end{tabular}
\end{table}
Table 1: Deviation parameter(s) for each theory and the corresponding prior probability distributions used in our analyses.
## IV Method
In this section we provide a more detailed outline of our method to measure the parameters characterizing the deviation for each case discussed in Sec. III.
We want to measure the deviation parameters given the GW data \(\vec{d}_{\rm GW}\) and the EM data \(\vec{d}_{\rm EM}\) associated with a strongly lensed GW with quadruple images whose host galaxy has been determined. Let us denote the deviation parameters in all generality by \(\vec{\theta}_{\rm MGR}\). What we want to obtain is \(p(\vec{\theta}_{\rm MGR},H_{0}|\vec{d}_{\rm GW},\vec{d}_{\rm EM})\), the posterior probability distribution of the deviation parameters and the Hubble constant given the observed data. (As explained in the Introduction, other cosmological parameters are given definite values.) Using Bayes' theorem, we can write
\[p(\vec{\theta}_{\rm MGR},H_{0}|\vec{d}_{\rm GW},\vec{d}_{\rm EM})\] \[=\frac{p(\vec{\theta}_{\rm MGR},H_{0})\,p(\vec{d}_{\rm GW},\vec{d }_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})}{Z} \tag{6}\]
where \(H_{0}\) is the Hubble constant; \(p(\vec{\theta}_{\rm MGR},H_{0})\) the prior probability distribution for \(\vec{\theta}_{\rm MGR}\) and \(H_{0}\); \(p(\vec{d}_{\rm GW},\vec{d}_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})\) the likelihood function; and \(Z\) the evidence, whose value follows from the requirement that the posterior probability distribution be normalized. The prior distributions for \(\vec{\theta}_{\rm MGR}\) are specified in Table 1. As explained in the Introduction, for \(H_{0}\) we could in principle choose a relatively narrow prior range based on the Planck [76], SHoES [77], or other existing measurements [78]. Instead we make the more conservative choice of using as a prior the posterior distribution for \(H_{0}\) obtained from the differences in time of arrival of the GW images, together with lens reconstruction through electromagnetic means. For details we refer to [54; 79]; here we confine ourselves to recalling that what is obtained from observations is the so-called time delay distance \(D_{\Delta t}\), which is related to \(H_{0}\) through
\[D_{\Delta t}(z_{l},z_{s},H_{0})=\frac{\int_{0}^{z_{s}}dz^{\prime}/E(z^{\prime} )}{\int_{z_{l}}^{z_{s}}dz^{\prime}/E(z^{\prime})}D_{L}^{\rm EM}(z_{s},H_{0})\,. \tag{7}\]
Here \(z_{l}\) and \(z_{s}\) are respectively the lens and the source redshift, and \(E(z)\equiv\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{\Lambda}}\). If \(D_{\Delta t}\) is measured, we can estimate \(D_{L}^{\rm EM}\) since we assume that \(z_{l}\) and \(z_{s}\) are known from the EM follow-up observations. Using the \(D_{L}^{\rm EM}\) measurement, \(H_{0}\) can be estimated through Eq. (2). \(D_{\Delta t}\) can be measured by performing lens reconstruction; however, owing to the computational complexity and cost, we skip the lens construction step and directly pick a value for the observed luminosity distance \(D_{L,obs}^{\rm EM}\) from a Gaussian distribution centred at the true value of \(D_{L}^{\rm EM}\) and with standard deviation \(\sigma=0.1D_{L}^{\rm EM}\), allowing us to incorporate offsets in the measurement. Next, we assume a Gaussian distribution around \(D_{L,obs}^{\rm EM}\) with a 10% standard deviation which serves as our posterior distribution for \(D_{L}^{\rm EM}\). Using the samples of this distribution together with Eq. (2), we construct the prior for \(H_{0}\). The 10% standard deviation used in the previous step is motivated by the results of Hannuksela _et al._[54].
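The construction of the \(H_{0}\) prior described above can be sketched as follows (ours, with made-up example values): draw \(D_{L}^{\rm EM}\) samples from the 10% Gaussian around the observed value and invert Eq. (2) for \(H_{0}\) at the known source redshift.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S, OM = 299792.458, 0.315

def h0_from_dl(d_l_em_mpc, z_s):
    """Invert Eq. (2) for H0 (km/s/Mpc) at the known source redshift."""
    E = lambda z: np.sqrt(OM * (1.0 + z) ** 3 + 1.0 - OM)
    integral, _ = quad(lambda z: 1.0 / E(z), 0.0, z_s)
    return (1.0 + z_s) * C_KM_S * integral / d_l_em_mpc

rng = np.random.default_rng(0)
z_s, d_obs = 1.0, 6800.0                                  # example values
dl_samples = rng.normal(d_obs, 0.1 * d_obs, 5000)         # 10% Gaussian posterior on D_L^EM
h0_prior_samples = np.array([h0_from_dl(d, z_s) for d in dl_samples])
print(np.percentile(h0_prior_samples, [5, 50, 95]))
```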
To calculate the likelihood \(p(\vec{d}_{\rm GW},\vec{d}_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})\), we first express it as
\[p(\vec{d}_{\rm GW},\vec{d}_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})\] \[=\int d\vec{\theta}\,dz_{s}\,p(\vec{d}_{\rm GW}|\vec{\theta})\,p (\vec{d}_{\rm EM}|z_{s})\] \[\qquad\times\,p(\vec{\theta}|z_{s},\vec{\theta}_{\rm MGR},H_{0})\, p(z_{s}|\vec{\theta}_{\rm MGR},H_{0})\,, \tag{8}\]
where \(\vec{\theta}\) denotes the GW source parameters, \(p(\vec{d}_{\rm GW}|\vec{\theta})\) and \(p(\vec{d}_{\rm EM}|z_{s})\) are the likelihoods of the GW and EM data respectively, and \(z_{s}\) is the source redshift. \(p(\vec{\theta}|z_{s},\vec{\theta}_{\rm MGR},H_{0})\) and \(p(z_{s}|\vec{\theta}_{\rm MGR},H_{0})\) are the priors on the GW source parameters and redshift.
Since we assume that the host galaxy has been localized, the true source redshift \(z_{s}\) is known, and \(p(\vec{d}_{\rm EM}|z_{s})\) becomes a Dirac delta function centered on it, reducing Eq. (8) to
\[p(\vec{d}_{\rm GW},\vec{d}_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})\] \[=\int d\vec{\theta}\,p(\vec{d}_{\rm GW}|\vec{\theta})p(\vec{ \theta}|z_{s},\vec{\theta}_{\rm MGR},H_{0})\,p(z_{s}|\vec{\theta}_{\rm MGR},H_{0 })\,. \tag{9}\]
To estimate the GW likelihood \(p(\vec{d}_{\rm GW}|\vec{\theta})\), we perform Bayesian parameter inference using nested sampling [105], at least for the first image. Subsequently we use Golum[64; 65] to speed up Bayesian parameter inference for the other images. Golum can rapidly analyse lensed images by using the posterior samples of the first image as prior for the subsequent images, as most of the parameters for each of the four images are expected to be the same, apart from relative magnifications, rigid phase offsets, and differences in time of arrival.
Once we have the GW likelihood, we perform the integration over the \(\vec{\theta}\) for all parameters except the luminosity distance \(D_{L}^{\rm GW}\), yielding
\[p(\vec{d}_{\rm GW},\vec{d}_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})\] \[=\int dD_{L}^{\rm GW}\,p(\vec{d}_{\rm GW}|D_{L}^{\rm GW})\,p(D_{L }^{\rm GW}|z_{s},\vec{\theta}_{\rm MGR},H_{0})\] \[\qquad\times\,p(z_{s}|\vec{\theta}_{\rm MGR},H_{0})\,. \tag{10}\]
The prior \(p(D_{L}^{\rm GW}|z_{s},\vec{\theta}_{\rm MGR},H_{0})\) reduces to a Dirac delta function as we exactly know \(D_{L}^{\rm GW}\) given the values of \(z_{s}\), \(\vec{\theta}_{\rm MGR}\), \(H_{0}\) and the modified gravity model (Eqs. (3), (4) and (5)). Therefore, integrating with respect to \(D_{L}^{\rm GW}\) leads to
\[p(\vec{d}_{\rm GW},\vec{d}_{\rm EM}|\vec{\theta}_{\rm MGR},H_{0})=p(\vec{d}_{ \rm GW}|D_{L}^{\rm GW})p(z_{s}|\vec{\theta}_{\rm MGR},H_{0}). \tag{11}\]
Substituting Eq. (11) into Eq. (6) we can obtain the posterior distributions for \(\vec{\theta}_{\rm MGR}\) and \(H_{0}\).
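As an illustration of how Eq. (11) can be evaluated in practice, the following self-contained sketch (ours; the grids, example numbers, and the toy \(D_{L}^{\rm GW}\) samples are placeholders) approximates the GW likelihood in \(D_{L}^{\rm GW}\) with a kernel density estimate over posterior samples, which implicitly assumes a flat distance prior in the GW run, and applies it to the running-Planck-mass model at a known source redshift.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gaussian_kde

C_KM_S, OM = 299792.458, 0.315

def d_l_em(z_s, H0):
    E = lambda z: np.sqrt(OM * (1.0 + z) ** 3 + 1.0 - OM)
    return (1.0 + z_s) * (C_KM_S / H0) * quad(lambda z: 1.0 / E(z), 0.0, z_s)[0]

def d_l_gw(z_s, H0, c_M):
    """Eq. (5) applied to the D_L^EM of Eq. (2)."""
    arg = (1.0 + z_s) / (OM * (1.0 + z_s) ** 3 + 1.0 - OM) ** (1.0 / 3.0)
    return d_l_em(z_s, H0) * np.exp(0.5 * c_M / (1.0 - OM) * np.log(arg))

# Toy stand-in for the D_L^GW posterior samples produced by the GW analysis.
kde = gaussian_kde(np.random.default_rng(1).normal(6800.0, 700.0, 4000))

def log_likelihood(c_M, H0, z_s=1.0):
    """Eq. (11) with the GW likelihood in D_L^GW approximated by a KDE."""
    return float(np.log(kde(d_l_gw(z_s, H0, c_M))[0] + 1e-300))

print(log_likelihood(c_M=0.0, H0=67.4))
```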
In what follows, we assume binary black hole coalescences with component mass distributions drawn from the PowerLaw+Peak model in [89]. Our GW waveform model is IMRPhenomXPHM [106], with black hole spin magnitudes distributed uniformly between 0 and 1, and spin directions uniformly on the sphere. The distribution of the redshifts of the BBHs and the galaxy lenses (modelled as singular power law isothermal ellipsoids with external shear) is obtained from Wierda _et al._[16]. The fiducial values of \(\vec{\theta}_{\rm MGR}\) are equal to their GR values. The fiducial value of the Hubble constant is \(H_{0}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\), and \(\Omega_{m}=0.315\). The lensed GWs were analyzed using Golum [64, 65] and Dynesty [107] to produce the \(D_{L}^{\rm GW}\) posteriors along with the other source parameters. Our detector network consists of the two LIGO [2], Virgo [3], KAGRA [11], and LIGO-India [12] detectors, with a detection threshold of 8 on the network SNR. Results obtained using lensed events will be compared with what can be obtained from the GW observation of the BNS merger GW170817 together with its host galaxy identification [108]. For GW170817, we use the \(D_{L}^{\rm GW}\) posterior samples from the corresponding data release [109]. For this event we cannot construct the prior on \(H_{0}\) using the method we used for lensed events; therefore we use the Planck 2018 [75] results when analyzing it.
## V Results
Before diving into the full parameter estimation results, we first look into how the relative difference \(\Delta=|D_{L}^{\rm GW}-D_{L}^{\rm EM}|/D_{L}^{\rm EM}\) varies as a function of \(z_{s}\) and \(\vec{\theta}_{\rm MGR}\), to help us understand how large the imprint of various deviations will be. Values for \(\Delta\) are indicated by the color coding in Fig. 2. Here \(D_{L}^{\rm EM}\) is calculated for a range of values for redshift (horizontal axis), and \(D_{L}^{\rm GW}\) is computed using Eqs. (3)-(5) for a variety of (relative) deviation parameters (vertical axis). If \(\Delta\) is small (red regions), there may only be a negligible imprint in the departure from GR even if the deviation parameter differs significantly from its GR value. In the blue regions, we have a better chance of observing a deviation from GR if it is present.
The green vertical line shows the measured redshift of the host galaxy of GW170817 (\(z\simeq 0.009783\)[110, 111]). For the extra dimensions model, the line is mainly in the blue region, making the imprint of the deviation relatively large even for relatively small departures from the fiducial value of \(D=4\). However, for the given ranges of the \(\Xi_{0}\) and \(c_{M}\) parameters, GW170817 stays mostly in the red regions, making it more difficult to find the corresponding deviations from GR. In the latter two cases, higher redshifts than that of GW170817 are needed to have significantly better bounds on \(\Xi_{0}\) and
Figure 2: The fractional difference \(\Delta\equiv|(D_{L}^{\rm GW}-D_{L}^{\rm EM})/D_{L}^{\rm EM}|\) (color) between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) as a function of source redshift (horizontal axis) and deviation parameter (vertical axis). The \(\delta D/D\) (top panel) and \(\delta\Xi_{0}/\Xi_{0}\) (middle) refer to changes in respectively \(D\) and \(\Xi_{0}\) relative to their fiducial values \(D=4\) and \(\Xi_{0}=1\) (with \(n=1\) for the latter case), whereas for \(c_{M}\) (bottom panel) we use the value of the parameter itself. In the blue (red) regions the impact of the deviation parameter on the relation between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) is larger (smaller). At the redshift of GW170817 (green vertical line), for the \(\Xi\)-parametrization and varying Planck mass, \(\Delta\) is smaller than at high redshifts, already suggesting that strong lensing measurements, which access the high-redshift regime, are likely to lead to better constraints on these deviation parameters. On the other hand, the effect of extra dimensions is less sensitive to redshift, and measurements of \(D\) are not expected to improve as much as for the other two cases.
\(c_{M}\), and this is what GW lensing will provide.
In Fig. 3, we present the results obtained from a detailed simulation, as explained in the previous section. We consider a total of 55 GW events for the analysis. Each dot in Fig. 3 corresponds to a simulated strongly lensed GW event with quadruple images, at a given source redshift (horizontal axis), analyzed as described in Sec. IV. The true values of deviation parameters are set equal to their GR values. The vertical axis indicates the 90% confidence intervals for relative deviations in \(D\) (top) and \(\Xi_{0}\) (center), and for the absolute deviation in \(c_{M}\) (bottom), as the latter parameter is zero in GR. Since in the \(\Xi\)-parameterization, the parameter \(n\) is unconstrained when \(\Xi_{0}\) equals its fiducial value of 1, we do not show results for it here, though it was treated as a free parameter in our measurements. Finally, the color coding shows the combined SNR from the four images, i.e. the quadrature sum of the SNRs of the individual images. Also included are results from GW170817.
The results are in qualitative agreement with Fig. 2. In particular, for \(\Xi_{0}\) and \(c_{M}\) the advantage of being able to access higher redshifts is clearly in evidence, with bounds improving over those of GW170817 by factors of up to \(\mathcal{O}(10)\) and \(\mathcal{O}(100)\), respectively. By contrast, the bounds on \(D\) improve by up to a factor of \(\sim 5\). The differences in improvement can be explained by the qualitative predictions of Fig. 2 where \(\Delta\) follows a steep gradient for \(\Xi_{0}\) (center) and \(c_{M}\) (bottom) but a shallow one for \(D\) (top).
We note that for the strongly lensed events in our catalog, the combined SNR from the four images tends to be higher than that of GW170817, which can also improve the measurement accuracy on \(D_{L}^{\rm GW}\) and \(\vec{\theta}_{\rm MGR}\). Indeed, the measurement of the parameters is done using combined information from the different images, increasing the effective SNR used to infer the parameter values. However, in Fig. 3 we observe that lensed events with SNR similar to GW170817 (which has \(\mathrm{SNR}\simeq 32.4\)[83]) can measure \(\vec{\theta}_{\rm MGR}\) more accurately than the latter, as the lensed events are placed at high redshifts. Therefore, the increase in accessible distance provided by strong lensing is indeed the dominant factor in the improvement of measurement accuracy.
For the \(\Xi\)-parameterization, bounds we obtain from our simulated lensed events are consistent with the results of Finke et al. [63]. Let us also make a comparison with existing bounds from actual measurements. We have already mentioned the improvements of bounds from lensing with respect to measurements done with GW170817. In Mastrogiovanni et al. [80], bounds were obtained for the three models considered here, by combining information from GW170817 and its EM counterpart with information from the BBH signal GW190521, in the latter case assuming that a particular EM flare observed by the Zwicky Transient Factory (ZTF) [112] was associated with the BBH merger. Since GW190521 originated at a redshift of \(\simeq 0.8\)[113], adding this event brings the bounds on deviation parameters closer to
Figure 3: 90% confidence intervals for measurements of \(\delta D/D\), \(\delta\Xi_{0}/\Xi_{0}\) (defined as in Fig. 2) and \(c_{M}\). The dots refer to results from quadruply lensed events, whose source redshifts can be read off from the horizontal axis; in each case the colors indicate the combined signal-to-noise ratios (SNRs) from the four images. The triangle indicates bounds from GW170817. Even lensed events with combined SNR similar to that of GW170817 (which was \(\simeq 32.4\)) yield considerably better constraints on deviation parameters, again underscoring the benefit of being able to access the high-redshift regime.
what we find for lensed events; for example, they report \(\delta\Xi_{0}/\Xi_{0}\lesssim 3-10\) depending on the assumptions made, to be compared with the bounds in Fig. 3.3 When specific alternative theories of gravity are assumed, studies based on the Cosmic Microwave Background and large-scale structure formation can lead to bounds on \(c_{M}\) that are similar to the ones for lensed events; see e.g. [115] and the discussion in [60]. Finally, methods have been developed that exploit the observed population properties of binary black hole coalescences using gravitational wave data only, in terms of e.g. redshift and mass distributions [85; 86; 87]. Depending on the assumptions made, these can be competitive with the bounds on anomalous GW propagation that we project for lensed GW events with host galaxy identification.
Footnote 3: However, it should be noted that the association of GW190521 with the EM flare of [112] is by no means conclusive; see e.g. [114].
## VI Conclusions and future directions
Strong lensing of GWs could be detected in the near future, and there are various applications to be developed thanks to the additional information it can provide. Here we have focused on the fact that, under favorable circumstances, a quadruply lensed GW event together with EM observations can enable the identification of the host galaxy of a BBH event. In turn, this opens up the possibility of constraining alternative theories of gravity that predict anomalous GW propagation, by comparing the luminosity distance \(D_{L}^{\rm EM}\) that is obtained electromagnetically with the luminosity distance \(D_{L}^{\rm GW}\) obtained from the GW if the amplitude of the latter is assumed to be proportional to \(1/D_{L}^{\rm GW}\). Three heuristic relationships between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) were considered, motivated by large extra spatial dimensions, a variable Planck mass, and the so-called \(\Xi\)-parameterization which captures anomalous propagation effects in a variety of alternative theories.
To study what kinds of constraints can be put on these non-GR models using lensed GW events, we set up an extensive simulation, making use of realistic lens and BBH source populations to arrive at plausible distributions for the properties of quadruply lensed events. We performed Bayesian inference on each of the simulated GW events to obtain posterior density distributions for their parameters. Due to the associated computational complexity and cost, we did not directly perform lens reconstruction, but instead assumed Gaussian probability distributions for image magnification measurements used in the reconstruction of \(D_{L}^{\rm GW}\), as well as for reconstructed electromagnetic luminosity distances, with widths informed by current astrophysical expectations [53; 54]. The latter aspect is something we aim to treat in more depth in a future study. Similarly, the relation between \(D_{L}^{\rm GW}\) and \(D_{L}^{\rm EM}\) involves cosmological parameters; in this work we only let \(H_{0}\) be a free parameter, but the effect of uncertainties in the other parameters is also worth investigating. On the other hand, in this study we used as a prior on \(H_{0}\) the posterior density distribution obtained from time delay measurements and lens reconstruction, which is typically considerably wider than the ranges for \(H_{0}\) obtained from either Planck or SHoES [54]. Because of the degeneracy between \(H_{0}\) and the deviation parameters, bounds on the latter are to a large extent set by the prior range of \(H_{0}\)[60], which pushes our constraints on alternative theories towards the conservative side.4
Footnote 4: When analyzing the lensed events with a prior from Planck 2018 [75] we obtain bounds that are a factor of \(\sim 2\) tighter.
Comparing with results from GW170817 and its EM counterpart (for which we did use the much more narrow \(H_{0}\) prior from Planck 2018 [75]), we clearly see the effect of strongly lensed GWs from BBH typically originating from much higher redshifts. The latter improves the measurability of anomalous propagation, since it increases with distance. In the case of extra dimensions, modest gains by up to a factor of \(\sim 5\) are seen, but for the \(\Xi\)-parameterization this becomes \({\cal O}(10)\), and for \(c_{M}\) as much as \({\cal O}(100)\).
Previous GW-based measurements on anomalous propagation models [60; 61; 80; 81; 82] have utilized GW170817 with its EM counterpart (and GW190521 under the assumption that an EM flare seen by ZTF was an EM counterpart to this BBH event). Until the advent of third-generation GW observatories such as Einstein Telescope [116; 117; 118] and Cosmic Explorer [119; 120], GW signals from binary neutron star inspirals will only be seen to redshifts \(z\ll 1\)[121], and the definitive identification of transient EM counterparts to stellar mass BBH events may remain elusive. Other methods based on the population properties of binary black holes inferred from GW data alone have been shown to considerably improve over bounds from multimessenger observations of GW170817 [85; 86; 87]. What we have demonstrated here is that a single fortuitous discovery of a quadruply lensed GW event in conjunction with EM observations of lensed galaxies may give access to the high-redshift regime, again enabling significantly stronger constraints on models of anomalous GW propagation.
###### Acknowledgements.
H.N., J.J., K.H., and C.V.D.B. are supported by the research programme of the Netherlands Organisation for Scientific Research (NWO). L.H. is supported by the Swiss National Science Foundation grant 199307, as well as the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie
grant agreement No 945298-ParisRegionFP. She is a Fellow of Paris Region Fellowship Programme supported by the Paris Region, and acknowledges the support of the COST Action CA18108. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by the National Science Foundation Grants No. PHY-0757058 and No. PHY-0823459. This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center ([https://www.gw-openscience.org](https://www.gw-openscience.org)), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes.
|
2308.06066 | The Transient Outgassed Atmosphere of 55 Cancri e | The enigmatic nature of 55 Cancri e has defied theoretical explanation. Any
explanation needs to account for the observed variability of its secondary
eclipse depth, which is at times consistent with zero in the visible/optical
range of wavelengths -- a phenomenon that does not occur with its also variable
infrared eclipses. Yet, despite this variability its transit depth remains
somewhat constant in time and is inconsistent with opaque material filling its
Hill sphere. The current study explores the possibility of a thin, transient,
secondary atmosphere on 55 Cancri e that is sourced by geochemical outgassing.
Its transient nature derives from the inability of outgassing to be balanced by
atmospheric escape. As the outgassed atmosphere escapes and is replenished, it
rapidly adjusts to radiative equilibrium and the temperature fluctuations cause
the infrared eclipse depths to vary. Atmospheres of pure carbon dioxide or
carbon monoxide produce sufficient Rayleigh scattering to explain the observed
optical/visible eclipse depths, which vanish in the absence of an atmosphere
and the presence of a dark rocky surface. Atmospheres of pure methane are ruled
out because they produce insufficient Rayleigh scattering. Upcoming
observations by the James Webb Space Telescope will potentially allow the
atmospheric temperature and surface pressure, as well as the surface
temperature, to be measured. | Kevin Heng | 2023-08-11T11:02:27Z | http://arxiv.org/abs/2308.06066v2 | # The Transient Outgassed Atmosphere of 55 Cancri E
###### Abstract
The enigmatic nature of 55 Cancri e has defied theoretical explanation. Any explanation needs to account for the observed variability of its secondary eclipse depth, which is at times consistent with zero in the visible/optical range of wavelengths--a phenomenon that does not occur with its also variable infrared eclipses. Yet, despite this variability its transit depth remains somewhat constant in time and is inconsistent with opaque material filling its Hill sphere. The current study explores the possibility of a thin, transient, secondary atmosphere on 55 Cancri e that is sourced by geochemical outgassing. Its transient nature derives from the inability of outgassing to be balanced by atmospheric escape. As the outgassed atmosphere escapes and is replenished, it rapidly adjusts to radiative equilibrium and the temperature fluctuations cause the infrared eclipse depths to vary. Atmospheres of pure carbon dioxide or carbon monoxide produce sufficient Rayleigh scattering to explain the observed optical/visible eclipse depths, which vanish in the absence of an atmosphere and the presence of a dark rocky surface. Atmospheres of pure methane are ruled out because they produce insufficient Rayleigh scattering. Upcoming observations by the James Webb Space Telescope will potentially allow the atmospheric temperature and surface pressure, as well as the surface temperature, to be measured.
planets and satellites: atmospheres
## 1 Introduction
Since the discovery of its transits in 2011 (Demory et al., 2011), the super Earth 55 Cancri e has been one of the most enigmatic objects in the study of exoplanetary atmospheres. Searches for hydrogen (Ehrenreich et al., 2012) and helium (Zhang et al., 2021) in its atmosphere have been unsuccessful, which is not inconsistent with its bulk mass density of \(6.4^{+0.8}_{-0.7}\) g cm\({}^{-3}\) (Demory et al., 2016). Models of its interior structure suggest the possibility of volatiles being present (Dorn et al., 2017; Crida et al., 2018), but to date an unambiguous detection of gaseous species in its atmosphere remains elusive (Jindal et al., 2020), including a search for metals using high-resolution spectroscopy from the ground (Keiles et al., 2023). A detection of hydrogen cyanide (HCN) was previously claimed (Tsiaras et al., 2016), but from a chemical perspective it remains difficult to reconcile with the non-detection of hydrogen (Ehrenreich et al., 2012).
A Spitzer Space Telescope phase curve of 55 Cancri reveals a strong dayside (\(\approx 2700\) K) to nightside (\(\approx 1400\) K) brightness temperature contrast at 4.5 \(\mu\)m and a \(41\pm 12\) degree offset in the peak of its phase curve (Demory et al., 2016). (See also Mercier et al., 2022.) This Spitzer phase curve has inspired several theoretical studies that have attempted to explain and/or predict its atmospheric composition (Angelo & Hu, 2017; Hammond & Pierrehumbert, 2017; Mahapatra et al., 2017; Miguel, 2019), although none of the predicted atmospheric chemistries have been definitively corroborated by the observations.
A key hint to understanding the nature of 55 Cancri e lies in the variability of its secondary eclipses, both in the optical/visible (Meier Valdes et al., 2022, 2023; Demory et al., 2023) and infrared (Demory et al., 2016; Tamburo et al., 2018) range of wavelengths. At times, the optical/visible secondary eclipses become consistent with zero (Demory et al., 2023; Meier Valdes et al., 2023), but this phenomenon does not occur with the infrared secondary eclipses (Demory et al., 2016; Tamburo et al., 2018). Yet, its primary transit depths remain somewhat constant (Tamburo et al., 2018; Meier Valdes et al., 2023). Star-exoplanet interactions have been ruled out (Morris et al., 2021). By analogy with the torus around Io (a moon of Jupiter), it has been suggested that a torus of spectroscopically active gas and dust could account for the observed variability of the secondary eclipses
of 55 Cancri e (Meier Valdes et al., 2023).
In addition to the generally short sublimation timescales of dust (Meier Valdes et al., 2023), a key puzzle of this interpretation is how the presence of a torus could be simultaneously consistent with the somewhat constant transit depths and variable eclipse depths. Using the exoplanetary properties from Demory et al. (2016) and the stellar properties from Crida et al. (2018), the Hill radius of 55 Cancri e is estimated to be about \(7.4R_{\oplus}\) or about \(3.9R\) where \(R=1.91R_{\oplus}\) is the Spitzer \(4.5~{}\mu\)m radius of the exoplanet (Demory et al., 2016). This radius corresponds to a transit depth of about 334 parts per million (ppm), which is consistent with the 29 transit depths measured by the CHEOPS space telescope (Meier Valdes et al., 2023). By contrast, the presence of a torus around 55 Cancri e, which would fill its Hill sphere, would result in a transit depth of about 4995 ppm, which is firmly ruled out by the CHEOPS observations. It is worth noting that the Hill radius of Io is about 6 times its radius and its torus has been observed to fill and even overflow its Hill sphere (e.g., Schneider & Trauger, 1995; Steffl et al., 2004).
One way out of this conundrum is to assume that spectroscopically active material is present only near secondary eclipse (where the dayside of the exoplanet is in full view), but somehow vanishes at primary transit (which probes its nightside and terminators). Yet another way out is to assume that the material is spectroscopically active only at secondary eclipse, but becomes inert or transparent at primary transit. From the perspective of Occam's Razor, it is challenging to explain how this may occur over the 29 contemporaneous transits and eclipses of 55 Cancri e observed by CHEOPS (Meier Valdes et al., 2023).
These recently reported observations by the CHEOPS space telescope (Meier Valdes et al., 2022, 2023; Demory et al., 2023), as well as upcoming eclipse observations by the James Webb Space Telescope (JWST)1, motivate a qualitatively different approach of interpreting the data of 55 Cancri e. In the current study, it is suggested that the reported observations collectively describe a thin, transient, outgassed atmosphere that does not survive over long timescales because atmospheric escape dominates the outgassing flux.
Footnote 1: JWST Cycle 1 Proposals #1952 and #2084.
## 2 Toy Model of Transient Secondary Atmosphere
### Theory
As a proof of concept, I will construct the simplest possible model that includes the salient features needed to explain multi-wavelength observations (Figure 1). The simplicity of the model is also motivated by the complexity of the processes that one needs to model. The starting point is a secondary atmosphere sourced by geochemical outgassing (e.g., Tian & Heng, 2023). While the outgassing chemistry may be solved by considering the thermodynamics of mixed phases and non-ideal gases (French, 1966; Gaillard & Scaillet, 2014; Tian & Heng, 2023), predicting the outgassing _flux_ is challenging as it requires an understanding of the interior geodynamics of the exoplanet including its tectonic regime (Stamenkovic & Seager, 2016). The outgassing is balanced by a process that is notoriously difficult to model even for the planets of the Solar System: atmospheric escape (Shizgal & Arkos, 1996). It is the balance between the outgassing and escape fluxes that allows one to calculate the atmospheric surface pressure (e.g., Liggins et al., 2020). Instead of introducing several free parameters into the problem, I _choose_ to encode our incomplete knowledge of these processes into a single free parameter: the atmospheric surface pressure \(P\).
Figure 1: Schematic of multi-wavelength model that includes a hot rocky surface (heated by starlight) and a thin, transient atmosphere sourced by geochemical outgassing. The left and right panels show the model in its “bare rock” and thin-atmosphere phases, respectively. The atmosphere is transient because outgassing and atmospheric escape fail to balance out each other and the timescale for adjustment to radiative equilibrium is likely to be shorter than an orbital period (see text for discussion).
Furthermore, it is assumed that the atmospheric escape flux exceeds the outgassing flux such that an equilibrium between the two processes is never reached. Such an assumption is consistent with the erratic temporal variability of the observed secondary eclipses, which is not observed to correlate with any exoplanetary property (Meier Valdes et al., 2023). Using dimensional analysis, the energy-limited mass flux2 of atmospheric escape is
Footnote 2: Technically, this is the “mass luminosity”.
\[\dot{M}\sim\frac{\pi R^{2}F_{X}}{E_{g}}=\frac{L_{X}R^{3}}{4GMR_{\star}^{2}} \sim 10^{10}\ \textrm{g s}^{-1}, \tag{1}\]
where \(F_{X}\) is the X-ray flux of the star, \(L_{X}=4\times 10^{26}\) erg s\({}^{-1}\) is its X-ray luminosity (Ehrenreich et al., 2012), \(E_{g}=GM/R\) is the specific gravitational potential energy, \(G\) is the universal gravitational constant, \(M=8.08\ M_{\oplus}\) is the mass of 55 Cancri e (Demory et al., 2016) and \(R_{\star}=0.958\ R_{\odot}\) is the stellar radius (Crida et al., 2018). This escape flux is \(\sim 1000\) times higher than the CO\({}_{2}\) outgassing flux of modern Earth (Plank & Manning, 2019). While a predictive theory for estimating the outgassing flux on 55 Cancri e remains elusive (Meier et al., 2023), it seems implausible that such a high outgassing flux could be attained. To balance atmospheric escape, the mass of atmosphere outgassed in an orbital period (\(t_{\rm orbit}=0.74\) day) is \(\dot{M}t_{\rm period}\approx 8\times 10^{14}\) g. This implies an atmospheric pressure of \(\dot{M}t_{\rm period}g/4\pi R^{2}\approx 0.1\ \mu\)bar, which is orders of magnitude lower than the surface pressures assumed in this study. The only general conclusion one can draw from the non-balancing of the two processes is that it produces stochasticity in the outgassing and atmospheric escape fluxes, which plausibly lead to temperature fluctuations.
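As a concrete check of these order-of-magnitude estimates, equation (1) and the implied balancing pressure can be evaluated in a few lines. The following is a minimal Python sketch in cgs units; all constants are taken from the text, equation (1) is used exactly as written, and the variable names are chosen here purely for illustration.

```
import math

# cgs constants and system parameters quoted in the text (assumed values)
G = 6.674e-8                      # gravitational constant [cm^3 g^-1 s^-2]
R_earth, M_earth, R_sun = 6.371e8, 5.972e27, 6.957e10

L_X = 4e26                        # stellar X-ray luminosity [erg/s]
R = 1.91 * R_earth                # planetary radius [cm]
M = 8.08 * M_earth                # planetary mass [g]
R_star = 0.958 * R_sun            # stellar radius [cm]
g = 2138.0                        # surface gravity [cm/s^2]
t_orbit = 0.74 * 86400.0          # orbital period [s]

# Energy-limited escape flux, equation (1) as written in the text
M_dot = L_X * R**3 / (4.0 * G * M * R_star**2)

# Mass that must be outgassed per orbit to balance escape, and the implied pressure
M_orbit = M_dot * t_orbit
P_balance = M_orbit * g / (4.0 * math.pi * R**2)    # [barye]; 1 bar = 1e6 barye

print(f"M_dot ~ {M_dot:.1e} g/s, M_orbit ~ {M_orbit:.1e} g, P ~ {P_balance/1e6:.1e} bar")
```

Running this sketch reproduces the \(\sim 10^{10}\) g s\({}^{-1}\) escape flux, the \(\approx 8\times 10^{14}\) g of atmosphere per orbit, and the \(\approx 0.1~\mu\)bar balancing pressure quoted above.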
The transient, outgassed atmosphere sits above a rocky surface, which is heated by starlight from 55 Cancri. The equilibrium temperature of the exoplanet is about 1965 K. In the absence of an atmosphere, the rocky surface attains a temperature of \(T_{s}=1965\) K--at least, on the dayside of 55 Cancri e. The thermal conduction timescale3 associated with the
Figure 2: Scattering photospheric pressures (top left panel), single-scattering albedos (top right panel), absorption cross sections (bottom left panel) and absorption photospheric pressures (bottom right panel) associated with pure carbon dioxide (CO\({}_{2}\)), carbon monoxide (CO) and methane (CH\({}_{4}\)) atmospheres. For the absorption cross sections, a temperature of 2700 K and a pressure of 0.1 bar are assumed.
rocky surface is assumed to be much longer than the survival time of the atmosphere, such that it does not have enough time to adjust to the atmospheric temperature. The surface is assumed to radiate like a blackbody.
The outgassed atmosphere originates from a magma chamber beneath the surface that is associated with its own temperature \(T_{\rm magma}\). Under Earth-like conditions, \(T_{\rm magma}=1600\) K; see Tian & Heng (2023) for a discussion of the appropriate values to assume for \(T_{\rm magma}\). Initially, the outgassed atmosphere has \(T\approx T_{\rm magma}\). Given enough time, the atmosphere adjusts its temperature to a value that is consistent with the greenhouse warming effect of the outgassed species. This radiative adjustment time is (Showman & Guillot, 2002)
\[t_{\rm rad}\sim\frac{c_{P}P}{\sigma_{\rm SB}gT^{3}}\approx 2\times 10^{3}\;{\rm s }\;\left(\frac{P}{0.1\;{\rm bar}}\right)\;\left(\frac{T}{1600\;{\rm K}}\right) ^{-3}, \tag{2}\]
where \(c_{P}\sim 10^{7}\) erg g\({}^{-1}\) K\({}^{-1}\) is the specific heat capacity at constant pressure and \(\sigma_{\rm SB}\) is the Stefan-Boltzmann constant. The preceding estimate was made for CO\({}_{2}\) molecules and is a factor \(\sim 2\)-3 higher for CO and CH\({}_{4}\); the composition dependence enters through the specific heat capacity. If the atmospheric surface pressure is \(\sim 0.1\) bar, then \(t_{\rm rad}\) is a fraction of the \(0.74\) day \(\approx 6\times 10^{4}\) s orbital period of 55 Cancri e. It implies that, as soon as the atmosphere is outgassed, the adjustment of the atmospheric temperature to radiative equilibrium occurs well within an orbital period.
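Similarly, the radiative adjustment timescale of equation (2) for a \(P=0.1\) bar CO\({}_{2}\) atmosphere can be checked directly; a minimal sketch, with cgs constants assumed:

```
sigma_SB = 5.670e-5       # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
c_P = 1e7                 # specific heat capacity assumed for CO2 [erg g^-1 K^-1]
g = 2138.0                # surface gravity [cm/s^2]
P = 0.1 * 1e6             # 0.1 bar expressed in barye
T = 1600.0                # initial (outgassed) atmospheric temperature [K]

t_rad = c_P * P / (sigma_SB * g * T**3)    # equation (2)
print(f"t_rad ~ {t_rad:.0e} s, versus an orbital period of ~ 6e4 s")
```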
The key parameter controlling the chemistry of the outgassed atmosphere is the pressure associated with the magma \(P_{\rm magma}\). If the magma chamber is located close to the surface, then one may assume \(P_{\rm magma}\approx P\)(Gaillard & Scaillet, 2014). If it is located deep beneath the surface such that \(P_{\rm magma}\gg P\), then the system inherits an additional free parameter. An additional unknown, which is difficult to calculate from first principles, is the oxidation state of the mantle. It is parametrised by the oxygen fugacity \(f_{\rm O_{2}}\)(e.g., Tian & Heng, 2023). In the conceivable parameter space of \(P_{\rm magma}\), \(T_{\rm magma}\) and \(f_{\rm O_{2}}\), the plausible carbon- and oxygen-bearing species are carbon dioxide (CO\({}_{2}\)), carbon monoxide (CO), methane (CH\({}_{4}\)) and water (H\({}_{2}\)O) (e.g., Gaillard & Scaillet, 2014; Tian & Heng, 2023). In particular, a methane-dominated atmosphere is only produced for reduced (poorly oxidized) mantles, reduced magma temperatures (compared to that of Earth) and \(P_{\rm magma}\gtrsim 10\) bar (Tian & Heng, 2023). Water is not considered further, because it is neither detected by the Wide Field Camera 3 of the Hubble Space Telescope (Tsiaras et al., 2016) nor from the ground using high-resolution spectroscopy (Jindal et al., 2020).
Let the flux from the dayside of 55 Cancri e and the star be \(F\) and \(F_{\star}\), respectively. The orbital semi-major axis of 55 Cancri e is \(a=0.0154\) AU and the ratio of its radius to the stellar radius is \(R/R_{\star}=0.0187\)(Demory et al., 2016). It follows that the monochromatic secondary eclipse depth is
\[\frac{F}{F_{\star}}=\left(\frac{R}{R_{\star}}\right)^{2}\frac{B_{\lambda}\left(T_{s}\right)\mathcal{T}+\left(1-\mathcal{T}\right)B_{\lambda}\left(T\right)}{B_{\lambda}\left(T_{\star}\right)}+\left(\frac{R}{a}\right)^{2}A_{g}, \tag{3}\]
where \(B_{\lambda}\) is the Planck function, \(A_{g}\) is the geometric albedo and
\[\mathcal{T}=e^{-\tau} \tag{4}\]
is the transmission function. The effective temperature of the star is \(T_{\star}=5174\) K (Crida et al., 2018).
In equation (3), the first term accounts for thermal emission from the rocky surface, which is attenuated by the transient outgassed atmosphere, as well as from the atmosphere itself. The term associated with \(B_{\lambda}(T)\) is a well-known expression used to estimate the brightness temperature (e.g., Cowan & Agol, 2011; Tamburo et al., 2018). In the limit of an opaque atmosphere (\(\mathcal{T}=0\)), the thermal emission is purely blackbody in nature (with no spectral features). The second term in equation (3) accounts for reflected light from the outgassed atmosphere. The optical depth associated with the attenuation is
\[\tau=\frac{\kappa P}{g}, \tag{5}\]
where \(g=10^{3.33}\approx 2138\) cm s\({}^{-2}\) is the surface gravity of 55 Cancri e (Demory et al., 2016). The absorption opacity \(\kappa\) is trivially converted to a cross section \(\sigma\) via \(\kappa=\sigma/m\), where \(m\) is the mass of the molecular species considered.
For Rayleigh scattering by molecules, the geometric albedo is given by (Heng et al., 2021)
\[A_{g}=\frac{\omega}{16}+\frac{\epsilon}{2}+\frac{\epsilon^{2}}{6}+\frac{ \epsilon^{3}}{24}, \tag{6}\]
where \(\epsilon=(1-\gamma)/(1+\gamma)\) is the bihemispherical reflectance (Hapke, 1981) and \(\gamma=\sqrt{1-\omega}\). The single-scattering albedo \(\omega\) is constructed from the absorption (\(\sigma\)) and scattering (\(\sigma_{\rm scat}\)) cross sections,
\[\omega=\frac{\sigma_{\rm scat}}{\sigma+\sigma_{\rm scat}}. \tag{7}\]
If only single scattering is considered (i.e., multiple scattering is ignored), then the geometric albedo is instead given by (Heng et al., 2021)
\[A_{g}=\frac{3\omega}{16}. \tag{8}\]
A more accurate approach is to derive an analytical expression for the geometric albedo that is a function of the scattering optical depth, which would allow an entire continuum of surface pressures to be rigorously considered. The geometric albedo should naturally vanish as the scattering optical depth becomes zero. While this development is necessary for future work, it is beyond the scope of the current study.
In summary, the toy model described here has only three free parameters: the surface temperature \(T_{s}\), the atmospheric
temperature \(T\) and the atmospheric surface pressure \(P\). Each of the aforementioned chemical species will be considered in turn. By fitting this model to data using Bayesian inference methods, one may derive posterior distributions of these three parameters. This is beyond the scope of the current study. Instead, the intention is to elucidate the influence of each parameter in an intuitive way.
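The model is compact enough to prototype directly. The sketch below is one possible Python implementation of equations (3)–(8) for a single composition; the function signature, the cgs unit choices, and the default values of \(R/R_{\star}\) and \(R/a\) are assumptions made here for illustration, with the opacities to be supplied from the cross sections discussed next.

```
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16      # Planck, speed of light, Boltzmann (cgs)

def planck(lam, T):
    """Planck function B_lambda(T) for wavelengths lam in cm (cgs units)."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

def geometric_albedo(omega):
    """Semi-infinite Rayleigh-scattering geometric albedo, equation (6)."""
    gamma = np.sqrt(1.0 - omega)
    eps = (1.0 - gamma) / (1.0 + gamma)
    return omega / 16.0 + eps / 2.0 + eps**2 / 6.0 + eps**3 / 24.0

def eclipse_depth(lam, kappa_abs, kappa_scat, P, T_atm, T_surf,
                  T_star=5174.0, R_over_Rstar=0.0187, R_over_a=5.3e-3, g=2138.0):
    """Monochromatic eclipse depth of equation (3).

    lam        : wavelength grid [cm]
    kappa_abs  : absorption opacity on the grid [cm^2/g]
    kappa_scat : Rayleigh-scattering opacity on the grid [cm^2/g]
    P          : atmospheric surface pressure [barye]; 0.1 bar = 1e5 barye
    R_over_a   : planetary radius over semi-major axis (1.91 R_Earth, 0.0154 AU)
    """
    tau = kappa_abs * P / g                              # equation (5)
    trans = np.exp(-tau)                                 # equation (4)
    omega = kappa_scat / (kappa_abs + kappa_scat)        # equation (7)
    thermal = R_over_Rstar**2 * (planck(lam, T_surf) * trans
              + (1.0 - trans) * planck(lam, T_atm)) / planck(lam, T_star)
    reflected = R_over_a**2 * geometric_albedo(omega)    # reflected-light term
    return thermal + reflected
```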
### Cross sections
Indispensable ingredients for even a toy model are the cross sections associated with the absorption and scattering of radiation. The Rayleigh scattering cross sections for CO, CO\({}_{2}\) and CH\({}_{4}\) are taken from Sneep and Ubachs (2005) and Thalman et al. (2014). Unfortunately, a literature search for the Rayleigh scattering of sulfur dioxide (SO\({}_{2}\)) was in vain. The spectroscopic line lists and partition functions are taken from the following references: Li et al. (2015) for CO; Yurchenko et al. (2020) for CO\({}_{2}\); Yurchenko and Tennyson (2014) and Yurchenko et al. (2017) for CH\({}_{4}\). These quantities were converted into opacities (cross sections per unit mass) using the HELIOS-K opacity calculator (Grimm and Heng, 2015) and made publicly available via the DACE database (Grimm et al., 2021).4
Footnote 4: [https://dace.unige.ch/](https://dace.unige.ch/)
## 3 Results
In a self-consistent radiative transfer model, equation (3) is iterated with the first law of thermodynamics (energy conservation) such that radiative equilibrium is attained. The outcome of such an approach is a temperature-pressure profile that is consistent with the assumed chemical abundances and emergent spectrum. In atmospheric retrieval models of exoplanets, this approach is not typically implemented, i.e., the chemical abundances and temperature-pressure profile are not self-consistent. This is the phenomenological approach that we will adopt when computing our toy model. As already explained, the intention is to demonstrate the influence of each parameter rather than fit for its values from matching data. As an illustration, the atmospheric temperature is assumed to be \(T=2700\) K, which is inspired by the Spitzer 4.5 \(\mu\)m brightness temperature reported by Demory et al. (2016). The elucidated concepts are qualitatively independent of this assumption.
### Scattering and absorption photospheres
The expression for the geometric albedo stated in equation (6) assumes a so-called "semi-infinite atmosphere", where the scattering optical depth spans the range from zero to infinity (Chandrasekhar, 1960; Hapke, 1981). Physically, this corresponds to a scattering atmosphere that transitions from being transparent to opaque, rather than the spatial distance going to infinity in one direction. To check this assumption, I examine the scattering photospheric pressure,
\[P_{\rm scat}\sim\frac{g}{\kappa_{\rm scat}}, \tag{9}\]
where \(\kappa_{\rm scat}\) is the scattering opacity. The top left panel of Figure 2 shows calculations of \(P_{\rm scat}\) for various assumed atmospheric compositions. At wavelengths relevant to the spectral energy distribution of 55 Cancri (\(\gtrsim 0.55~{}\mu\)m), the scattering optical depth becomes unity at \(\sim 0.1\) bar. This suggests that as long as atmospheric surface pressures \(\gtrsim 0.1\) bar are considered then the use of equation (6) is not unreasonable. Otherwise, one ignores multiple scattering as an approximation and uses equation (8) instead.
The top right panel of Figure 2 shows the single-scattering albedos. The absorption cross sections are shown in the bottom left panel of Figure 2. The (absorption) photospheric pressure,
\[P_{\rm photosphere}\sim\frac{g}{\kappa}, \tag{10}\]
is shown in the bottom right panel of Figure 2. The comparatively transparent spectral windows in between the absorption bands of CO mean that the single-scattering albedo easily reaches unity within these windows. It also means that \(P_{\rm photosphere}\gg 1\) bar within the same spectral windows, implying that starlight may easily reach the surface of an exoplanet with a \(\sim 0.1\) bar CO atmosphere.
By contrast, methane is a good absorber in the infrared range of wavelengths and does not have deep spectral windows in between its absorption bands. An atmosphere with even \(\sim 1\) mbar of methane may easily absorb infrared radiation from the surface of 55 Cancri e.
### Emission spectra
Figure 3: Emission spectra (wavelength-dependent secondary eclipse depths) for \(P=0.1\) bar, \(T=2700\) K and \(T_{s}=1965\) K. The gray dotted curves represent “bare rock” models with various assumed values for the geometric albedo of the rocky surface (\(A_{g,s}\)). The \(A_{g,s}=0\) curve essentially corresponds to a 1965 K blackbody. For Rayleigh scattering, \(A_{g}=0.77\) when \(\omega=1\). The black dashed curve corresponds to a 2700 K blackbody.
Figure 3 shows examples of emission spectra from 0.4 to 28 \(\mu\)m, which cover the range of wavelengths probed by CHEOPS, TESS, Spitzer and JWST. For illustration, these models assume an atmospheric surface pressure of \(P=0.1\) bar, an atmospheric temperature of \(T=2700\) K and a surface temperature of \(T_{s}=1965\) K. To guide our intuition, I first compute "bare rock" emission spectra where the rocky surface is assumed to have a constant geometric albedo \(A_{g,s}\). Even with \(A_{g,s}=1\), the secondary eclipse depth is only 28 ppm, although this depends on the assumed value of \(T_{s}\). The \(A_{g,s}=0\) curve essentially corresponds to a blackbody curve with a temperature of 1965 K.
All of the computed emission spectra have spectral features that are bracketed by the 1965 K and 2700 K blackbody curves. At wavelengths that are opaque to radiation, only the \(T=2700\) K atmosphere is visible to the observer. At wavelengths that are transparent to radiation, the observer sees the \(T_{s}=1965\) K rocky surface. These qualitative insights remain even when other values are assumed for \(T\) and \(T_{s}\).
As already anticipated, methane absorbs more strongly in the infrared range of wavelengths compared to CO and CO\({}_{2}\). Its emission spectrum is featureless because it probes the thin but opaque atmosphere. Methane scatters weakly in the optical/visible range of wavelengths with the total eclipse depth (which includes thermal emission) being less than 5 ppm for wavelengths shorter than 0.8 \(\mu\)m.
With its transparent spectral windows, carbon monoxide produces an emission spectrum rich with features that rise above the continuum by as much as 60 ppm. In the optical/visible range of wavelengths (0.4-0.8 \(\mu\)m), it produces a maximum eclipse depth of 22 ppm.
Carbon dioxide is intermediate between CO and CH\({}_{4}\), producing an emission spectrum that has a few spectral features and a maximum eclipse depth of 21 ppm from 0.4-0.8 \(\mu\)m. It is noted that these optical/visible eclipse depth estimates depend on the value of \(T\) assumed as hotter atmospheres contribute more thermal emission at these wavelengths and thus produce larger eclipse depths.
### Eclipse depths
To facilitate comparison to observations, the bandpass-integrated eclipse depth \(D\) needs to be computed for each model emission spectrum. Some attention to detail is needed when integrating equation (3) over wavelength as one needs to include the filter response function \(f_{\lambda}\) of the various space telescopes5. The thermal emission component of the eclipse depth is
Footnote 5: [http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)
\[D_{\rm th}=\left(\frac{R}{R_{\star}}\right)^{2}\frac{\int\left[B_{\lambda} \left(T_{s}\right)\mathcal{T}+\left(1-\mathcal{T}\right)B_{\lambda}\left(T \right)\right]S_{\lambda}\,d\lambda}{\int B_{\lambda}\left(T_{\star}\right)S_{ \lambda}\,d\lambda}, \tag{11}\]
where \(S_{\lambda}=\lambda f_{\lambda}\) for photon counters (CHEOPS) and \(S_{\lambda}=f_{\lambda}\) for energy counters (TESS and Spitzer). The reflected light component of the eclipse depth requires even more attention to detail, because a bandpass-integrated geometric albedo is similar6 to a Bond albedo (e.g., Heng et al., 2021) and requires an extra weighting factor of the stellar spectral energy distribution,
Footnote 6: Geometric and spherical albedos are wavelength-dependent quantities that are intrinsic to the scattering surface or atmosphere. They do not depend on the stellar properties. By contrast, the Bond albedo depends on both the intrinsic scattering properties _and_ the spectral energy distribution of the star. In other words, the exact same exoplanet orbiting stars of different spectral types will have different values of the Bond albedo.
\[D_{\rm r}=\left(\frac{R}{a}\right)^{2}\frac{\int A_{g}B_{\lambda}\left(T_{\star}\right)S_{\lambda}\,d\lambda}{\int B_{\lambda}\left(T_{\star}\right)S_{\lambda}\,d\lambda}. \tag{12}\]
The bandpass-integrated eclipse depth is \(D=D_{\rm th}+D_{\rm r}\).
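A possible implementation of the bandpass integration in equations (11) and (12) is sketched below; the wavelength grid, the filter response array, and the photon-counter flag are placeholders to be filled in for whichever instrument is being modelled, and the variable names are choices made here.

```
import numpy as np

def _integrate(y, x):
    """Trapezoidal integration over the wavelength grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def band_integrated_depth(lam, f_lam, thermal_num, F_star, A_g,
                          R_over_Rstar=0.0187, R_over_a=5.3e-3, photon_counter=True):
    """Bandpass-integrated eclipse depth D = D_th + D_r, equations (11)-(12).

    thermal_num : B(T_s)*T + (1 - T)*B(T) evaluated on the wavelength grid
    F_star      : stellar spectrum on the grid (blackbody or PHOENIX, intensity units)
    A_g         : wavelength-dependent geometric albedo on the grid
    """
    S = lam * f_lam if photon_counter else f_lam   # CHEOPS counts photons; TESS/Spitzer count energy
    norm = _integrate(F_star * S, lam)
    D_th = R_over_Rstar**2 * _integrate(thermal_num * S, lam) / norm
    D_r = R_over_a**2 * _integrate(A_g * F_star * S, lam) / norm
    return D_th + D_r
```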
There is a rich debate on how the treatment of the spectral energy distribution of the 55 Cancri star affects the interpretation of the secondary eclipses of 55 Cancri e (Crossfield, 2012). To check the sensitivity of the computed eclipse depths to this issue, the PHOENIX model spectrum of 55 Cancri used by Demory et al. (2023) and Meier Valdes et al. (2023) is used. The preceding expressions for \(D_{\rm th}\) and \(D_{\rm r}\) are generalised by substituting \(B_{\lambda}(T_{\star})\) with \(F_{\star}/\pi\), where \(F_{\star}\)
| Bandpass | Pure CO\({}_{2}\) | Pure CO | Pure CH\({}_{4}\) |
| --- | --- | --- | --- |
| CHEOPS (blackbody) | 14.8 | 21.4 | 4.0 |
| CHEOPS (SED) | 14.8 | 21.5 | 4.1 |
| TESS (blackbody) | 11.0 | 21.3 | 7.8 |
| TESS (SED) | 11.2 | 21.4 | 7.9 |
| Spitzer 4.5 \(\mu\)m (blackbody) | 101.5 | 88.6 | 101.4 |
| Spitzer 4.5 \(\mu\)m (SED) | 107.0 | 93.4 | 107.0 |

Table 1: Bandpass-integrated eclipse depths (ppm). Note: all of these estimates assume a surface pressure of \(P=0.1\) bar; "SED" means a PHOENIX spectrum of 55 Cancri was used to compute the eclipse depth.
Figure 4: Same as Figure 3, but for \(P=1\) mbar. Since this atmosphere is optically thin to Rayleigh scattering, the treatment of multiple scattering is ignored; see equation (8).
is the PHOENIX model spectrum (in flux, rather than intensity, units).
Table 1 states the computed eclipse depths in the CHEOPS, TESS and Spitzer 4.5 \(\mu\)m bandpasses. Only the Spitzer eclipse depths are sensitive to whether a blackbody or PHOENIX model is used for the spectral energy distribution of the star. The computed TESS eclipse depths are consistent with the \(15\pm 4\) ppm and \(8\pm 5\) ppm measurements reported by Meier Valdes et al. (2022). To within two standard deviations, the computed Spitzer eclipse depths are consistent with the \(154\pm 23\) ppm measurement of Demory et al. (2016) and the non-zero values reported in Table 4 of Tamburo et al. (2018). Also to within two standard deviations, the computed CHEOPS eclipse depths for pure CO or CO\({}_{2}\) atmospheres are consistent with almost all of the non-zero values shown in Figure 3 of Meier Valdes et al. (2023). A more definitive comparison requires JWST spectra to break the degeneracy in assumed composition that Spitzer data alone cannot provide.
### Change in infrared transit depth
The pressure corresponding to the wavelength-dependent transit chord is (Heng & Kitzmann, 2017)
\[P_{\rm transit} \sim P_{\rm photosphere}\sqrt{\frac{H}{R}}\] \[\approx 0.055P_{\rm photosphere}\left(\frac{T}{2700\ {\rm K}}\right)^{1/2}\left(\frac{m}{28\ {\rm amu}}\right)^{-1/2}, \tag{13}\]
where \(H\) is the (isothermal) pressure scale height and the gravity and exoplanetary radius are taken from Demory et al. (2016). The preceding estimate focuses on CO, because it has a larger pressure scale height compared to that of CO\({}_{2}\) and the photospheric pressure is as low as \(P_{\rm photosphere}\sim 0.1\ \mu\)bar (bottom right panel of Figure 2). The pressure associated with the transit chord thus reaches as high in altitude as \(\sim 1\) nbar. For a \(P=0.1\) bar atmosphere, the difference between the transit radius and the surface of the exoplanet is about \(\delta\approx 18H\). This corresponds to a _maximum_ change in transit depth of \(2R\delta/R_{\star}^{2}\approx 38\) ppm, which is not inconsistent with the variation in the Spitzer 4.5 \(\mu\)m transit depths reported by Demory et al. (2016) and Tamburo et al. (2018).
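The numbers quoted in this estimate can be reproduced with a short calculation; a sketch in cgs units, where the factor of 18 scale heights corresponds to the pressure range from 0.1 bar down to \(\sim 1\) nbar:

```
k_B, amu = 1.381e-16, 1.661e-24                 # Boltzmann constant, atomic mass unit (cgs)
T, m, g = 2700.0, 28.0 * amu, 2138.0            # CO atmosphere at the 4.5 micron brightness temperature
R, R_star = 1.91 * 6.371e8, 0.958 * 6.957e10    # planetary and stellar radii [cm]

H = k_B * T / (m * g)                           # isothermal pressure scale height
delta = 18.0 * H                                # ~ln(0.1 bar / 1 nbar) scale heights
dD = 2.0 * R * delta / R_star**2                # maximum change in transit depth
print(f"H ~ {H/1e5:.0f} km, max transit-depth change ~ {dD*1e6:.0f} ppm")
```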
## 4 Discussion
I now suggest an alternative interpretation of the observations reported by Demory et al. (2016, 2023), Tamburo et al. (2018) and Meier Valdes et al. (2023). When the optical/visible secondary eclipse depths are consistent with being zero, this is interpreted as the observations probing mainly the bare rocky surface of the dayside of 55 Cancri e. To be consistent with zero optical/visible eclipse depth, the geometric albedo of the surface must be close to zero.
Stochasticity in the outgassing and atmospheric escape fluxes plausibly leads to fluctuations in the global spatial distribution of the atmosphere (which I have not modelled) and the atmospheric temperature. As the outgassed atmosphere starts to accumulate on the dayside, this increases the optical/visible eclipse depth because of Rayleigh scattering of starlight _and_ atmospheric thermal emission. In this scenario, pure CH\({}_{4}\) atmospheres are ruled out because they produce insufficient Rayleigh scattering, although such atmospheres would easily produce a featureless blackbody spectrum in the infrared. CO and CO\({}_{2}\) atmospheres are also opaque at 4.5 \(\mu\)m and the emission spectrum at this wavelength probes the \(T=2700\) K blackbody curve.
The fluctuating temperatures shift the upper envelope of the infrared emission spectrum, shown in Figure 3, up and down. This produces variable infrared eclipse depths that are consistent with those reported by Demory et al. (2016) and Tamburo et al. (2018). It is plausible that the changing atmosphere also causes erratic fluctuations in the optical/visible phase curves, which have been observed by CHEOPS (Meier Valdes et al., 2023). The infrared eclipse depth never becomes zero in this scenario, because even in the absence of an atmosphere it probes the rocky surface.
Upcoming JWST observations will potentially be able to distinguish between different atmospheric chemistries. If the atmospheric surface pressure is less than 0.1 bar (see Figure 4 for computed emission spectra corresponding to \(P=1\) mbar), then the spectral features will become more distinct. However, the spectral features may easily be muted by the presence of clouds, hazes or condensates. Nevertheless, fitting even a featureless spectrum to a blackbody curve will allow one to infer the atmospheric temperature, independent of the surface pressure. If spectral features are observed, then fitting a blackbody curve to the lowest fluxes will allow one to infer the surface temperature. A model fit should be performed within a Bayesian retrieval framework with the fitting parameters being the atmospheric temperature (\(T\)), surface pressure (\(P\)) and surface temperature (\(T_{s}\)).
The current model makes falsifiable predictions that may be tested by simultaneous optical/visible and JWST observations. _It is crucial that these observations are taken at the same time, because the atmospheric escape and radiative adjustment timescales are expected to be shorter than the orbital period._ When 55 Cancri e is in its "bare rock" phase, the optical/visible eclipse depth should be close to zero while the infrared emission spectrum should probe the temperature of the rocky surface. When an outgassed atmosphere is present on the dayside, the optical/visible eclipse depth probes Rayleigh scattering by the atmosphere while the infrared emission spectrum should probe the atmospheric temperature and composition. In between these two phases, the infrared and optical/visible eclipse depths are expected to fluctuate as the atmospheric temperature adjusts to radiative equilibrium (from its original outgassed value).
It is conceivable that the molecules of the outgassed atmosphere will eventually be broken up into their constituent
atoms and ions (e.g., carbon and oxygen), which may be observable as exospheric species via ultraviolet spectroscopy.
|
2304.11817 | Active Probing and Influencing Human Behaviors Via Autonomous Agents | Autonomous agents (robots) face tremendous challenges while interacting with
heterogeneous human agents in close proximity. One of these challenges is that
the autonomous agent does not have an accurate model tailored to the specific
human that the autonomous agent is interacting with, which could sometimes
result in inefficient human-robot interaction and suboptimal system dynamics.
Developing an online method to enable the autonomous agent to learn information
about the human model is therefore an ongoing research goal. Existing
approaches position the robot as a passive learner in the environment to
observe the physical states and the associated human response. This passive
design, however, only allows the robot to obtain information that the human
chooses to exhibit, which sometimes doesn't capture the human's full intention.
In this work, we present an online optimization-based probing procedure for the
autonomous agent to clarify its belief about the human model in an active
manner. By optimizing an information radius, the autonomous agent chooses the
action that most challenges its current conviction. This procedure allows the
autonomous agent to actively probe the human agents to reveal information
that's previously unavailable to the autonomous agent. With this gathered
information, the autonomous agent can interactively influence the human agent
for some designated objectives. Our main contributions include a coherent
theoretical framework that unifies the probing and influence procedures and two
case studies in autonomous driving that show how active probing can help to
create better participant experience during influence, like higher efficiency
or less perturbations. | Shuangge Wang, Yiwei Lyu, John M. Dolan | 2023-04-24T04:43:11Z | http://arxiv.org/abs/2304.11817v1 | # Active Probing and Influencing Human Behaviors Via Autonomous Agents
###### Abstract
Autonomous agents (robots) face tremendous challenges while interacting with heterogeneous human agents in close proximity. One of these challenges is that the autonomous agent does not have an accurate model tailored to the specific human that the autonomous agent is interacting with, which could sometimes result in inefficient human-robot interaction and suboptimal system dynamics. Developing an online method to enable the autonomous agent to learn information about the human model is therefore an ongoing research goal. Existing approaches position the robot as a passive learner in the environment to observe the physical states and the associated human response. This passive design, however, only allows the robot to obtain information that the human chooses to exhibit, which sometimes doesn't capture the human's full intention. In this work, we present an online optimization-based probing procedure for the autonomous agent to clarify its belief about the human model in an active manner. By optimizing an information radius, the autonomous agent chooses the action that most challenges its current conviction. This procedure allows the autonomous agent to actively probe the human agents to reveal information that's previously unavailable to the autonomous agent. With this gathered information, the autonomous agent can interactively influence the human agent for some designated objectives. Our main contributions include a coherent theoretical framework that unifies the probing and influence procedures and two case studies in autonomous driving that show how active probing can help to create better participant experience during influence, like higher efficiency or less perturbations.
## I Introduction
It is imperative for robots to behave reactively in a human-present environment because all safety specifications ought to be met. An autonomous vehicle, for instance, should yield to a human vehicle trying to nudge in front of it [1, 2]; a reconnaissance drone should avoid adversarial behaviors. Robots, however, are usually not designed to behave purely in a reactive manner because it makes them too conservative. Consider a scenario of autonomous driving (Fig. 1) where the human vehicle is traveling in the outer lane (lower), but at a fast enough speed that it's more efficient to switch to the inner lane (upper). Many human drivers don't have the awareness to switch lanes because they are usually egocentric, even subconsciously, in that they would rather remain in their current lane unless blocked by some other vehicles. Such human egocentricity and strict infrastructure preconditions render purely communication-based approaches, like vehicle signaling or V2X [3, 4, 5, 6, 7], fruitless in addressing these inefficiencies. Some works, therefore, proposed interaction-based approaches, like game-theoretical influence [8, 9], wave stabilization [10, 11, 12, 13], and herding [14, 15, 16, 17, 18, 19], that use autonomous agents to influence human agents physically. In Fig. 1, for instance, the autonomous vehicle would block the fast human vehicle, influencing it to drive in the inner lane.
Since such influence is exerted in close proximity, the autonomous agent needs an accurate human model. Although generally reasonable, models produced from offline techniques may not capture characteristics specific to the human agent with whom the robot interacts closely. For instance, in Fig. 1, the autonomous vehicle needs a fairly precise estimate of the human vehicle's desired travel velocity, which differs from one human driver to another.
Existing online approaches tackle this problem by positioning the autonomous agent as a passive observer, in which it observes the environmental states and their associated human response and then chooses the model that best explains this correlation. The issue with this design is that the autonomous agent is passive, so it only has access to information that the human agent chooses to exhibit. Hence, the autonomous agent can only make decisions based on the human information that's readily available. In Fig. 1 for instance, a passive autonomous vehicle would presume the human vehicle intends to travel at most as fast as itself, whereas in reality the human could want to drive faster, only to be blocked by the autonomous vehicle.
In this work, we enable autonomous agents to leverage their actions to estimate the human internal model by actively interacting with the human to reveal more information. Rather than relying on passive observations, the autonomous agent can account for the fact that the human will react to its actions, so the autonomous agent can "probe", i.e., select actions that will trigger human reactions that will best challenge its initial belief. By probing iteratively, the
Fig. 1: Both vehicles currently travel to the right in the outer lane (lower). Autonomous vehicle (yellow) intends to influence human vehicle (orange) with intention to drive fast to inner lane (upper).
autonomous agent converges to an increasingly accurate human model. Then, based on the probed information, the autonomous agent can actively influence other agents for some designated objectives, like higher efficiency or better driving experience. We propose our approach under some very mild assumptions, making it transferable to various human-robot interaction scenarios. Our key contributions include: 1) a coherent theoretical framework that unifies the probing and influencing procedures; 2) a proven solvable trajectory-planning optimization; 3) two case studies as application examples in the domain of autonomous driving with numerical simulations used to demonstrate the precision of probing results and efficacy in creating better participant driving experience during influence.
## II Related Work
To exert influence on humans, an autonomous agent would have to interact with different human agents in close proximity, who are heterogeneous agents that differ significantly in their internal models, to which the autonomous agent does not have direct access [20, 21, 22]. Such an internal model might characterize human's intentions, preferences, objectives, strategies, etc. Works in robotics and perception have focused on estimating these internal models using algorithms based on observations of human's actions, such as intent-driven behavior prediction [23, 24, 25, 26, 27, 28, 29, 30], inverse reinforcement learning (IRL) [31, 32, 33, 34, 35], hidden model prediction [36, 37], affective state estimation [38], and activity recognition [39]. Although the human model derived from the above methods performs generally reasonably, it might not capture specific characteristics of the human agent that the autonomous agent is interacting with. The autonomous agent, therefore, needs an online procedure to learn the model specially tailored to the human agent that the autonomous agent is interacting with.
Some online approaches frame this problem as a Partially Observable Markov Decision Process (POMDP) [40, 41, 42], in which the autonomous agent parameterizes the human's intent through a model, inferred through Markovian or Bayesian estimation of the hidden parameters of the internal models from observations of the physical states of the world [43, 44, 45, 46]. In this paradigm, the autonomous agent is mainly a reactive agent in the environment to observe, which sacrifices the robot's agency to initiate action to actively reveal information about the human.
Some existing works enable active probing for interactive motion planning by incorporating a heuristic active information gathering objective, e.g., information entropy, into the autonomous agent's trajectory optimization framework for human value function parameter estimation [47, 48]. Building upon this work, we allow the autonomous agent to optimize the information radius, i.e., the cohesion between two beliefs, relative to its latest belief of the human model. Then, instead of having a fixed reference belief as in [48], the autonomous agent aims to maximize the information radius relative to a dynamic reference, its current belief, at every time iteration.
## III Theory
### _Human-Robot Joint Dynamics_
For all notations below, we use subscripts to denote the time step and superscripts to capture the attributes' ownership (human or robot). In a human-robot joint system, we define the state vector as \(s_{t}\in\mathbb{R}^{n}\), the robot's input vector as \(u_{t}^{\mathcal{R}}\in\mathbb{U}^{\mathcal{R}}\subseteq\mathbb{R}^{m^{\mathcal{R}}}\), confined to admissible control space \(\mathbb{U}^{\mathcal{R}}\), the human's input vector as \(u_{t}^{\mathcal{H}}\in\mathbb{U}^{\mathcal{H}}\subseteq\mathbb{R}^{m^{\mathcal{H}}}\), confined to admissible control space \(\mathbb{U}^{\mathcal{H}}\), and finally the discrete-time control-affine dynamics of the joint system as
\[s_{t+1}=f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{\mathcal{H}}(s_{ t})u_{t}^{\mathcal{H}} \tag{1}\]
where \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) captures the non-linear autonomous dynamics and \(M^{\mathcal{R}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m^{\mathcal{R}}}\) and \(M^{\mathcal{H}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m^{\mathcal{H}}}\) are state-dependent input transformation matrices for robot and human respectively [47].
### _Belief Update_
The autonomous agent possesses a belief of \(\varphi\) that characterizes the human agent's utility function \(r_{\varphi}^{\mathcal{H}}:\mathbb{R}^{n}\rightarrow\mathbb{R}\). For driving scenarios, a typical \(\varphi\) could characterize the desired velocity of the human vehicle, and a typical \(r_{\varphi}^{\mathcal{H}}\) would include features like safety and speed. We generalize the autonomous agent's belief by proposing a non-parametric representation which can approximate a wider range of distributions. At time \(t\), belief of \(\varphi\) is defined as \(bel_{t}\) with finite domain space \(\Phi\).
The autonomous agent updates this belief via a particle-filtering recursion [49]
\[bel_{t+1}(\varphi)\propto bel_{t}(\varphi)\cdot p(u_{t}^{\mathcal{H}}|s_{t},u_ {t}^{\mathcal{R}},\varphi),\;\forall\varphi\in\Phi \tag{2}\]
where the conditional probability is obtained through a softmax operation based on the Boltzmann model of exponential likeliness of human actions with greater utility [29, 34]
\[p(u_{t}^{\mathcal{H}}|s_{t},u_{t}^{\mathcal{R}},\varphi)=\frac{e^{r_{\varphi}^{\mathcal{H}}\left(f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{\mathcal{H}}(s_{t})u_{t}^{\mathcal{H}}\right)}}{\sum_{\tilde{u}_{t}^{\mathcal{H}}\in\mathbb{U}^{\mathcal{H}}}e^{r_{\varphi}^{\mathcal{H}}\left(f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{\mathcal{H}}(s_{t})\tilde{u}_{t}^{\mathcal{H}}\right)}} \tag{3}\]
in which \(\mathbb{U}^{\mathcal{H}}\) is discretized for softmax normalization. The complete belief update algorithm is shown in algorithm 1.
```
Input:\(bel_{t},s_{t},u_{t}^{\mathcal{R}},u_{t}^{\mathcal{H}}\)
1:\(\eta\gets 0\)
2:for all\(\varphi\in\Phi\)do
3:\(r\gets e^{r_{\varphi}^{\mathcal{H}}\left(f(s_{t})+M^{\mathcal{R}}(s_{t})u_ {t}^{\mathcal{R}}+M^{\mathcal{H}}(s_{t})u_{t}^{\mathcal{H}}\right)}\) {Boltzmann}
 4:\(\tilde{r}\leftarrow\sum_{\tilde{u}_{t}^{\mathcal{H}}\in\mathbb{U}^{\mathcal{H}}}e^{r_{\varphi}^{\mathcal{H}}\left(f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{\mathcal{H}}(s_{t})\tilde{u}_{t}^{\mathcal{H}}\right)}\)
5:\(bel_{t+1}(\varphi)\gets bel_{t}(\varphi)\cdot\frac{r}{\tilde{r}}\) {belief update}
6:\(\eta\leftarrow\eta+bel_{t+1}(\varphi)\)
7:endfor
8:for all\(\varphi\in\Phi\)do
9:\(bel_{t+1}(\varphi)\leftarrow\frac{bel_{t+1}(\varphi)}{\eta}\) {belief normalization}
10:endfor
 11:return \(bel_{t+1}\)
```
**Algorithm 1** Belief Update
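A minimal NumPy sketch of Algorithm 1 follows; the human utility \(r_{\varphi}^{\mathcal{H}}\), the joint dynamics of equation (1), and the discretized human control set are passed in as callables and arrays, and all names are chosen here for illustration.

```
import numpy as np

def belief_update(bel, s, u_R, u_H, Phi, U_H, reward, step):
    """One particle-filter belief update, Algorithm 1 / equations (2)-(3).

    bel    : NumPy array of weights over the finite parameter set Phi
    reward : callable reward(s_next, phi) giving the human utility r_phi^H
    step   : callable step(s, u_R, u_H) implementing the joint dynamics (1)
    U_H    : discretized set of admissible human controls
    """
    new_bel = np.empty_like(bel)
    for i, phi in enumerate(Phi):
        num = np.exp(reward(step(s, u_R, u_H), phi))                       # Boltzmann numerator
        den = sum(np.exp(reward(step(s, u_R, u_h), phi)) for u_h in U_H)   # softmax normalizer
        new_bel[i] = bel[i] * num / den                                    # equation (2)
    return new_bel / new_bel.sum()                                         # normalization
```

In the driving scenarios of Section IV, `Phi` would simply be the grid of 30 candidate desired velocities or headways, with `bel` initialized uniformly.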
### _Probing_
The motivation behind probing is to allow the autonomous agent to actively interact with the human agent to reveal more information that was previously unavailable, meaning that the autonomous agent should choose actions that best challenge its current belief at every time step. Quantitatively, the autonomous agent chooses actions that maximize the information radius between its current belief and the projected belief if such actions are to be executed.
We use Jensen-Shannon divergence (JSD) as a measure of information radius to quantify the cohesion between two beliefs, \(bel_{a}\) and \(bel_{b}\)[50, 51]
\[D_{\mathrm{JS}}[bel_{a},bel_{b}]=\frac{D_{\mathrm{KL}}\left[bel_{a}:\overline{bel }_{a,b}\right]+D_{\mathrm{KL}}\left[bel_{b}:\overline{bel}_{a,b}\right]}{2} \tag{4}\]
where \(D_{\mathrm{KL}}\) is the Kullback-Leibler divergence (KLD) [52, 53] and \(\overline{bel}_{a,b}\) is the arithmetic mixture of \(bel_{a}\) and \(bel_{b}\)
\[D_{\mathrm{KL}}[bel_{a}:\overline{bel}_{a,b}]=\underset{\varphi\sim bel_{a}}{ \mathbb{E}}\log\left(\frac{2\cdot bel_{a}(\varphi)}{bel_{a}(\varphi)+bel_{b}( \varphi)}\right) \tag{5}\]
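Over the finite support \(\Phi\), the information radius of equations (4)–(5) reduces to a few array operations; a minimal sketch (beliefs are assumed to be strictly positive NumPy arrays, as guaranteed by condition (8)):

```
import numpy as np

def kl(p, q):
    """Discrete Kullback-Leibler divergence D_KL[p : q]."""
    return float(np.sum(p * np.log(p / q)))

def jsd(bel_a, bel_b):
    """Jensen-Shannon divergence between two beliefs, equations (4)-(5)."""
    mix = 0.5 * (bel_a + bel_b)
    return 0.5 * (kl(bel_a, mix) + kl(bel_b, mix))
```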
At state \(s_{t}\), the autonomous agent predicts how the human agent, characterized by \(\varphi\), will react to its action \(u_{t}^{\mathcal{R}}\) using
\[Q(s_{t},u_{t}^{\mathcal{R}},\varphi)=\underset{\tilde{u}_{t}^{\mathcal{H}}\in\mathbb{U}^{\mathcal{H}}}{\arg\max}\ r_{\varphi}^{\mathcal{H}}(f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{\mathcal{H}}(s_{t})\tilde{u}_{t}^{\mathcal{H}}) \tag{6}\]
We solve the probing problem using Model Predictive Control (MPC), in which the autonomous agent chooses a sequence of actions that optimizes the JSD between the current belief and the projected belief at finite horizon \(T\)
\[\underset{u_{0:T-1}^{\mathcal{R}}}{\max} \underset{\varphi\sim bel_{0}}{\mathbb{E}}\sum_{t=0}^{T-1}D_{ \mathrm{JS}}[bel_{0},bel_{t+1}]-D_{\mathrm{JS}}[bel_{0},bel_{t}]\] (7a) s.t. \[s_{0}=s_{t},bel_{0}=bel_{t} \tag{7b}\] \[u_{t}^{\mathcal{H}}=Q(s_{t},u_{t}^{\mathcal{R}},\varphi)\] (7c) \[s_{t+1}=f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{ \mathcal{H}}(s_{t})u_{t}^{\mathcal{H}}\] (7d) \[bel_{t+1}(\varphi)\propto bel_{t}(\varphi)\cdot p(u_{t}^{\mathcal{ H}}|s_{t},u_{t}^{\mathcal{R}},\varphi) \tag{7e}\]
To ensure solvability, we prove that \(D_{\mathrm{JS}}\), which maps to \([0,\infty)\) in theory, is upper-bounded in optimization (7).
Proof.: **Boundedness**:
We first make a slight assumption that \(bel_{0}\) is bounded and has compact support, hence
\[\sup_{\varphi\in\Phi}bel_{0}(\varphi)<\infty\wedge\inf_{\varphi\in\Phi}bel_{0 }(\varphi)>0 \tag{8}\]
which helps to substantiate the boundedness of KLD [54]. We will initialize the belief such that condition (8) is satisfied in section IV.
We hypothesize inductively that \(\forall a\in\{0,\ldots,T-1\}\), \(\sup_{\varphi\in\Phi}bel_{a}(\varphi)<\infty\). Since \(p(u_{t}^{\mathcal{H}}|s_{t},u_{t}^{\mathcal{R}},\varphi)\) maps to an image of \((0,1)\), using condition (8) as base case, we have
\[\sup_{\varphi\in\Phi}bel_{a}(\varphi)<1<\infty,\ \forall a\in\{0,\ldots,T\} \tag{9}\]
By a similar induction technique, we have
\[\inf_{\varphi\in\Phi}bel_{a}(\varphi)>0,\ \forall a\in\{0,\ldots,T\} \tag{10}\]
Hence, we have extended condition (8) to
\[\sup_{\varphi\in\Phi}bel_{a}(\varphi)<\infty\wedge\inf_{\varphi\in\Phi}bel_{a }(\varphi)>0,\ \forall a\in\{0,\ldots,T\} \tag{11}\]
Therefore, \(\forall a\in\{0,\ldots,T\}\), \(\exists\bar{s}=\sup_{\varphi\in\Phi}bel_{a}(\varphi)\) such that \(0<\bar{s}<\infty\). Similarly, \(\forall a,b\in\{0,\ldots,T\}\), \(\exists\,\mathbf{i}=\inf_{\varphi\in\Phi}\left(bel_{a}(\varphi)+bel_{b}(\varphi)\right)\) such that \(0<\mathbf{i}<\infty\).
Therefore, by equation (5), we have \(\forall a,b\in\{0,\ldots,T\}\)
\[D_{\mathrm{KL}}[bel_{a}:\overline{bel}_{a,b}] =\underset{\varphi\sim bel_{a}}{\mathbb{E}}\log\left(\frac{2\cdot bel _{a}(\varphi)}{bel_{a}(\varphi)+bel_{b}(\varphi)}\right) \tag{12}\] \[\leq\underset{\varphi\sim bel_{a}}{\mathbb{E}}\sup_{\varphi\in \Phi}\log\left(\frac{2\cdot bel_{a}(\varphi)}{bel_{a}(\varphi)+bel_{b}(\varphi)}\right)\] \[\leq\log(2\cdot\bar{s})-\log(\mathbf{i})<\infty\]
By symmetry, \(D_{\mathrm{KL}}[bel_{b}:\overline{bel}_{a,b}]<\infty\) can be easily proved using the same technique, which together concludes the boundedness of JSD.
We adopt a dynamic-programming-based approach to optimize equation (7), while other methods, such as quasi-Newton approaches like the BFGS algorithm [55, 56, 57, 58], are also applicable. Although the computational complexity grows exponentially with respect to the state dimension, the high parallelizability of equations (7d) and (7e) can attenuate the curse of dimensionality. Moreover, we argue that successfully reasoning about human-robot interactions over a short horizon does not require a full-fidelity model of the joint dynamics, so highly informative insights can still be obtained tractably via approximation. We define the value function of executing \(n\) consecutive controls starting from time \(k\) as
\[V(k,n)=\underset{\varphi\sim bel_{0}}{\mathbb{E}}\sum_{t=k}^{k+n-1}D_{ \mathrm{JS}}[bel_{0},bel_{t+1}]-D_{\mathrm{JS}}[bel_{0},bel_{t}] \tag{13}\]
The value function on the horizon therefore satisfies
\[V(0,T) =\underset{\varphi\sim bel_{0}}{\mathbb{E}}\sum_{t=0}^{k-1}D_{ \mathrm{JS}}[bel_{0},bel_{t+1}]-D_{\mathrm{JS}}[bel_{0},bel_{t}] \tag{14}\] \[+\underset{\varphi\sim bel_{0}}{\mathbb{E}}\sum_{t=k}^{T-1}D_{ \mathrm{JS}}[bel_{0},bel_{t+1}]-D_{\mathrm{JS}}[bel_{0},bel_{t}]\] \[=V(0,k)+V(k,T-k),\ \forall k\in\{0,\ldots,T\}\]
which shows that the path-dependency fits a Bellman optimality equation [59]. The optimal value function and control policy can therefore be obtained in polynomial time by backtracking the Hamilton-Jacobi-Bellman (HJB) equation [60]
\[V(t,T-t)=\max_{u_{t}^{\mathcal{R}}\in\mathbb{U}^{\mathcal{R}}}\left\{V(t,1)+V(t+1,T-t-1)\right\} \tag{15}\]
Following this policy, the autonomous agent interactively probes the human agent and gradually converges its belief until the change in JSD falls below a threshold. The autonomous agent then chooses \(\hat{\varphi}\), which could be a linear combination of all \(\varphi\in\Phi\) weighted by their \(bel(\varphi)\) or simply the most likely \(\varphi\in\Phi\), as the human model parameter.
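For small discretized control sets and short horizons, the receding-horizon probing policy of equations (7) and (13)–(15) can also be prototyped by exhaustive search over robot action sequences. The sketch below assumes the `belief_update` and `jsd` helpers from the earlier snippets, and it uses the fact that, along a fixed rollout, the objective in (7a) telescopes to the JSD between the initial and final predicted beliefs; the dynamic-programming recursion (15) is the more scalable alternative.

```
import itertools
import numpy as np

def probing_action(bel0, s0, Phi, U_R, U_H, reward, step, T=3):
    """First robot control of the length-T sequence maximizing the expected JSD
    gain of equation (7), found by brute force over discretized robot controls.
    bel0 is a NumPy array of weights over Phi."""

    def best_response(s, u_R, phi):                     # human best response, equation (6)
        return max(U_H, key=lambda u_h: reward(step(s, u_R, u_h), phi))

    best_val, best_u = -np.inf, None
    for seq in itertools.product(U_R, repeat=T):
        val = 0.0
        for i, phi in enumerate(Phi):                   # expectation over phi ~ bel0
            s, bel = s0, bel0.copy()
            for u_R in seq:                             # predicted closed-loop rollout
                u_H = best_response(s, u_R, phi)
                bel = belief_update(bel, s, u_R, u_H, Phi, U_H, reward, step)
                s = step(s, u_R, u_H)
            val += bel0[i] * jsd(bel0, bel)             # telescoped objective (7a)
        if val > best_val:
            best_val, best_u = val, seq[0]
    return best_u
```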
### _Influence_
We characterize an influence as a sequence of atomic objectives, each with a utility function, that accounts for a major influence if all executed in order, and we delegate the responsibility of planning these atomic objectives to some high-level planner. For each objective, we incorporate \(\hat{\varphi}\) into the utility function for both robot and human.
\[\max_{u_{0:T-1}^{\mathcal{R}}} \sum_{t=0}^{T-1}r_{\hat{\varphi}}^{\mathcal{R}}(s_{t+1})\] (16a) s.t. \[s_{0}=s_{t} \tag{16b}\] \[u_{t}^{\mathcal{H}}=Q(s_{t},u_{t}^{\mathcal{R}},\hat{\varphi})\] (16c) \[s_{t+1}=f(s_{t})+M^{\mathcal{R}}(s_{t})u_{t}^{\mathcal{R}}+M^{ \mathcal{H}}(s_{t})u_{t}^{\mathcal{H}} \tag{16d}\]
Similarly, this optimization problem can be solved using HJB recursion in polynomial time.
## IV Simulation
In this section, we present two car-following-based scenarios in which probing and influencing can be used to facilitate better participant experience and optimality for human drivers. Both scenarios start with the human vehicle following the autonomous vehicle.
### _Ground Truth_
To generate the ground truth trajectories for the human-driven vehicle, we use the intelligent driver model (IDM) [61, 62, 63], which is known to accurately imitate human driving behaviors.
\[u^{\mathcal{H}}=u_{\max}\left[1-\left(\frac{v^{\mathcal{H}}}{v_{\mathrm{des} }}\right)^{4}-\left(\frac{d_{\mathrm{des}}}{x^{\mathcal{R}}-x^{\mathcal{H}}} \right)^{2}\right] \tag{17}\]
in which
\[d_{\mathrm{des}}=d_{\min}+\tau_{\mathrm{gap}}\cdot v^{\mathcal{H}}-\frac{v^{ \mathcal{H}}\cdot(v^{\mathcal{H}}-v^{\mathcal{R}})}{2\sqrt{a_{\max}\cdot b_{ \mathrm{pref}}}} \tag{18}\]
where superscripted quantities are state variables of the system dynamics and subscripted quantities are constant parameters. We assume that humans maintain their driving style, so the constant parameters are static over time. Without loss of generality, we also use IDM to model the other background vehicles in the environment. We simulate all vehicles as point masses.
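As a rough sketch, equations (17)-(18) can be evaluated as below; we read the \(a_{\max}\) appearing in (18) as the same maximum acceleration \(u_{\max}\) used in (17), and the default parameter values are those reported later for Scenario 1, so both are assumptions for illustration.

```python
import math

def idm_acceleration(x_h, v_h, x_r, v_r,
                     u_max=0.73, b_pref=1.67, v_des=25.0,
                     tau_gap=1.5, d_min=2.0):
    """Intelligent driver model, equations (17)-(18).

    x_h, v_h: position and velocity of the (following) human vehicle.
    x_r, v_r: position and velocity of the (leading) autonomous vehicle.
    """
    # Desired headway (18); the braking term uses the maximum acceleration and
    # the preferred braking deceleration b_pref.
    d_des = d_min + tau_gap * v_h \
        - (v_h * (v_h - v_r)) / (2.0 * math.sqrt(u_max * b_pref))
    gap = x_r - x_h
    # IDM acceleration (17): free-road term minus interaction term.
    return u_max * (1.0 - (v_h / v_des) ** 4 - (d_des / gap) ** 2)
```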
### _Exploitation and Exploration_
To balance exploitation and exploration, the autonomous vehicle alternates between \(5\,\mathrm{s}\) of passive observation and \(5\,\mathrm{s}\) of active probing. We also set the MPC horizon to \(5\,\mathrm{s}\). Thanks to the boundedness of JSD, we can add a safety objective, \(\lambda\cdot r_{safe}^{\mathcal{R}}(s_{t+1})\), to the autonomous agent's optimization to enforce safety features, and we choose \(\lambda\) empirically.
### _Human Model_
The autonomous vehicle models the human underlying utility using a combination of features, namely desired headway and desired velocity. For each scenario, we choose \(|\Phi|=30\) such that each \(\varphi\in\Phi\) maps to a distinct desired velocity or desired headway, and we initialize them to a uniform distribution, which satisfies condition (8).
### _Scenario 1: Influence fast drivers to switch lane_
Consider a two-lane highway (Fig. 1(a)) with an inner lane (left) and an outer lane (right). Here, we cause the autonomous vehicle to actively probe the desired velocity of the human vehicle. If the human vehicle exhibits the intention to travel at a high velocity, the autonomous vehicle will perform a series of maneuvers to help the human vehicle merge to the inner lane in the widest gap between the background vehicles. While approaching the widest gap, the autonomous vehicle slows down to block the human vehicle (Fig. 1(b)), and the human vehicle switches lanes shortly after that (Fig. 1(c)).
We choose the IDM parameters as \(u_{\max}=0.73\,\mathrm{m/s^{2}}\), \(b_{\mathrm{pref}}=1.67\,\mathrm{m/s^{2}}\), \(v_{\mathrm{des}}=25\,\mathrm{m/s}\), \(\tau_{\mathrm{gap}}=1.5\,\mathrm{s}\), and \(d_{\min}=2\,\mathrm{m}\). We start the car-following scenario with a relative headway of \(100\,\mathrm{m}\) and with the autonomous and human vehicles both traveling at \(20\,\mathrm{m/s}\). We also include a passive observation approach as a baseline. Fig. 3 shows snapshots of the belief under the two approaches, taken every \(10\,\mathrm{s}\).
By \(50\,\mathrm{s}\), the active approach's belief peaks at \(\varphi_{19}\), which maps to a desired velocity of \(23.56\,\mathrm{m/s}\), close to the ground-truth IDM parameter \(v_{\mathrm{des}}=25\,\mathrm{m/s}\). In comparison, the passive observation baseline peaks at \(\varphi_{16}\), which maps to \(19.86\,\mathrm{m/s}\), far from the ground truth. This is because the passive approach performs no exploration to trigger a human reaction, so the autonomous vehicle assumes the human vehicle intends to travel only as fast as itself.
Leveraging the probed information, the autonomous vehicle can set a cutoff, for instance \(23\,\mathrm{m/s}\) in our simulation, and influence human drivers with a high desired velocity to drive
Fig. 2: Phase 1: Autonomous vehicle maintains velocity. Phase 2: Autonomous vehicle brakes to block the human vehicle. Phase 3: Human vehicle merges due to blocking. All vehicles travel upwards.
in the inner lane. According to Fig. 4, the influence brought about a 20.04% increase in the human vehicle's velocity. The passive approach, by contrast, cannot initiate the influence procedure at all: because it never elicits information that the human is not already providing, the autonomous vehicle becomes increasingly, and wrongly, convinced that the human vehicle intends to travel only as fast as \(19.86\,\mathrm{m/s}\). According to Fig. 5, the influence introduces a bounded perturbation on the background vehicles, about \(15.68\,\mathrm{m/s}\) of cumulative absolute control on average, which could be easily attenuated by autonomous vehicles using flow-stopper techniques [10, 64].
### _Scenario 2: Helping human to switch lane_
Consider a scenario like Fig. 5(a), in which the lane the autonomous and human vehicles currently occupy is about to end, either due to traffic, construction, or a lane merge. Both vehicles, therefore, have to switch to the left lane, which is occupied by some background vehicles. Assume the headway gaps between the background vehicles are too narrow for a human driver to merge into at such a high speed. Fortunately, autonomous vehicles are capable of performing such a lane change. The autonomous vehicle, therefore, helps the human vehicle to switch lanes by first probing the desired headway of the human vehicle around a specific velocity, in this case \(20\,\mathrm{m/s}\). The autonomous vehicle will then switch lanes and slow down to create a sufficient gap based on the probed headway (Fig. 5(b)). Finally, the human vehicle can merge into the lane with ease (Fig. 5(c)).
We choose the IDM parameters as \(u_{\max}=0.73\,\mathrm{m/s^{2}}\), \(b_{\mathrm{pref}}=1.67\,\mathrm{m/s^{2}}\), \(v_{\mathrm{des}}=20\,\mathrm{m/s}\), \(\tau_{\mathrm{gap}}=1.5\,\mathrm{s}\), and \(d_{\min}=2\,\mathrm{m}\). Similarly, we initialize the road to the same
Fig. 4: Velocity Deviation
Fig. 5: Cumulative Absolute Control
Fig. 3: Belief Snapshot
Fig. 6: Phase 1: Autonomous vehicle merges first. Phase 2: Autonomous vehicle slows down to create gap for human vehicle. Phase 3: Human vehicle merges. All vehicles travel upwards.
condition as in the previous scenario, and we again include a passive observation approach as a baseline.
Fig. 7 shows snapshots of the belief under the two approaches, taken every \(10\,\mathrm{s}\). By \(70\,\mathrm{s}\), the belief for the active approach peaks at \(\varphi_{4}\), which maps to a desired headway of about \(48.27\,\mathrm{m}\), whereas that of the passive approach peaks at \(\varphi_{9}\), which maps to a desired headway of about \(108.62\,\mathrm{m}\). For reference, according to data from the Next Generation Simulation for US Highway 101 [65], the average headway for cars traveling around \(20\,\mathrm{m/s}\) is about \(42.18\,\mathrm{m}\). Although not perfectly precise, the active approach produces a much more accurate profile than the passive approach does.
Based on the probed information, the autonomous vehicle can proceed to create a gap for the human vehicle. For comparison, we simulate a baseline in which the autonomous vehicle is passive during the information-gathering process; based on its overestimated headway, it has to slow down further to create a wider gap, inducing larger perturbations on the background vehicles. According to Fig. 9, the cumulative absolute control for all three types of vehicles in the active approach is significantly lower than in the passive approach. The reductions in perturbation are \(40.36\%\), \(14.33\%\), and \(37.66\%\) for the autonomous, human, and background vehicles, respectively. According to Fig. 8, the active approach also generates less extreme velocity deviations for all three types of vehicles, which helps reduce the intensity and propagation of traffic waves [66].
Moreover, our baseline assumes that the autonomous vehicle would still overtake in this scenario. Without active probing, the autonomous vehicle is more likely to behave conservatively, waiting until all of the background vehicles have passed before switching lanes. This subjects the autonomous and human vehicles to a near-complete stop and a wait time that depends on the number of consecutive, closely spaced background vehicles behind, meaning the deviation keeps increasing as long as no large gap appears. Our active probing and influencing approach, on the other hand, is agnostic to this condition because the autonomous vehicle creates its own lane-change opportunity.
## V Conclusions
In this work, we present an active probing approach in which an autonomous agent interacts with a human agent to reveal information about the human's underlying utility and internal model. Our simulation results in autonomous driving demonstrate how the gathered information can be leveraged to improve driver experience and overall optimality compared to a passive-learning baseline. Future work could adopt learning-based methods to replace the heuristic probing objective with a more efficient and scenario-specific objective. It could also be worthwhile to relax the static-human-model assumption, empowering the autonomous agent to actively learn the human's adaptation policy.
## VI Acknowledgement
This work was supported by the CMU Argo AI Center for Autonomous Vehicle Research.
Special thanks to Mrinal Verghese and Bhaskar Krishnamachari for their insightful suggestions, Rachel Burcin for her hospitality, and the 2022 Robotics Institute Summer Scholars for their company.
Fig. 8: Velocity Deviation
Fig. 7: Belief Snapshot
Fig. 9: Cumulative Absolute Control |
2307.08816 | Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates | Discrete optimization belongs to the set of $\mathcal{NP}$-hard problems,
spanning fields such as mixed-integer programming and combinatorial
optimization. A current standard approach to solving convex discrete
optimization problems is the use of cutting-plane algorithms, which reach
optimal solutions by iteratively adding inequalities known as \textit{cuts} to
refine a feasible set. Despite the existence of a number of general-purpose
cut-generating algorithms, large-scale discrete optimization problems continue
to suffer from intractability. In this work, we propose a method for
accelerating cutting-plane algorithms via reinforcement learning. Our approach
uses learned policies as surrogates for $\mathcal{NP}$-hard elements of the cut
generating procedure in a way that (i) accelerates convergence, and (ii)
retains guarantees of optimality. We apply our method on two types of problems
where cutting-plane algorithms are commonly used: stochastic optimization, and
mixed-integer quadratic programming. We observe the benefits of our method when
applied to Benders decomposition (stochastic optimization) and iterative loss
approximation (quadratic programming), achieving up to $45\%$ faster average
convergence when compared to modern alternative algorithms. | Kyle Mana, Fernando Acero, Stephen Mak, Parisa Zehtabi, Michael Cashmore, Daniele Magazzeni, Manuela Veloso | 2023-07-17T20:11:56Z | http://arxiv.org/abs/2307.08816v2 | # Towards Accelerating Benders Decomposition via
###### Abstract
Stochastic optimization (SO) attempts to offer optimal decisions in the presence of uncertainty. Often, the classical formulation of these problems becomes intractable due to (a) the number of scenarios required to capture the uncertainty and (b) the discrete nature of real-world planning problems. To overcome these tractability issues, practitioners turn to decomposition methods that divide the problem into smaller, more tractable sub-problems. The focal decomposition method of this paper is Benders decomposition (BD), which decomposes stochastic optimization problems on the basis of scenario independence. In this paper we propose a method of accelerating BD with the aid of a surrogate model in place of an \(\mathcal{NP}\)-hard integer master problem. Through the acceleration method we observe \(30\%\) faster average convergence when compared to other accelerated BD implementations. We introduce a reinforcement learning agent as a surrogate and demonstrate how it can be used to solve a stochastic inventory management problem.
Suggested keywords: stochastic optimization, Benders decomposition.
Our proposal introduces a surrogate model to quickly generate solutions to the discrete master problem (MP) rather than relying on the mixed-integer master problem (MIMP). This surrogate generates fast solutions to unseen problems after learning the loss of decisions in similar stochastic environments. At varying rates, the MIMP is still solved to retrieve the certificate of optimality offered by BD. In total, our contributions are:
* A generalized method of accelerating BD that retrieves optimal solutions to stochastic optimization problems while drastically reducing run times.
* A solution selection method that uses cuts from BD sub-problems to inform the selection of future surrogate MP solutions, offering a further unification of the surrogate MP within the BD framework.
* A worked inventory management problem with detailed implementation of the acceleration method. We offer an explicit Benders formulation, and leverage an RL model as our surrogate MP.
* Experiments showing a \(30\%\) reduction in run-time vs alternative acceleration methods.
## 2 Background
A widely used form of stochastic optimization is Sample Average Approximation (SAA). SAA aims to approximate loss over the distribution of possible scenarios using simulation. In SAA, \(R\) scenarios are simulated, with each simulation yielding its own deterministic sub-problem with a loss function \(f(x,w,D_{r})\), where \(x\) is a set of global decisions (universal across all scenarios), \(w\) is a cost vector, and \(D_{r}\) is a set of scenario-specific parameters. The total loss of the problem is then computed as an average of the loss across all scenarios,
\[\ell(x)=\frac{1}{R}\sum_{\forall r\in R}f(x,w,D_{r}) \tag{1}\]
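For concreteness, a minimal sketch of the sample-average loss (1), assuming a user-supplied scenario loss function `f(x, w, D_r)`:

```python
import numpy as np

def saa_loss(f, x, w, scenarios):
    """Sample average approximation, equation (1): the scenario losses
    f(x, w, D_r) are averaged over the R sampled scenarios."""
    return np.mean([f(x, w, D_r) for D_r in scenarios])
```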
Despite success in a number of optimal planning domains, the struggles of scaling SO problems are well documented. For example, (Gendreau et al., 1996) note that when solving stochastic vehicle routing problems, practitioners commonly resort to comparing heuristics as exact methods become intractable. To combat scalability issues, decomposition methods are commonly employed to solve large-scale SO problems. Here we introduce the principles of Benders decomposition. Consider an SAA problem of the form:
\[\min_{x,y}c^{T}x+\frac{1}{R}\sum_{\forall r\in R}w^{T}y_{r} \tag{2}\]
subject to
\[Ax=b \tag{3}\] \[Bx+D_{r}y_{r}=g,\qquad\forall r\in R\] (4) \[x\in\mathbb{Z},y_{r}\in\mathbb{Z}^{+},\qquad\forall r\in R \tag{5}\]
where \(x\) is again our set of global decisions, \(A\), \(b\), and \(B\) are parameters that define constraints on \(x\), \(c\) is the cost of global decisions, \(D_{r}\) and \(g\) are scenario-specific parameters, \(y_{r}\) is a set of decisions made independently within each scenario, and \(w\) is a cost applied to each scenario-specific decision. In this formulation, the term \(\frac{1}{R}\sum_{\forall r\in R}w^{T}y_{r}\) corresponds to the sample-average loss (1). The first step of BD is to separate the global decision variables \(x\) and scenario-specific decision variables \(y_{r}\). This leaves us with a master problem
\[\{\min_{x,\theta}c^{T}x+\frac{1}{R}\sum_{\forall r\in R}\theta_{r}:Ax=b,x\in \mathbb{Z}^{+}\} \tag{6}\]
and a collection of \(R\) sub-problems, where for each \(r\in R\) we have
\[\{\min_{y_{r}}w^{T}y_{r}:D_{r}y_{r}=g-Bx^{*},y_{r}\in\mathbb{R}^{+}\} \tag{7}\]
The sub-problems accept a fixed \(x^{*}\) based on the solution to (6), and are solved to obtain optimal sub-problem decisions \(y_{r}\). Note that BD introduces a set of auxiliary variables \(\theta_{r},\forall r\in R\) to the master problem (6). This auxiliary variable, frequently called the recourse variable, is responsible for tracking an approximation of the sub-problem loss that has been moved to (7). Let us assume the sub-problem is always feasible. This is not a necessary assumption, but simplifies the following description of BD.
Note that integrality on \(y_{r}\) has been relaxed in the sub-problem. This relaxation is necessary for Benders decomposition, and only possible when a) the sub-problem variables were not discrete to begin with or b) the decomposition results in a totally-unimodular sub-problem structure. Taking the dual of the sub-problem, we get:
\[\{\max_{q_{r}}q_{r}^{T}(g-Bx^{*}):q_{r}^{T}D_{r}\leq w\} \tag{8}\]
The dual sub-problem has three essential properties. First, through strong duality the optimal value of (8) is equivalent to the optimal value of (7) at \(x^{*}\). Second, the objective function (8) is linear with respect to the master problem decisions \(x\). And lastly, with the optimal dual values of \(q_{r}^{*}\) we can establish
\[\{min_{y_{r}}w^{T}y_{r}:D_{r}y_{r}=g-Bx\}\geq\\ q_{r}^{*T}(g-Bx),\forall x\in\mathbb{R},\forall w\in\mathbb{R} \tag{9}\]
via weak duality. With these traits established, we see that the optimal dual SP objective \(q_{r}^{*T}(g-Bx)\) can be included as a valid constraint on \(\theta_{r}\) in the MIMP. These constraints serve as sub-gradient approximations of the SP loss. For each SP solution, we can update the MIMP with the valid constraint \(\theta_{r}\geq q_{r}^{*T}(g-Bx)\) and re-solve for a new \(x\). This process is repeated until the SPs do not offer any strengthening constraints on \(\theta_{r}\), indicating convergence and full approximation of the SP loss. Figure 1 offers a visual representation of this process.
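A skeleton of this loop is sketched below; `solve_master` and `solve_dual_subproblem` are hypothetical callables standing in for LP/MIP solver calls, and the stopping rule simply checks whether any sub-problem produced a violated cut.

```python
def benders(solve_master, solve_dual_subproblem, scenarios, tol=1e-6, max_iters=100):
    """Skeleton of the Benders loop in Figure 1.

    solve_master(cuts) -> (x, theta): solves the MIMP (6) under the cuts
        accumulated so far, returning global decisions x and recourse values theta.
    solve_dual_subproblem(x, D_r) -> (value, cut): solves (8) for scenario D_r,
        returning its optimal value and a cut usable as theta_r >= cut(x).
    """
    cuts = {r: [] for r in range(len(scenarios))}
    x = None
    for _ in range(max_iters):
        x, theta = solve_master(cuts)
        new_cut_added = False
        for r, D_r in enumerate(scenarios):
            value, cut = solve_dual_subproblem(x, D_r)
            # The dual value exceeds theta_r only if the current approximation
            # underestimates the scenario cost; the cut then strengthens the MP.
            if value > theta[r] + tol:
                cuts[r].append(cut)
                new_cut_added = True
        if not new_cut_added:
            return x          # theta fully approximates the sub-problem loss
    return x
```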
### Reinforcement Learning
Reinforcement Learning (RL) offers a powerful approach to solving combinatorial problems. (Delarue et al., 2020) gives one such example of RL applied to combinatorial problems, solving notoriously challenging capacitated vehicle routing problems using value-based methods. As shown in (Delarue et al., 2020), the benefit of RL-based methods is that after learning a near-optimal policy, they can generate actions in discrete space very quickly, albeit without a guarantee of optimality.
RL is typically based on the Markov Decision Process (MDP) framework as described by Sutton and Barto (Sutton and Barto, 2018). This can be defined by a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R}\rangle\) where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of actions, \(\mathcal{T}\) is a set of transition probabilities from state \(s\) to the next state \(s^{\prime}\), and \(\mathcal{R}\) is the reward function. In temporal environments, we can adopt the notation of \(s_{t}\in\mathcal{S}\), \(a_{t}\in\mathcal{A}\) for the state and action of a given time step \(t\).
In RL, an agent attempts to learn the optimal action in a given state. Performance is measured by the collective rewards over future states and actions. The behaviors of the agent are updated based on prior experience, and can collectively be defined by a policy, \(\boldsymbol{\pi}(s,a)\). RL algorithms can be broadly partitioned into two classes: value-based and policy-based. In value-based implementations, the policy \(\boldsymbol{\pi}(s,a)\) is derived from value-function approximation methods, where
\[Q^{\boldsymbol{\pi}}(s_{t},a_{t})=\mathbb{E}_{s_{t+1},a_{t+1},\ldots}\left[\sum_{j\geq 0}\gamma^{j}\mathcal{R}(s_{t+j},a_{t+j})\right] \tag{10}\]
is the expected cumulative reward of an action, \(\gamma\) is a discount rate placed on future reward, and an optimal policy is selected deterministically by acting greedily, \(a^{*}=\operatorname{argmax}_{a}Q^{\boldsymbol{\pi}}(s,a)\).
Rather than estimating the value-function \(Q\) and generating policies based on actions that maximize that approximation, policy-based reinforcement learning aims to optimize a functional representation of the policy \(\boldsymbol{\pi}(s,a)\). We define the functional representation of a policy as \(\boldsymbol{\pi}_{\boldsymbol{\beta}}(s,a)\), where \(\boldsymbol{\beta}\) is a set of learned parameters. Importantly, in policy-based learning the agent optimizes the parameters \(\boldsymbol{\beta}\) to generate a stochastic policy. This stochastic policy respects the fact that the cumulative reward for an action may not be deterministic, and consequently a single best action may not exist.
Work from (Sutton et al., 1999) introduces an optimization procedure for policy-based RL that updates the parameter set \(\boldsymbol{\beta}\) via an estimate of the policy gradient. A powerful variation of policy-based optimization was introduced by (Schulman et al., 2017) to avoid detrimentally large policy updates. In their method, titled Proximal Policy Optimization (PPO), policy changes are regulated by clipping the objective whenever the probability ratio \(\frac{\boldsymbol{\pi}_{\boldsymbol{\beta}^{\mathrm{new}}}(a_{t}\mid s_{t})}{\boldsymbol{\pi}_{\boldsymbol{\beta}^{\mathrm{old}}}(a_{t}\mid s_{t})}\) deviates from \(1\) by more than some \(\epsilon\).
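A minimal sketch of the standard clipped surrogate objective is shown below; the advantage estimates and the value of \(\epsilon\) are assumptions and not taken from the original implementation.

```python
import numpy as np

def ppo_clipped_objective(log_prob_new, log_prob_old, advantages, epsilon=0.2):
    """Clipped surrogate objective of PPO (Schulman et al., 2017).
    The probability ratio pi_new(a|s) / pi_old(a|s) is clipped to
    [1 - epsilon, 1 + epsilon] so a single update cannot move the policy
    too far from the one that generated the data."""
    ratio = np.exp(log_prob_new - log_prob_old)
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    # PPO maximizes the elementwise minimum of the two surrogate terms.
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))
```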
Policy-based RL is a more applicable form of RL for our proposal, as it enables a set of diverse actions to be generated in a given state. Given a requirement for non-deterministic actions, our working example implements a PPO RL algorithm with a multi-layer neural network serving as our agent. The parameter set of this network, \(\boldsymbol{\beta}\), defines our policy \(\boldsymbol{\pi}_{\boldsymbol{\beta}}\).
## 3 Accelerating Benders Decomposition
With background on BD and RL provided, we introduce our proposed method of accelerating Benders decomposition. First, we will offer specifics on how a surrogate model is used in place of the MIMP. Then, we will introduce three possible mechanisms for selecting actions from the surrogate model. Lastly, we will offer a more thorough coverage of the theoretical benefits that the surrogate model provides, and known deficiencies of BD that it addresses.
### Surrogate-MP
Recall the iterative procedure outlined in Figure 1. The SPs can be solved efficiently using any standard LP solver, but each iteration calls back to a complex MIMP. Not only is
Figure 1: Iterative procedure of Benders decomposition, alternating between a MIMP (6) and SP (8).
the MIMP \(\mathcal{NP}\)-hard, but its size also grows linearly with the number of iterations, as each iteration adds a new constraint from the SPs. Given these mechanics, there is a strong desire to a) increase the speed of each master problem iteration and b) decrease the total number of calls to the MIMP required. We achieve both results by periodically introducing a faster surrogate model in place of the MIMP (Figure 2). This surrogate model can be any model that has learned to map the stochastic input space to the discrete decision space with the intention of minimizing the problem loss. Later in the paper, we introduce an RL agent as our surrogate model to generate master problem solutions. We call this framework Surrogate-MP.
Note in this modified schema that with each iteration, the decision to use the surrogate in place of the MIMP is drawn from a Bernoulli distribution with a control parameter \(\Gamma\). If a value of 1 is returned from the Bernoulli distribution, the surrogate is used to generate global decisions. Otherwise, the standard MIMP is run and the optimality gap can be confirmed. Regardless of whether the MIMP or surrogate are used, global decisions are passed to the sub-problem and loss approximating cuts are added.
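A minimal sketch of one such master-problem iteration is given below; `select_solution` and `solve_mimp` are hypothetical callables standing in for the surrogate-selection rule and the exact solver, respectively.

```python
import random

def surrogate_mp_step(gamma, select_solution, solve_mimp, cuts):
    """One master-problem iteration of Surrogate-MP (Figure 2).

    gamma           : probability (the control parameter) of using the surrogate.
    select_solution : callable picking one decision vector from the surrogate
                      batch (greedy, weighted, or informed selection).
    solve_mimp      : placeholder for the exact MIMP solve, returning the
                      decisions and the current lower bound.
    """
    if random.random() < gamma:              # draw from Bernoulli(gamma)
        return select_solution(cuts), None   # no bound update on surrogate steps
    return solve_mimp(cuts)
```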
### Controlling Surrogate Usage
The surrogate model usage can be controlled in a variety of ways, and we offer three forms of control. These variants are aimed at answering: (1) How frequently should we use the surrogate? (2) How can we be sure the surrogate solutions are useful for convergence? (3) If surrogate actions are non-deterministic, how can we decide which actions are best to use? The three methods we implement are greedy selection, weighted selection, and informed selection. Each of these methods assumes the surrogate has generated a non-deterministic batch of actions for the given environment.
**Greedy Selection.** The greedy selection process first evaluates every surrogate solution in a batch against the expectation of demand over the horizon to estimate solution performance. At each iteration, the decision to use the surrogate is made with some probability. If the surrogate is used, we select the top-performing solution from the batch and use it as our MP solution. The solution is then removed from the batch and the process continues.
**Weighted Selection.** Rather than deterministically selecting actions based on their performance against an expectation, we can perform weighted random sampling. We again use the calculated loss of action \(i\) evaluated against expected demand, which we call \(\ell_{i}\). However, instead of selecting \(\operatorname{argmin}_{i}(\ell_{i})\) as in the greedy method, we create a probability vector with \(p(i)=\frac{1/\ell_{i}}{\sum_{\forall j\in I}1/\ell_{j}}\). Using this probability vector, we perform weighted sampling from the batch of actions each time the surrogate is called.
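A small sketch of this sampling rule, assuming strictly positive losses \(\ell_{i}\):

```python
import numpy as np

def weighted_selection(losses, rng=None):
    """Weighted random selection over the surrogate batch: solution i is drawn
    with probability p(i) = (1 / ell_i) / sum_j (1 / ell_j), so a lower expected
    loss means a higher chance of being selected."""
    rng = rng or np.random.default_rng()
    weights = 1.0 / np.asarray(losses, dtype=float)   # assumes losses > 0
    return rng.choice(len(losses), p=weights / weights.sum())
```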
**Informed Selection.** The final proposal is observed to be strongest in our experiments, and incorporates feedback from the BD sub-problems. With informed selection, surrogate solutions are selected using the constraint set currently placed on \(\theta_{r}\). The benefit of utilizing the constraint matrix to select surrogate solutions is that these constraints inherently motivate exploration toward either a) minimal or b) poorly approximated regions of the convex loss. Given that final convergence is defined by a binding subset of these constraints, it is necessary to explore these minimal or poorly approximated regions.
To describe the method, we introduce a constraint matrix \(A_{r}\in\mathbb{R}^{I\times N}\) which contains the sub-gradient approximations imposed on \(\theta_{r}\), and a row vector of constant values \(c_{r}\in\mathbb{R}^{I}\) that is added to each sub-gradient approximation. \(I\) refers to the iteration number of BD, \(N\) refers to the number of MP decision variables, and \(r\) refers to the scenario.
Note each iteration generates a new set of sub-gradient approximations that are added to the matrix. As mentioned, these are the same sub-gradients that are applied to \(\theta_{r}\) in the master problem, and are generated using our dual sub-problem. On a given iteration, we have a batch of \(M\) solutions that have been generated by the surrogate. Decisions for this batch are represented by matrix \(D\in\mathbb{Z}^{N\times M}\). We begin by computing the loss approximations of each gradient, for each of the \(M\) solutions. This is given as \(TCA_{r}\in\mathbb{R}^{I\times M}\).
\[TCA_{r}=A_{r}\cdot D+\mathbf{c_{r}}^{T}\cdot 1^{1\times M} \tag{11}\]
The \(TCA_{r}\) matrix contains approximations of the sub-problem loss for each of the \(M\) solutions, generated by each of the \(I\) constraints currently placed on \(\theta_{r}\). We can now take the maximum value in each of the \(M\) columns as the approximated
Figure 2: Iterative procedure of Surrogate-MP.
cost of solution \(m\). In LP terms, this maximum value corresponds to the binding constraint on \(\theta_{r}\) in the MIMP, and is thus our true approximation of the SP cost at that point. We represent this approximation \(\ell_{m,r}\) as:
\[\ell_{m,r}=\max_{\forall i\in I}(TCA_{r})_{im} \tag{12}\]
Now we fully approximate the expected loss for each of the \(M\) solutions by taking an average across all \(R\) scenarios, and adding the fixed loss of that decision (denoted \(f_{m}\)).
\[\ell_{m}=\frac{1}{R}\sum_{\forall r\in R}\ell_{m,r}+f_{m} \tag{13}\]
The surrogate solution that solves the problem
\[argmin_{m}\ell_{m} \tag{14}\]
is then taken as our MP solution, and passed to the sub-problem for constraint generation.
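A minimal numpy sketch of equations (11)-(14) is given below; the data layout (lists of per-scenario cut coefficients and constants) is an assumption made for illustration.

```python
import numpy as np

def informed_selection(A, c, D, fixed_costs):
    """Informed surrogate-solution selection, equations (11)-(14).

    A           : list of R arrays of shape (I, N) with the cut coefficients A_r.
    c           : list of R arrays of shape (I,) with the cut constants c_r.
    D           : array of shape (N, M) whose columns are candidate decisions.
    fixed_costs : array of shape (M,) with the fixed (deterministic) cost f_m.
    """
    R, M = len(A), D.shape[1]
    approx = np.zeros(M)
    for A_r, c_r in zip(A, c):
        TCA_r = A_r @ D + c_r[:, None]        # (11): loss approximations, shape (I, M)
        approx += TCA_r.max(axis=0)           # (12): binding constraint per solution
    losses = approx / R + np.asarray(fixed_costs)   # (13): expected loss per solution
    return int(np.argmin(losses))                   # (14): best candidate index
```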
### Benefits of Surrogate-MP
The benefits of using a surrogate model with learned actions in place of the MIMP are based on two central principles.
1. The time required to generate solutions from a pre-trained surrogate model is negligible compared to the time required to solve a large scale MIP.
2. The surrogate model has learned its actions from past exposure to the stochastic environment. As a result, sub-problem loss is expressed in surrogate model solutions regardless of how well \(\theta_{r}\) approximates SP loss. This means that even early iterations of the surrogate model will be highly reflective of sub-problem loss.
The first benefit is fairly self-explanatory; we desire faster MP solutions, and the surrogate provides them. The second benefit is more nuanced and worth expanding. We recall the general-form MIMP (6), where \(\theta_{r}\) offers an approximation of sub-problem loss that is refined through linear constraints generated by (8). It is well observed that this approximation can converge quickly if global decisions are localized to the optimal region, but it can also be very slow if global decisions are far from the optimal region or the cuts poorly approximate the loss (Crainic et al., 2016; Baena et al., 2020). At initialization, \(\theta_{r}\) has not received any feedback from the SP, and is instead bound by some heuristic or known lower bound (commonly \(\theta_{r}\geq 0\) for non-negative loss). Given the lack of information initially imparted on \(\theta_{r}\), the MP generates global solutions that lack consideration of SP loss and can be very distant from the optimal region. Similar to a gradient-based algorithm with a mis-specified learning rate, this can lead BD to oscillate around the minimal region or converge slowly, wasting compute and adding complexity with minimal benefit to the final solution (Baena et al., 2020).
The surrogate mitigates this major issue by generating global decisions that reflect an understanding of their associated SP loss without requiring strong loss approximations on \(\theta_{r}\). As a result, initial global decisions generated by the surrogate are localized to the minimal region and cuts can quickly approximate the minimum of the convex loss. These two fundamental benefits are the basis for a \(30\%\) reduction in run-times, observed in experiments with the working example that follows.
## 4 Working Example
Let us introduce an inventory management problem (IMP) as a working example. In the proposed IMP, we assume the required solutions must a) choose a delivery schedule from a finite set, b) decide an order-up-to amount (order = order-up-to - current inventory) for each order day, and c) place costly emergency orders if demand cannot be met with current inventory. For simplicity we consider a single-item, single-location ordering problem where there is a requirement to satisfy all demand using either _planned schedules_, or more costly _just-in-time emergency orders_. The demand estimate is generated using a forecast model with an error term from an unknown probability distribution.
Adaptations of the general form IMP are applied in industries ranging from financial services, to brick-and-mortar retail. In e-commerce, vendors make decisions to either assume the holding costs associated with stocking inventory near demand locations, or use more costly fulfillment options to meet consumer needs (Arslan et al., 2021). In commercial banking, cash must be held at physical locations and made available to customers when needed, with a compounding cost of capital being applied to any unused cash (Ghodrati et al., 2013). Or in commodities trading, physical assets may need to be purchased and held until a desired strike price is realized in the future (Goel and Gutierrez, 2011).
### SO Formulation and Decomposition
To model the IMP as an SO mixed-integer problem we introduce the following notation: let \(T\) be the set of days \(t\), \(R\) the set of scenarios \(r\), and \(S\) a finite set of schedules \(s\). The holding cost of an item (per unit-of-measure, per day) is \(h\), the cost of emergency services (per unit) is \(e\), the penalty applied to over-stocking (per unit over-stocked) is \(q\), and \(f_{s}\) is the fixed cost of schedule \(s\). Capacity is defined by \(m\) and starting inventory by \(y\). The parameter \(w_{st}\) indicates
whether schedule \(s\) orders on day \(t\). Demand on day \(t\) under scenario \(r\) is \(n_{tr}\).
The decision space is defined by seven sets of variables. The decision to use schedule \(s\) is made using variable \(u_{s}\in\{0,1\}\). The order-up-to amount is decided by \(a_{t}\in\mathbb{Z}^{+}\), and \(k_{tr}\in\mathbb{Z}\) is the required order quantity to meet the order-up-to amount. Inventory on hand is monitored by \(d_{tr}\in\mathbb{Z}^{+}\), the units of holding space required to stock the inventory is \(p_{tr}\in\mathbb{Z}^{+}\), the required emergency order quantity is \(o_{tr}\in\mathbb{Z}^{+}\), and \(v_{tr}\in\mathbb{Z}^{+}\) is the number of units that inventory is over-filled by (all defined \(\forall t\in T,\forall r\in R\)). The formulation of our IMP is
\[min\sum_{\forall s\in S}(u_{s}f_{s})+\frac{1}{R}\sum_{\forall r\in R}\sum_{ \forall t\in T}(p_{tr}h+o_{tr}e+v_{tr}q) \tag{15}\]
subject to:
\[d_{tr}=y-n_{tr}+k_{tr}-v_{tr}+o_{tr},t=0,\forall r\in R \tag{16}\]
\[d_{tr}=d_{t-1,r}+k_{tr}-n_{tr}+o_{tr}-v_{tr},\forall t\in\{1,...,T\},\forall r\in R \tag{17}\]
\[p_{tr}\geq y+k_{tr}-v_{tr},t=0,\forall r\in R \tag{18}\]
\[p_{tr}\geq a_{t},\forall t\in\{1,...,T\},\forall r\in R \tag{19}\]
\[p_{tr}\geq p_{t-1,r}-a_{t},\forall t\in\{1,...,T\},\forall r\in R \tag{20}\]
\[y+k_{tr}-v_{tr}\leq m,t=0,\forall r\in R \tag{21}\]
\[d_{t-1,r}+k_{tr}-v_{tr}\leq m,\forall t\in\{1,...,T\},\forall r\in R \tag{22}\]
\[k_{tr}=a_{t}-y\sum_{\forall s\in S}u_{s}w_{st},t=0,\forall r\in R \tag{23}\]
\[k_{tr}\geq a_{t}-d_{t-1,r},\forall t\in\{1,...,T\},\forall r\in R \tag{24}\]
\[k_{tr}\leq a_{t}-d_{t-1,r}+(1-\sum_{\forall s\in S}u_{s}w_{st})m,\forall t\in \{1,...,T\},\forall r\in R \tag{25}\]
\[k_{tr}\leq a_{t},\forall t\in T,\forall r\in R \tag{26}\]
\[k_{tr}\geq-\sum_{\forall s\in S}u_{s}w_{st}\times m,\forall t\in T,\forall r\in R \tag{27}\]
\[v_{tr}\leq a_{t},\forall t\in T,\forall r\in R \tag{28}\]
\[a_{t}\leq\sum_{\forall s\in S}u_{s}w_{st}\times m,\forall t\in T \tag{29}\]
\[\sum_{\forall s\in S}u_{s}=1 \tag{30}\]
The objective (15) minimizes the sum of planned schedule costs and the average of holding costs, emergency order costs, and over-fill costs across the \(R\) scenarios. Flow constraints (16) and (17) balance inflow and outflow of inventory through demand and deliveries. The holding cost is enforced by constraints (18), (19), and (20). Constraints (21) and (22) mandate that inventory cannot be filled beyond its capacity. Lastly, constraints (23), (24), (25), (26), (27), (28), and (29) ensure an order exactly fills the inventory to the optimal order-up-to-amount, and that orders are only placed on scheduled days. (30) guarantees exactly one schedule is selected.
For BD, we note that \(\mathbf{a}\) and \(\mathbf{u}\) are the order-up-to and schedule decisions that must be the same across all scenarios. As a result, \(\mathbf{a}\), \(\mathbf{u}\), (29), and (30) are contained in the MIMP while the remaining decision variables and constraints are delegated to the scenario-specific sub-problems. For brevity, we omit the primal sub-problem formulation and directly introduce the cut-generating dual sub-problem formulation. We define the dual variables in line with their related constraints: \(\boldsymbol{\alpha}\in\mathbb{R}\) [(16), (17)], \(\boldsymbol{\gamma}\in\mathbb{R}^{+}\) [(18), (19)], \(\boldsymbol{\omega}\in\mathbb{R}^{+}\) (20), \(\boldsymbol{\phi}\in\mathbb{R}^{+}\) [(21), (22)], \(\boldsymbol{\xi}^{\boldsymbol{0}}\in\mathbb{R}\) (23), \(\boldsymbol{\xi}^{\boldsymbol{lb}}\in\mathbb{R}^{+}\) (24), \(\boldsymbol{\xi}^{\boldsymbol{ub}}\in\mathbb{R}^{-}\) (25), \(\boldsymbol{\sigma}\in\mathbb{R}^{-}\) (26), \(\boldsymbol{\pi}\in\mathbb{R}^{+}\) (27), \(\boldsymbol{\beta}\in\mathbb{R}^{-}\) (28).
**Master Problem**
\[\min_{a,u,\theta}\sum_{\forall s\in S}(u_{s}\times f_{s})+\frac{1}{R}\sum_{ \forall r\in R}\theta_{r} \tag{31}\]
\[s.t.\]
\[a_{t}\leq\sum_{\forall s\in S}u_{s}w_{st}\times m,\forall t\in T \tag{32}\]
\[\sum_{\forall s\in S}u_{s}=1 \tag{33}\]
\[\theta_{r}\geq 0,\forall r\in R \tag{34}\]
**Dual Sub-problem (solved independently for each scenario \(\mathbf{r}\))**
\[\max_{\alpha,\phi,\xi^{0},\xi^{lb},\xi^{ub},\sigma,\pi}\alpha_{0r}(y-n_{0r})+\]
\[\gamma_{0r}y+\]
\[\xi^{0}_{r}(a_{0}-y\times\sum_{\forall s\in S}u_{s}w_{s0})+\]
\[\phi_{0r}(m-y)+\]
\[\sum_{t=1}^{T}(-\alpha_{tr}n_{tr}+\gamma_{tr}a_{t}-\omega_{tr}a_{t}+\phi_{tr}m+\]
\[\xi^{lb}_{tr}a_{t}+\xi^{ub}_{tr}(a_{t}+(1-\sum_{\forall s\in S}u_{s}w_{st})m))+\]
\[\sum_{\forall t\in T}(\beta_{tr}a_{t}+\sigma_{tr}a_{t}-\pi_{tr}(\sum_{\forall s \in S}u_{s}w_{st}\times m)) \tag{35}\]
\[s.t.\]
\[\alpha_{tr}\leq 0,t=T \tag{36}\]
\[\alpha_{tr}-\alpha_{t+1,r}+\phi_{t+1,r}+\xi^{ub}_{t+1,r}+\xi^{lb}_{t+1,r}\leq 0,\forall t\in\{0,...,T-1\} \tag{37}\]
\[\gamma_{tr}-\omega_{t+1,r}\leq h,t=0 \tag{38}\]
\[\gamma_{tr}+\omega_{tr}\leq h,t=T \tag{39}\]
\[\gamma_{tr}-\omega_{t+1,r}+\omega_{tr}\leq h,\forall t\in\{1,...,T-1\} \tag{40}\]
\[\alpha_{tr}+\beta_{tr}-\phi_{tr}+\gamma_{tr}\leq q,t=0 \tag{41}\]
\[\alpha_{tr}+\beta_{tr}-\phi_{tr}\leq q,\forall t\in\{1,...,T\} \tag{42}\]
\[-\alpha_{tr}\leq e,\forall t\in T \tag{43}\]
\[\xi_{r}^{0}-\gamma_{tr}+\phi_{tr}-\alpha_{tr}+\sigma_{tr}+\pi_{tr}=0,t=0 \tag{44}\]
\[\xi_{tr}^{lb}+\xi_{tr}^{ub}+\phi_{tr}-\alpha_{tr}+\sigma_{tr}+\pi_{tr}=0, \forall t\in\{1,...,T\} \tag{45}\]
Let us refer to the polyhedron defined by MP constraints at iteration \(i\) as \(\mathcal{P}_{i}\). The master problem generates optimal decisions \(\mathbf{a}^{*}\) and \(\mathbf{u}^{*}\) given the current approximation of sub-problem costs on \(\theta\). The objective function of the dual sub-problem (referred to as \(\mathcal{L}(\mathbf{a},\mathbf{u},r)\), where \(r\) is the scenario) is updated with \(\mathbf{a}^{*}\) and \(\mathbf{u}^{*}\), and the sub-problem is solved. Recalling the mechanics of BD, the optimal solution to the dual sub-problem has two valuable properties: a) as a numeric value it defines the true scenario-specific costs, and b) as a function it offers a sub-gradient on \(\theta_{r}\). The master problem polyhedron is then updated to \(\mathcal{P}_{i+1}=\mathcal{P}_{i}\cap\{\mathbf{u},\mathbf{a},\mathbf{\theta}:\theta_{r}\geq\mathcal{L}^{*}(\mathbf{a},\mathbf{u},r)\}\), where \(\mathcal{L}^{*}(\mathbf{a},\mathbf{u},r)\) refers to the optimized loss function of the sub-problem iteration. This process is repeated until convergence, with each iteration of the MP being solved over a more refined approximation of sub-problem costs.
### RL Surrogate - Formulation
We leverage an RL agent as the surrogate model in our Surrogate-MP implementation. The state of our IMP is represented by the tuple \(s_{t}=\langle d,h,e,q,m,\mu,\sigma,\mathbf{w},\mathbf{o},\mathbf{r}\rangle\in\mathcal{S}\), where \(t\) is a time step over the horizon \(T\). Parameters \(d\), \(h\), \(e\), \(q\), \(\mathbf{w}\), and \(m\) directly follow the definitions introduced in the SO Formulation and Decomposition section. Additional state parameters include \(\mu\) as the expected demand, and \(\sigma\) as the estimated standard deviation of demand. A vector \(\mathbf{o}\) tracks orders over the time horizon \(T\). All future orders are set to zero, and past orders are taken from actions as they are performed. Similarly, a vector \(\mathbf{r}\) tracks the forecast errors from past observations. All future error observations are set to zero, and entries are populated as errors are observed.
The actions are represented by \(\langle\mathbf{k}_{t},\mathbf{u}_{t}\rangle\in\mathcal{A}\) which denotes (a) the quantity to order, and (b) the schedule to adhere to, at time \(t\) respectively. Note that the schedule must be determined at the beginning of the horizon, and thus only \(\mathbf{u}_{t=0}\) is relevant. This is enforced through action masking and for simplicity we will refer to \(\mathbf{u}_{t=0}\) as \(\mathbf{u}\). The reward is negative cost, as defined by the objective (15).
As previously mentioned, we use PPO to optimize a multi-layer neural network as our agent. The network is a feed-forward neural network with two hidden layers and two linear output layers. The linear output layers return the log-odds that define our stochastic action space. We standardize the network inputs (the state) to be mean-centered with unit variance, and generate \(\mathbf{k}_{t}\) and \(\mathbf{u}_{t}\) sequentially for all \(t\in T\).
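A rough PyTorch sketch of such a network is shown below; the hidden width and the treatment of order quantities as a discrete categorical head are illustrative assumptions, not the authors' exact architecture.

```python
import torch.nn as nn

class SurrogateAgent(nn.Module):
    """Feed-forward policy network sketch: two hidden layers and two linear
    output heads returning log-odds over (a) order quantities and (b) schedules."""

    def __init__(self, state_dim, n_order_levels, n_schedules, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.order_head = nn.Linear(hidden, n_order_levels)   # log-odds of order amounts
        self.schedule_head = nn.Linear(hidden, n_schedules)   # log-odds of schedules

    def forward(self, state):
        z = self.body(state)
        return self.order_head(z), self.schedule_head(z)
```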
The agent is presented with an initial state \(s_{0}\) and must select a scheduling action to take. This scheduling action, \(\mathbf{u}\), relates to a binary vector \(\mathbf{w}\in\{0,1\}^{T}\) that defines whether an order is possible on day \(t\). If \(\mathbf{w}_{t}=1\), an order can be placed, otherwise the agent cannot order. This schedule becomes part of the state, over-writing the initial zero vector \(\mathbf{w}\).
With the schedule defined, the agent must generate a second action for state \(s_{0}\); this time selecting an order amount. The repeated visitation of state \(s_{0}\) is necessary as the selected schedule \(\mathbf{w}\) has now become part of the state. While we have not temporally shifted, the state has changed.
After a second visitation of \(s_{0}\), the agent sequentially traverses the horizon \(T\). With each time step, an ordering decision is made and either accepted or masked depending on the schedule vector \(\mathbf{w}\). Updates to the state include population of order quantities, residual updates, and an update of the inventory on hand based on observed demand and order amounts. If an order is scheduled, we retrieve the order-up-to amounts, denoted as \(\mathbf{a}\) in the SO Formulation section, by adding inventory on hand at the end of \(t-1\) to the ordering decision \(k_{t}\). If an order is not scheduled, the order-up-to amount is 0. After traversing the full horizon \(T\), the agent will have selected a schedule \(\mathbf{u}\in\{0,1\}^{S}\) and produced a vector of order-up-to quantities \(\mathbf{a}\in\mathbb{Z}_{\geq 0}^{T}\). These two decision vectors, \(\mathbf{u}\) and \(\mathbf{a}\), are the essential ingredients required by the BD sub-problem. With these vectors, we can solve (35), generate sub-gradient approximations on \(\theta_{r}\), and further refine our approximation of the true stochastic sub-problem cost.
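A simplified rollout sketch that converts a sampled trajectory of agent actions into the \((\mathbf{u},\mathbf{a})\) pair is given below; it ignores the capacity constraint and treats any shortfall as an emergency order, which are simplifying assumptions.

```python
import numpy as np

def rollout_decisions(order_actions, schedule_index, w, y, demand):
    """Convert agent actions into the (u, a) vectors used by the sub-problem (35).

    order_actions  : length-T array of order quantities k_t proposed by the agent.
    schedule_index : index of the schedule chosen at t = 0.
    w              : (S, T) binary matrix; w[s, t] = 1 if schedule s orders on day t.
    y              : starting inventory.
    demand         : length-T array of demand realizations used for the rollout.
    """
    T, S = len(order_actions), w.shape[0]
    u = np.zeros(S, dtype=int)
    u[schedule_index] = 1
    a = np.zeros(T, dtype=int)
    inventory = y
    for t in range(T):
        if w[schedule_index, t] == 1:                 # order only on scheduled days
            a[t] = inventory + order_actions[t]       # order-up-to = on-hand + order
            inventory += order_actions[t]
        inventory = max(inventory - demand[t], 0)     # shortfall covered by emergency orders
    return u, a
```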
## 5 Experiments
To evaluate the Surrogate-MP method, we implement our IMP formulation across 153 independent test cases using real-world data. Each experiment was performed with a sample size of 500 scenarios (\(R=500\)), a horizon of 28 days (\(T=28\)), and 169 possible schedules (\(S=169\)). The resultant problem has a high-dimensional discrete decision space, consisting of scheduling and ordering decisions. In total, the decision space is \(\mathbb{Z}^{70,197}\). Experiments were run on a 36 CPU, 72 GB RAM c5.9xlarge AWS instance. For solving the integer master problem and linear sub-problems, we leveraged the CPLEX commercial solver with default settings, allowing for distribution across the 36-CPU
machine. We experimented with all three surrogate solution-selection methods: greedy, weighted random, and informed. For every implementation of Surrogate-MP, we deactivate calls to the surrogate once the optimality gap is \(\leq 5\%\). The intuition behind deactivating the surrogate model is that as the gap shrinks, the MIMP must be used to retrieve the certificate of optimality.
As a benchmark, we evaluate our method against a baseline implementation of Benders decomposition. Accelerations implemented in the baseline include scenario group cuts (Adulyasak et al., 2015) and partial decomposition (Crainic et al., 2016). We did not compare against a generic implementation of Benders decomposition due to tractability issues.
## 6 Results
All three implementations (greedy, weighted random, and informed) produced faster convergence than the benchmark BD implementation. The weighted random implementation performed \(14.96\%\) faster than the baseline (104.51s average run-time), the greedy implementation was \(19.43\%\) faster (99.92s average run-time), and the informed implementation was \(30.45\%\) faster (85.47s average run-time). The convergence rates are displayed in Figure 3.
In addition to achieving faster average convergence, Surrogate-MP outperformed the baseline BD implementation across the majority of instances. Surrogate-MP with informed selection achieved better convergence rates on 135 of the 153 instances (\(88.24\%\), Figure 4).
## 7 Conclusion & Future Work
In conclusion, by inserting a surrogate model in place of the MIMP we achieve a drastic reduction in convergence time. The proposed method is generalizable to any BD implementation, retrieves certificates of optimality, and any surrogate capable of generating MP solutions can be used. We leverage an RL agent as our surrogate, and display results showing superiority in \(88.24\%\) of instances with a \(30\%\) reduction in average run time.
Observing the performance of our method, a promising extension of this work would be to design stronger integration between the surrogate model, SP, and MP. We took steps toward integration with the informed method of selecting surrogate solutions, and realized promising results. Some opportunities for integration we leave unexplored would be to directly inform the surrogate model on the strength of past solutions, offer sub-gradient information as a feature, or redesign the surrogate objective function to focus on weakly approximated areas of the SP loss as opposed to mirroring the BD objective directly. We are additionally eager to observe the performance of Surrogate-MP on other discrete SO problems.
**Disclaimer.** This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates ("JP Morgan"), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
|